To start the new year, I provide a link to a discussion I had with Simon White on Phil Halper’s YouTube channel:
In this post I’ll say little that we don’t talk about, but will add some background and mildly amusing anecdotes. I’ll also try addressing the one point of factual disagreement. For the most part, Simon & I entirely agree about the relevant facts; what we’re discussing is the interpretation of those facts. It was a perfectly civil conversation, and I hope it can provide an example for how it is possible to have a positive discussion about a controversial topic+ without personal animus.
First, I’ll comment on the title, in particular the “vs.” This is not really Simon vs. me. This is a discussion between two scientists who are trying to understand how the universe works (no small ask!). We’ve been asked to advocate for different viewpoints, so one might call it “Dark Matter vs. MOND.” I expect Simon and I could swap sides and have an equally interesting discussion. One needs to be able to do that in order to not simply be a partisan hack. It’s not like MOND is my theory – I falsified my own hypothesis long ago, and got dragged reluctantly into this business for honestly reporting that Milgrom got right what I got wrong.
For those who don’t know, Simon White is one of the preeminent scholars working on cosmological computer simulations, having done important work on galaxy formation and structure formation, the baryon fraction in clusters, and the structure of dark matter halos (Simon is the W in NFW halos). He was a Reader at the Institute of Astronomy at the University of Cambridge where we overlapped (it was my first postdoc) before he moved on to become the director of the Max Planck Institute for Astrophysics where he was mentor to many people now working in the field.
That’s a very short summary of a long and distinguished career; Simon has done lots of other things. I highlight these works because they came up at some point in our discussion. Davis, Efstathiou, Frenk, & White are the “gang of four” that was mentioned; around Cambridge I also occasionally heard them referred to as the Cold Dark Mafia. The baryon fraction of clusters was one of the key observations that led from SCDM to LCDM.
The subject of galaxy formation runs throughout our discussion. How things form is always a fraught issue in astronomy. It is one thing to understand how stars evolve once they are made; making them in the first place is another matter. Hard as that is to do in simulations, galaxy formation involves the extra element of dark matter in an expanding universe. Understanding how galaxies come to be is essential to predicting anything about what they are now, at least in the context of LCDM*. Both Simon and I have worked on this subject our entire careers, in very much the same framework if from different perspectives – by which I mean he is a theorist who does some observational work while I’m an observer who does some theory, not LCDM vs. MOND.
When Simon moved to Max Planck, the center of galaxy formation work moved as well – it seemed like he took half of Cambridge astronomy with him. This included my then-office mate, Houjun Mo. At one point I refer to the paper Mo & I wrote on the clustering of low surface brightness galaxies and how I expected them to reside in late-forming dark matter halos**. I often cite Mo, Mao, & White as a touchstone of galaxy formation theory in LCDM; they subsequently wrote an entire textbook about it. (I was already warning them then that I didn’t think their explanations of the Tully-Fisher relation were viable, at least not when combined with the effect we have subsequently named the diversity of rotation curve shapes.)
When I first began to worry that we were barking up the wrong tree with dark matter, I asked myself what could falsify it. It was hard to come up with good answers, and I worried it wasn’t falsifiable. So I started asking other people what would falsify cold dark matter. Most did not answer. They often had a shocked look like they’d never thought about it, and would rather not***. It’s a bind: no one wants it to be false, but most everyone accepts that for it to qualify as physical science it should be falsifiable. So it was a question that always provoked a record-scratch moment in which most scientists simply froze up.
Simon was one of the first to give a straight answer to this question without hesitation, circa 1999. By then it was clear that dark matter halos formed central density cusps in simulations, so those cusps “had to exist” in the centers of galaxies. At the time, we believed that to mean all galaxies. The question was complicated by the large dynamical contribution of stars in high surface brightness galaxies, but low surface brightness galaxies were dark matter dominated down to small radii. So we thought these were the ideal place to test the cusp hypothesis.
We no longer believe that. After many attempts at evasion, cold dark matter failed this test; feedback was invoked, and the goalposts started to move. There is now a consensus among simulators that feedback in intermediate mass galaxies can alter the inner mass distribution of dark matter halos. Exactly how this happens depends on who you ask, but it is at least possible to explain the absence of the predicted cusps. This goes in the right direction to explain some data, but by itself does not suffice to address the thornier question of why the distribution of baryons is predictive of the kinematics even when the mass is dominated by dark matter. This is why the discussion focused on the lowest mass galaxies where there hasn’t been enough star formation to drive the feedback necessary to alter cusps. Some of these galaxies can be described as having cusps, but probably not all. Thinking only in those terms elides the fact that MOND has a better record of predictive success. I want to know why this happens; it must surely be telling us something important about how the universe works.
The one point of factual disagreement we encountered had to do with the mass profile of galaxies at large radii as traced by gravitational lensing. It is always necessary to agree on the facts before debating their interpretation, so we didn’t press this far. Afterwards, Simon sent a citation to what he was talking about: this paper by Wang et al. (2016). In particular, look at their Fig. 4:

This plot quantifies the mass distribution around isolated galaxies to very large scales. There is good agreement between the lensing observations and the mock observations made within a simulation. Indeed, one can see an initial downward bend corresponding to the outer part of an NFW halo (the “one-halo term”), then an inflection to different behavior due to the presence of surrounding dark matter halos (the “two-halo term”). This is what Simon was talking about when he said gravitational lensing was in good agreement with LCDM.
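As a reminder of the shape behind that description (this is just the standard NFW form, not a fit to the Wang et al. data):

$$\rho_{\rm NFW}(r) = \frac{\rho_s}{(r/r_s)\,(1 + r/r_s)^2},$$

which falls as r⁻¹ well inside the scale radius r_s and steepens to r⁻³ well outside it. That outer steepening is the downward bend of the one-halo term; the subsequent flattening to shallower behavior comes from neighboring halos (the two-halo term), not from the NFW profile itself.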
I was thinking of a different, closely related result. I had in mind the work of Brouwer et al. (2021), which I discussed previously. Very recently, Dr. Tobias Mistele has made a revised analysis of these data. That’s worthy of its own post, so I’ll leave out the details, which can be found in this preprint. The bottom line is in Fig. 2, which shows the radial acceleration relation derived from gravitational lensing around isolated galaxies:

This plot quantifies the radial acceleration due to the gravitational potential of isolated galaxies to very low accelerations. There is good agreement between the lensing observations and the extrapolation of the radial acceleration relation predicted by MOND. There are no features until extremely low acceleration, where there may be a hint of the external field effect. This is what I was talking about when I said gravitational lensing was in good agreement with MOND, and that the data indicated a single halo with an r⁻² density profile that extends far out where we ought to see the r⁻³ behavior of NFW.
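For context, the relation being extrapolated can be written in the commonly used fitting form (with the acceleration scale g† ≈ 1.2 × 10⁻¹⁰ m s⁻²):

$$g_{\rm obs} = \frac{g_{\rm bar}}{1 - e^{-\sqrt{g_{\rm bar}/g_\dagger}}} \;\longrightarrow\; \sqrt{g_{\rm bar}\,g_\dagger} \quad (g_{\rm bar} \ll g_\dagger).$$

In that low-acceleration limit the implied rotation curve is asymptotically flat, which corresponds to an effective density profile ρ ∝ r⁻² with no outer steepening – the sense in which the lensing data indicate a single r⁻² halo rather than the r⁻³ fall-off of NFW.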
The two plots above use the same method applied to the same kind of data. They should be consistent, yet they seem to tell a different story. This is the point of factual disagreement Simon and I had, so we let it be. No point in arguing about the interpretation when you can’t agree on the facts.
I do not know why these results differ, and I’m not going to attempt to solve it here. I suspect it has something to do with sample selection. Both studies rely on isolated galaxies, but how do we define that? How well do we achieve the goal of identifying isolated galaxies? No galaxy is an island; at some level, there is always a neighbor. But is it massive enough to perturb the lensing signal, or can we successfully define samples of galaxies that are effectively isolated, so that we’re only looking at the gravitational potential of that galaxy and not that of it plus some neighbors? Looks like there is some work left to do to sort this out.
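To make the selection issue concrete, here is a minimal sketch of one way an isolation cut could be implemented. The thresholds, field names, and mass-ratio criterion are hypothetical illustrations, not the criteria used by Wang et al. or Brouwer et al.:

def is_isolated(target, neighbors, r_max_mpc=1.0, dv_max_kms=500.0, min_mass_ratio=0.1):
    """Toy isolation cut: reject the target if any sufficiently massive neighbor
    lies within a projected radius and line-of-sight velocity window.
    All thresholds and field names are illustrative."""
    for nb in neighbors:
        close_on_sky = nb["r_proj_mpc"] < r_max_mpc
        close_in_velocity = abs(nb["dv_kms"]) < dv_max_kms
        massive = nb["mstar"] >= min_mass_ratio * target["mstar"]
        if close_on_sky and close_in_velocity and massive:
            return False  # such a neighbor could contaminate the lensing signal
    return True

# Example: a low-mass companion nearby does not spoil isolation under these toy thresholds.
target = {"mstar": 5e10}
neighbors = [{"r_proj_mpc": 0.4, "dv_kms": 120.0, "mstar": 2e9}]
print(is_isolated(target, neighbors))  # True

Seemingly small differences in choices like these change which galaxies enter the stacked lensing signal, which is one plausible way the two analyses could diverge at large radii.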
Stepping back from that, we agreed on pretty much everything else. MOND as a fundamental theory remains incomplete. LCDM requires us to believe that 95% of the mass-energy content of the universe is something unknown and perhaps unknowable. Dark matter has become familiar as a term but remains a mystery so long as it goes undetected in the laboratory. Perhaps it exists and cannot be detected – this is a logical possibility – but that would be the least satisfactory result possible: we might as well resume counting angels on the head of a pin.
The community has been working on these issues for a long time. I have been working on this for a long time. It is a big problem. There is lots left to do.
+I get a lot of kill-the-messenger from people who are not capable of discussing controversial topics without personal animus. A lot – inevitably from people who assume they know more about the subject than I do but actually know much less. It is really amazing how many scientists equate me as a person with MOND as a theory without bothering to do any fact-checking. This is logical fallacy 101.
*The predictions of MOND are insensitive to the details of galaxy formation. Though galaxy formation is of course an interesting question, we don’t need to understand it in order to make predictions. All we need is the mass distribution that the kinematics respond to – we don’t need to know how it got that way. This is like the solar system, where it suffices to know Newton’s laws to compute orbits; we don’t need to know how the sun and planets formed. In contrast, one needs to know how a galaxy was assembled in LCDM to have any hope of predicting what its distribution of dark matter is and then using that to predict kinematics.
**The ideas Mo & I discussed thirty years ago have reappeared in the literature under the designation “assembly bias.”
***It was often accompanied by “why would you even ask that?” followed by a pained, constipated expression when they realized that every physical theory has to answer that question.
Does “low surface brightness galaxy” correlate with “old galaxy”? Thanks.
The low surface brightness galaxies I studied and referred to in the conversation are gas rich, late type spiral and irregular galaxies in the field. Their stellar populations are younger, on average, than those of high surface brightness spirals. This was one of the lines of evidence that led me to suspect they formed in late-forming dark matter halos. This property, like so many things, carries over to MOND: they may take longer to form, but no dark matter need be involved.
There is a population of early type, spheroidal galaxies that are mostly (though not exclusively) old. Those galaxies tend to be strongly clustered, e.g., the dwarf satellites of the Local Group. These come up when Simon mentions very low mass galaxies that should still have cusps.
Primarily, low surface brightness correlates with low stellar mass surface density, modulo only the stellar population mass-to-light ratio. Scatter in the latter went a long way to obfuscate the MOND effect in bright galaxies, but it became very obvious as we obtained data for low surface brightness galaxies.
What a fantastic discussion! Thanks very much to all 3 of you! At some point we (you and your colleague) will figure this out. I have to confess, my bet is on a dynamics solution – we know that Einstein refines Newton, and we know that Einstein breaks down at the quantum scale, as well as at the black hole singularity. I think it was Sean Carroll I first heard use the term UT – Underlying Theory – to refer to this next level of gravitational complexity. I look forward to watching it be discovered (hurry up, I am getting on in years 🙂 )!
One does lose patience.
The community seems to be fracturing along lines of age and patience… Many in the particle dark matter community seem to have lost faith in WIMPs and have moved on to inventing other candidates (a potentially endless task) while others persist in making ever larger, more sensitive detectors – not because they retain the faith they had in the WIMP miracle when they started the industry, but because that’s what they do, and they’ve looked so hard already, so maybe if they just look a little harder…
I was unaware Sean used the term UT. It is a pretty obvious thing to do. Indeed, it remains a puzzle to me why people who are convinced there has to be some deeper theory of quantum gravity are also some of the most unwilling to embrace new gravitational physics when it stares them in the face.
Oh yeah – I wrote about the hierarchy of theories shortly after I started this blog: https://tritonstation.com/2016/06/27/what-is-theory/ There are similar sentiments on my older MOND pages: http://astroweb.case.edu/ssm/mond/
Agree, this is a wonderfully informative discussion, and one that I am sending to all of my acquaintances and colleagues that I have debated this issue with. My only, very mild complaint, is that I think you, Stacy, were a little too generous about the disagreement. Although touched on briefly, the contrast between MOND as a zero-parameter fit versus LCDM requiring hundreds?? of adjustable parameters in order to get fits that are still not very satisfying, might have been emphasized more. I think the naïve viewer of this discussion might not appreciate this difference between the two approaches.
It is hard to strike the right balance between persuasive communication and giving offense. These conversations can go badly wrong, and quickly. One of the reasons I agreed to do it was because I know Simon is reasonable, knowledgeable, and actually responds to evidence and the application of the scientific method. There are a lot of scientists for whom that is not the case, e.g., http://astroweb.case.edu/ssm/mond/carrollcorrespondence.html
That said, I’m inclined to agree with your assessment. The problem with dark matter is not that it can’t explain the data; it is that there is no data it can’t explain. The Frenk Principle is nearing 30 (see http://astroweb.case.edu/ssm/quotes.html)
I didn’t become interested in MOND myself until I had convinced myself dark matter models were unworkable, largely for the reasons you state. Indeed, I’m most proud of my work on dark matter, showing how problematic these things are (e.g., https://arxiv.org/abs/astro-ph/9801123; https://arxiv.org/abs/2004.14402). But one has to be open to the discussion before one can have it, and most people don’t know what I know in this regard, and refuse to look, especially if you press too hard. It is one of those things everyone has to work out for themselves, and what makes each person go to the bother is specific to each person. Which is to say, I don’t think it is possible to ever get this completely right.
Ouch, sorry to see Sean Carroll reacting like that to the discussion. I have really enjoyed several of his books, and a bunch of his videos. I knew he was strongly pro-LCDM, but he seems a tich unwilling to debate. Darn
LCDM is, and always has been, a curve fitting exercise, whereas MOND is fitting a single curve to data. Neither curve set is explanative. In order for an Underlying Theory to emerge, the major players must admit the Cold Dark exercise has failed: There is no room for the age and weight of structure we observe within the DM/DE constraints, so there is nothing further to be gained from these models that rely upon clearly unphysical assumptions.
You are right that it is the unphysical assumptions of the standard model that are the source of the need for dark matter and dark energy. As with Ptolemy’s epicycles, DM & DE are there to fit reality to the model’s assumptions.
For Ptolemy the assumptions were geocentrism and perfectly circular orbits; for the standard model they are the expanding-universe assumptions – that the Cosmos is a simultaneously existing, causally connected entity, and that the cause of the redshift-distance relationship is some form of recessional velocity.
Neither assumption is defensible under our current state of knowledge and the combination of the two has produced a cosmological model that is an empirically baseless mathematical contraption that does not, in any way, accurately portray the Cosmos we observe.
Unfortunately, no one in the academic community is allowed to say such a thing without endangering their career. Scientists like Dr McGaugh, who make actual observations, are not allowed to challenge the consensus math-based model and its axioms – under pain of excommunication.
That sociopathic situation, driven by mathematicism, is the underlying source of the crisis in theoretical physics. Modern theoretical physics is trapped in a web of self-deception woven of imaginary entities and events that are not part of empirical reality.
“… Stepping back from that, we agreed on pretty much everything else. MOND as a fundamental theory remains incomplete. LCDM requires us to believe that 95% of the mass-energy content of the universe is something unknown and perhaps unknowable.”
saying that “… 95% of the mass-energy content … is unknown …” kind of seems to me to qualify as an “incomplete theory” as well if we can’t tell what 95% of everything IS.
Yes, but it is a familiar unknown, while MOND is an unfamiliar unknown. This difference may seem trivial, but it is important to the psychology of people working in the field. The existence of dark matter has proceeded from a suspicious extravagance to a necessary evil to Received Wisdom. It is familiar to all workers in the field, providing a context in which to do everything else. Questioning that context requires real work.
Read the “infinite universe” special of New Scientist today: https://www.newscientist.nl/product/oneindig-heelal-special-new-scientist/
Two articles caught my attention. The first was about planet 9: in theory there cannot have been enough matter at that spot in the solar system for the planet to form. So they wrote about research suggesting it might instead be a black hole of 3 cm diameter, formed shortly after the big bang and captured by the solar system. Even if many such black holes exist, the probability of this would be small (I’d guess tiny). The idea that many such black holes exist would, they said, maybe also solve the dark matter problem: that dark matter consists of such MACHOs.
The second article was about problems with the universe as a whole: that it is slightly lopsided, and the Hubble tension. What I found very nice was that the researchers in the article admitted that updating lambda-CDM to a better new theory was the only real solution, and that the universe is clearly not understood very well yet.
But I post because in both articles, despite both considering many perspectives, MOND wasn’t even named or considered as a possible solution. Is the atmosphere in astronomy really such that any tendency toward MOND is the end of your reputation as an astronomer? It’s obvious to me that MOND might very well solve the need for planet 9, and the Hubble tension as well. Or is it simply not knowing about, or not believing, that these MOND solutions are possible? It’s like the general opinion is set up to state that MOND is not actual science.
What you point out is similar to what Merritt pointed out in his book: no one ever mentions a0, much less MOND. It’s a classic case of don’t-look, don’t-listen, don’t-speak. So yes, the general opinion is set up to state that MOND is not actual science. It is Known, Khaleesi. No need to check any facts.
I was very skeptical of MOND when I first heard of it. The difference is that I checked, and the more I learned, the harder it was to dismiss out of hand.
We had a long discussion at the end of a conference in March during which I asked why MOND got so many predictions right if it was wrong. The only answer was uncomfortable silence.
Two days ago, The Astrophysical Journal published another article by Kyu-Hyun Chae about MOND and wide binaries:
https://iopscience.iop.org/article/10.3847/1538-4357/ad0ed5
Hoping for commentary on the Nube galaxy (a very large low surface brightness galaxy) sometime in the not too distant future. See https://www.aanda.org/articles/aa/full_html/2024/01/aa47667-23/aa47667-23.html
Sounds like a nifty LSB galaxy, but the data presented merely establish that it exists; they aren’t good enough to use to test either LCDM or MOND. There is even some confusion about where the HI emission is coming from, i.e., whether it is part of this object or not. Probably?
A paper by Jonathan Fay, “On Sciama 1953,” was posted to his site on January 3, 2024: https://jonathanfay.com/articles-2/
In it he discusses ‘Dennis Sciama’s simple Machian model of inertia and gravitation and draw(s) out its philosophical implications.’
These lead to an interesting derivation of the Newtonian gravitational constant (eq 48) which includes a small term that might provide a theoretical underpinning for the existence of the MOND ‘constant’ a0. There is an interesting concluding statement:
“The most groundbreaking result of Sciama’s model however, is that gravity necessarily exists as a direct result of Mach’s principle, that is, of the necessity to define a law of inertia in a wholly relational manner, in the absence of any absolute physical structures. Since the mass of the universe is not homogeneously distributed, the inertial field which defines motion in the absence of other forces, cannot be homogeneous either. The expression of this necessary inhomogeneity of the inertial field is simply what ‘gravity’ is. This is the key feature that distinguishes Sciama’s model from other gravitational theories such as general relativity theory. Gravity arises in this model necessarily, it is not something we have to assume before hand and fit to known data. There is no ‘Newtonian fit’.”
Thanks for posting the link Rob. After reading the paper, I did a search into Dennis Sciama’s background and history, which led me in turn to Mach’s and Leibniz’s views on Newton’s theories of gravitation. After further rummaging around, I came upon an interesting paper recently released (Aug 2023) on the arXiv server: Aspects of Machian Gravity (I): A Mathematical Formulation for Mach’s Principle
Although the maths in the paper are a bit beyond this old (non-astronomy) physicist, the line of reasoning being taken seems both intuitive and logical. Although, as the authors point out, there is still much work to do, the possibility of a Machian-based approach to gravity/inertia becoming the physical basis underlying MOND observations seems promising. This gives me hope that we may finally be coming out of the ‘dark ages’.
More evidence for structures that violate the Cosmological Principle
https://www.bbc.co.uk/news/science-environment-67950749
The report says the findings have been presented at the 243rd meeting of the American Astronomical Society (AAS) in New Orleans.
These really large-scale structures keep turning up. They are genuinely surprising in LCDM, yet people keep explaining them away, or not even bothering. I would think this sort of thing would make an impression, large scale structure being the supposed success of LCDM and all, but it seems more like water off a duck’s back.
The presence of hierarchical structures in Nature is a direct indication that the “laws” governing high-level structures are “decoupled” (independent/irreducible; see the decoupling theorem in QFT) from the laws governing their “elementary” components.
This was already explicitly stated by P. W. Anderson in his famous, but still not fully internalized, paper More Is Different: Broken symmetry and the nature of the hierarchical structure of science.
These hierarchies and their emergent properties/behaviors are all around us, from the emergence of the classical world from large assemblies of quantum objects, the emergence of life, intelligence (us) and then the large structures in the universe.
This has also been shown to be the case in formal mathematics, with Chaitin’s results showing that complexity is a source of incompleteness (irreducibility).
It seems that complexity is a boundary for the predictive/explanatory power of any theory, this is fully consistent with the hierarchical structure of Science.
It will be an exercise in futility to try to use quantum mechanics to fully model living beings; that is impossible even in principle. The many hierarchical layers between simple quantum systems and living beings introduce many new emergent properties that are decoupled from the laws governing simple quantum systems, and these new emergent properties can only be discovered by direct observations/experiments.
This obviously implies that naive Reductionism is intrinsically limited: there’s no theory of everything, or any underlying theory that covers multiple hierarchical layers of Nature, as each hierarchical layer will have new irreducible emergent properties/behaviors.
It seems that Reality is a lot richer than modern-day theoreticians are willing to accept.
That was a nice discussion, thanks for doing it and posting it here. I often wanted you to go on something like the Lex Fridman podcast, but this was better.
Another interesting discrepancy in the Gamma background.
https://iopscience.iop.org/article/10.3847/2041-8213/acfedd
I found Leonard Susskind’s recent paper very satisfying indeed.
https://www.mdpi.com/2218-1997/9/8/368
“An observer (Figure 7) watching this take place would literally observe the dS turn itself inside out—the tiny black hole growing and becoming the surrounding cosmic horizon while the cosmic horizon shrinks to a tiny black hole (or no black hole at all).”
“The dS-matrix theory shows that the hologram is a single system, with a number of degrees of freedom as large as the largest area of the cosmic horizon when it forms a single connected whole. It is large enough to describe any state of the system but is not much larger. In that sense, it may be identified with the horizon of the dominant saddle point in the path integral. In individual branches of the wave function, no single component of the horizon may be large enough to describe the whole, but the hologram itself is described.”
Why couldn’t black holes be an actual consequence of the existence of dark matter? Maybe when it gets to the cusp, it gets captured.
Are you asking if the dark matter halo could have created the central black hole, or are you asking if the dark matter halo contains smaller black holes somehow formed from the diffuse dark matter?
Cold dark matter could in principle be composed of black holes. There are several large problems with this hypothesis. A problem in principle is how to make them in sufficient numbers and with adequate mass, beyond a magical invocation of their existence: it is not easy to cram lots of matter into a tiny volume. An empirical problem is that observations of gravitational microlensing now exclude compact objects (including black holes) as being the dark matter over an enormous mass range: they can’t be planetary mass or stellar mass or lots of other masses, so there is a serious Goldilocks problem where we need something we can’t form in the first place to exist with exactly the right properties that have allowed it to escape detection. Besides those general problems, my objection is that as an effective form of cold dark matter that might merely exist, this hypothesis does nothing to address the question that motivates this entire blog: why does MOND get right predictions that CDM cannot even make? https://tritonstation.com/2023/01/05/question-of-the-year-and-a-challenge/
I am not saying that dark matter is composed of black holes, but that the predicted cusp actually ends up feeding the black hole in the center of the galaxy. Then the black hole smooths out the cusp.
Ah. That’s a good question. There is a literature on that. Black holes have a tiny cross section, so little DM mass winds up in a central BH despite its cuspy distribution: it takes only a tiny bit of angular momentum to miss it.
Having a strong central mass concentration in the form of a black hole tends to make the dark matter more cuspy rather than less – see, e.g., https://arxiv.org/abs/astro-ph/0308385.
And again, the cusp-core problem is a symptom of interpreting MONDian behavior in terms of DM halos. Mechanisms to turn a cusp into a core only treat the symptom, not the underlying malaise.
It seems we should suppose that the formation and growth of the central black hole is intimately connected to the formation and growth of dark matter (IF dark matter should actually exist). Leaving aside for now how that could happen, it seems that it must. How else could there be a clear link between the distribution of dark matter and baryons, which (IF dark matter exists) the data show?
Presumably the central black hole is the link. It surely interacts with baryons. Its formation and growth must be rather directly linked to the formation and growth of dark matter, or else there is NO dark matter as it is currently understood. That’s the way I see it at the moment.
There is a correlation between the baryonic masses of galaxies (or at least the bulge component) and the masses of their central black holes. So there is certainly a connection of that sort. The connection with the DM halo is considerably more tenuous – for many years, we assumed a 1:1 relation between halo mass and visible galaxy mass. That does not hold; the relation has to be highly nonlinear*, so interpret that as you will.
*That this was the case is one of the things that made me skeptical of dark matter, as it seemed like we were injecting a rolling fudge factor into the problem to relate Mhalo to Mgal however we needed it to be. The community, like me, spent all of the ’90s trying to avoid exactly this. It spent the ’00s gradually wrapping its head around the need. Since the teens it has been accepted as self-evident.
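To be concrete about what “highly nonlinear” came to mean in practice, the stellar-to-halo mass relation is commonly parameterized as a double power law (this is the abundance-matching form in the spirit of Moster et al. 2010; the parameters here are generic placeholders, not a specific fit):

$$\frac{M_{\rm gal}}{M_{\rm halo}} = 2N\left[\left(\frac{M_{\rm halo}}{M_1}\right)^{-\beta} + \left(\frac{M_{\rm halo}}{M_1}\right)^{\gamma}\right]^{-1},$$

so the ratio peaks near a characteristic halo mass M₁ (of order 10¹² solar masses in typical fits) and declines toward both lower and higher halo masses – anything but the 1:1 proportionality we once assumed.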
What do you think of A. Deur https://inspirehep.net/authors/1049232 theory of gravitational confinement? https://arxiv.org/abs/2306.00992
I’ve just noticed that in Banik et al.’s paper, the v-tilde used is a 2D relative velocity divided by what the Newtonian velocity v_c would have been IF space were 2D! Can somebody perhaps explain how that can possibly give correct results in their analysis?
As mathematician, I would suspect that dividing by a velocity that comes from an inverse-distance law (and thus NOT an inverse-square distance law) should be based on assuming an inverse distance law (as in the deep-MOND regime without EFE rather than Newtonian gravity). In fact, I would expect in a Newtonian 3D world that the values would depend on r, since v_c is a factor of the square root of r too large. How should v-tilde not scale with the square root of r_sky except in a world with not inverse square but inverse distance law?
Nevermind, I forgot a factor of r.
Here’s another oddity though: if I compare Banik et al’s figure 10 with figure 12 (these show the same relation) the blue line in the first is WB only and in the second is MOND. In 2-3 kAU, the peak is just before 0.5 and a small bulge at 1. In 3-5 kAU a broad peak around 0.5. In 5-12 the peak is at 0.5 and in 12-30 just after it. All four blue distributions are the same in figure 10 compared to figure 12, except that figure 12 has an other minimum or something. How can that disprove MOND rather than support it? I’m seriously doubting their MCMC usage now.
Again nevermind, I mistakenly thought figure 10 was data. It’s a distribution from MOND simulation. Sorry for taking your time.
I enjoyed the discussion. Thank you for posting it.
At the end of the video there is mention of sociology and falsifiability. With regard to falsifiability, it is a demarcation criterion. However, it is no longer considered viable among philosophers of science. The Wikipedia link,
https://en.m.wikipedia.org/wiki/Demarcation_problem
has some discussion leading up to the work of Laudan. The IEP link,
https://iep.utm.edu/pseudoscience-demarcation/
has more to say about Laudan and the responses to him.
In summary, demarcation may be a meaningless pseudoproblem. If not, it is a “slippery slope” for which an effective combination of criteria will almost always be problematic.
Mathematics has been suffering with this for a long time. I received my degree in mathematics from the University of Chicago in 1986. Its mathematics department is part of its physical sciences division. By contrast, I believe that computer science at Stanford University is in a philosophy division. At the link,
https://web.archive.org/web/20130727184333/http://www.cs.auckland.ac.nz/~chaitin/lowell.html
one can find a whig history of the transition from mathematics related to physical science to mathematics demarcated by philosophers.
Given that one can list a parade of names claiming to complete the phrase,
“Mathematics is …”
in incompatible ways, why should any mathematician abide by these mere opinions? Clearly, mathematical physics is not physics. But, these demarcations invariably fail to accommodate mathematical physics.
(Yes, even category theory. Analysis is not algebra and every known proof of the fundamental theorem of algebra relies on analysis.)
Science is suffering a similar problem.
A philosophy significant in the reduction of mathematics to mere linguistic analysis had been positivism. With regard to physics, one can point directly to Ernst Mach.
https://www.britannica.com/topic/positivism/The-critical-positivism-of-Mach-and-Avenarius
I had been confused about the use of the word “empirical” on physics blogs some time ago. Now, it appears to mean little more than “we will gather some numbers with measurements and argue over what we think they mean.”
And, that goes back to a positivist demarcation of science.
Philosophers of science have written a lot on this, and much of it is interesting. Much of it is not, and they seem to disagree with each other simply to have something to do. So while I appreciate that there are subtleties to the issue beyond the common conception of falsifiability, I don’t care whether it is in fashion with them or not; I only care if it is useful to me as a physical scientist (not a mathematical physicist or a biologist or what have you). Clearly there is a demarcation issue in physics, as I don’t recognize string theory as being physics because it does not relate to the real world in an experimentally testable way. But that is an aside. The issue here is how do we disabuse ourselves of the notion of dark matter if it happens to be wrong? Here the concept of falsifiability is useful, as it helps distinguish statements like “thar be invisible mass” from legitimate physical hypotheses like a WIMP with a specific range of possible mass and interaction cross-section. Legitimate WIMP hypotheses have been thoroughly falsified at this juncture; merely asserting that “thar be invisible mass” is not a scientific statement because it is not falsifiable/testable/whatever word you want to use. I don’t care what we call it; that essence of falsifiability remains absolutely essential to distinguishing physical reality from the many forms of mental masturbation.