I want to start by thanking those of you who have contributed to maintaining this site. This is not a money making venture, but it does help offset the cost of operations.
The title is not related to this, but rather to a flood of papers addressing the questions posed in recent posts. I was asking last time “take it where?” because it is hard to know what cosmology under UT will look like. In particular, how does structure formation work? We need a relativistic theory to progress further than we already have.
There are some papers that partially address this question. Very recently, there have been a whole slew of them. That’s good! It is also a bit overwhelming – I cannot keep up! Here I note a few recent papers that touch on structure formation in MOND. This is an incomplete list, and I haven’t had the opportunity to absorb much of it.
First, there is a paper by Milgrom with his relativistic BIMOND theory. It shows some possibility of subtle departures from FLRW along the lines of what I was describing with UT. Intriguingly, it explicitly shows that the assumptions we made to address structure formation with plain MOND should indeed hold. This is important because a frequent excuse employed to avoid acknowledging MOND’s predictions is that they don’t count if there is no relativistic theory. This is more a form of solution aversion rather than a serious scientific complaint, but people sure lean hard into it. So go read Milgrom’s papers.
Another paper I was looking forward to but didn’t know was in the offing is a rather general treatment of structure formation in relativistic extensions of MOND. There does seem to be some promise for assessing what could work in theories like AeST, and how it relates to earlier work. As a general treatment, there are a lot of options to sort through. Doing so will take a lot of effort by a lot of people over a considerable span of time.
There is also work on gravitational waves, and a variation dubbed a khronometric theory. I, well, I know what both of them are talking about to some extent, and yet some of what they say is presently incomprehensible to me. Clearly I have a lot still to learn. That’s a good problem to have.
I have been thinking for a while now that what we need is a period of a theoretical wild west. People need to try ideas, work through their consequences, and see what works and what does not. Ultimately, most ideas will fail, as there can only be one correct depiction of reality (I sure hope). It will take a lot of work and angst and bickering before we get there: this is perhaps only the beginning of what has already been a long journey for those of us who have been paying attention.
New and stirring things are belittled because if they are not belittled, the humiliating question arises, ‘Why then are you not taking part in them?’
I had written most of the post below the line before an exchange with a senior colleague who accused me of asking us to abandon General Relativity (GR). Anyone who read the last post knows that this is the opposite of true. So how does this happen?
Much of the field is mired in bad ideas that seemed like good ideas in the 1980s. There has been some progress, but the idea that MOND is an abandonment of GR I recognize as a misconception from that time. It arose because the initial MOND hypothesis suggested modifying the law of inertia without showing a clear path to how this might be consistent with GR. GR was built on the Equivalence Principle (EP), the equivalence1 of gravitational charge with inertial mass. The original MOND hypothesis directly contradicted that, so it was a fair concern in 1983. It was not by 19842. I was still an undergraduate then, so I don’t know the sociology, but I get the impression that most of the community wrote MOND off at this point and never gave it further thought.
I guess this is why I still encounter people with this attitude, that someone is trying to rob them of GR. It feels like we're always starting at square one, like there has been zero progress in forty years. I hope it isn't that bad, but I admit my patience is wearing thin.
I’m trying to help you. Don’t waste your entire career chasing phantoms.
What MOND does ask us to abandon is the Strong Equivalence Principle. Not the Weak EP, nor even the Einstein EP. Just the Strong EP. That’s a much more limited ask than abandoning all of GR. Indeed, all flavors of EP are subject to experimental test. The Weak EP has been repeatedly validated, but there is nothing about MOND that implies platinum would fall differently from titanium. Experimental tests of the Strong EP are less favorable.
I understand that MOND seems impossible. It also keeps having its predictions come true. This combination is what makes it important. The history of science is chock full of ideas that were initially rejected as impossible or absurd, going all the way back to heliocentrism. The greater the cognitive dissonance, the more important the result.
Continuing the previous discussion of UT, where do we go from here? If we accept that maybe we have all these problems in cosmology because we’re piling on auxiliary hypotheses to continue to be able to approximate UT with FLRW, what now?
I don’t know.
It’s hard to accept that we don’t understand something we thought we understood. Scientists hate revisiting issues that seem settled. Feels like a waste of time. It also feels like a waste of time continuing to add epicycles to a zombie theory, be it LCDM or MOND or the phoenix universe or tired light or whatever fantasy reality you favor. So, painful as it may be, one has to find a little humility to step back and take account of what we know empirically independent of the interpretive veneer of theory.
Still, to give one pertinent example, BBN only works if the expansion rate is as expected during the epoch of radiation domination. So whatever is going on has to converge to that early on. This is hardly surprising for UT since it was stipulated to contain GR in the relevant limit, but we don’t actually know how it does so until we work out what UT is – a tall order that we can’t expect to accomplish overnight, or even over the course of many decades without a critical mass of scientists thinking about it (and not being vilified by other scientists for doing so).
Another example is that the cosmological principle – that the universe is homogeneous and isotropic – is observed to be true in the CMB. The temperature is the same all over the sky to one part in 100,000. That’s isotropy. The temperature is tightly coupled to the density, so if the temperature is the same everywhere, so is the density. That’s homogeneity. So both of the assumptions made by the cosmological principle are corroborated by observations of the CMB.
The cosmological principle is extremely useful for solving the equations of GR as applied to the whole universe. If the universe has a uniform density on average, then the solution is straightforward (though it is rather tedious to work through to the Friedmann equation). If the universe is not homogeneous and isotropic, then it becomes a nightmare to solve the equations. One needs to know where everything was for all of time.
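For readers who want to see the payoff of the cosmological principle explicitly, assuming homogeneity and isotropy reduces Einstein's equations to the Friedmann equation (standard notation: a is the scale factor, ρ the mean density, k the curvature constant, Λ the cosmological constant):

```latex
H^2 \equiv \left(\frac{\dot{a}}{a}\right)^2
  = \frac{8\pi G}{3}\,\rho \;-\; \frac{kc^2}{a^2} \;+\; \frac{\Lambda c^2}{3}
```

One ordinary differential equation for a(t), instead of the full machinery of GR applied to an arbitrary matter distribution. That is what is lost if homogeneity and isotropy fail.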
Starting from the uniform condition of the CMB, it is straightforward to show that the assumption of homogeneity and isotropy should persist on large scales up to the present day. “Small” things like galaxies go nonlinear and collapse, but huge volumes containing billions of galaxies should remain in the linear regime and these small-scale variations average out. One cubic Gigaparsec will have the same average density as the next as the next, so the cosmological principle continues to hold today.
Anyone spot the rub? I said homogeneity and isotropy should persist. This statement assumes GR. Perhaps it doesn’t hold in UT?
This aspect of cosmology is so deeply embedded in everything that we do in the field that it was only recently that I realized it might not hold absolutely – and I’ve been actively contemplating such a possibility for a long time. Shouldn’t have taken me so long. Felten (1984) realized right away that a MONDian universe would depart from isotropy by late times. I read that paper long ago but didn’t grasp the significance of that statement. I did absorb that in the absence of a cosmological constant (which no one believed in at the time), the universe would inevitably recollapse, regardless of what the density was. This seems like an elegant solution to the flatness/coincidence problem that obsessed cosmologists at the time. There is no special value of the mass density that provides an over/under line demarcating eternal expansion from eventual recollapse, so there is no coincidence problem. All naive MOND cosmologies share the same ultimate fate, so it doesn’t matter what we observe for the mass density.
MOND departs from isotropy for the same reason it forms structure fast: it is inherently non-linear. As well as predicting that big galaxies would form by z=10, Sanders (1998) correctly anticipated the size of the largest structures collapsing today (things like the local supercluster Laniakea) and the scale of homogeneity (a few hundred Mpc if there is a cosmological constant). Pretty much everyone who looked into it came to similar conclusions.
But MOND and cosmology, as we know it in the absence of UT, are incompatible. Where LCDM encompasses both cosmology and the dynamics of bound systems (dark matter halos3), MOND addresses the dynamics of low acceleration systems (the most common examples being individual galaxies) but says nothing about cosmology. So how do we proceed?
For starters, we have to admit our ignorance. From there, one has to assume some expanding background – that much is well established – and ask what happens to particles responding to a MONDian force-law in this background, starting from the very nearly uniform initial condition indicated by the CMB. From that simple starting point, it turns out one can get a long way without knowing the details of the cosmic expansion history or the metric that so obsess cosmologists. These are interesting things, to be sure, but they are aspects of UT we don’t know and can manage without to some finite extent.
For one, the thermal history of the universe is pretty much the same with or without dark matter, with or without a cosmological constant. Without dark matter, structure can’t get going until after thermal decoupling (when the matter is free to diverge thermally from the temperature of the background radiation). After that happens, around z = 200, the baryons suddenly find themselves in the low acceleration regime, newly free to respond to the nonlinear force of MOND, and structure starts forming fast, with the consequences previously elaborated.
But what about the expansion history? The geometry? The big questions of cosmology?
Again, I don’t know. MOND is a dynamical theory that extends Newton. It doesn’t address these questions. Hence the need for UT.
I’ve encountered people who refuse to acknowledge4 that MOND gets predictions like z=10 galaxies right without a proper theory for cosmology. That attitude puts the cart before the horse. One doesn’t look for UT unless well motivated. That one is able to correctly predict 25 years in advance something that comes as a huge surprise to cosmologists today is the motivation. Indeed, the degree of surprise and the longevity of the prediction amplify the motivation: if this doesn’t get your attention, what possibly could?
There is no guarantee that our first attempt at UT (or our second or third or fourth) will work out. It is possible that in the search for UT, one comes up with a theory that fails to do what was successfully predicted by the more primitive theory. That just lets you know you’ve taken a wrong turn. It does not mean that a correct UT doesn’t exist, or that the initial prediction was some impossible fluke.
One candidate theory for UT is bimetric MOND. This appears to justify the assumptions made by Sanders’s early work, and provide a basis for a relativistic theory that leads to rapid structure formation. Whether it can also fit the acoustic power spectrum of the CMB as well as LCDM and AeST has yet to be seen. These things take time and effort. What they really need is a critical mass of people working on the problem – a community that enjoys the support of other scientists and funding institutions like NSF. Until we have that5, progress will remain grudgingly slow.
1The equivalence of gravitational charge and inertial mass means that the m in F = GMm/d² is identically the same as the m in F = ma. Modified gravity changes the former; modified inertia the latter.
2Bekenstein & Milgrom (1984) showed how a modification of Newtonian gravity could avoid the non-conservation issues suffered by the original hypothesis of modified inertia. They also outlined a path towards a generally covariant theory that Bekenstein pursued for the rest of his life. That he never managed to obtain a completely satisfactory version is often cited as evidence that it can’t be done, since he was widely acknowledged as one of the smartest people in the field. One wonders why he persisted if, as these detractors would have us believe, the smart thing to do was not even try.
4I have entirely lost patience with this attitude. If a phenomenon is correctly predicted in advance in the literature, we are obliged as scientists to take it seriously+. Pretending that it is not meaningful in the absence of UT is just an avoidance strategy: an excuse to ignore inconvenient facts.
+I’ve heard eminent scientists describe MOND’s predictive ability as “magic.” This also seems like an avoidance strategy. I, for one, do not believe in magic. That it works as well as it does – that it works at all – must be telling us something about the natural world, not the supernatural.
5There does exist a large and active community of astroparticle physicists trying to come up with theories for what the dark matter could be. That’s good: that’s what needs to happen, and we should exhaust all possibilities. We should do the same for new dynamical theories.
Imagine, if you are able, that General Relativity (GR) is correct yet incomplete. Just as GR contains Newtonian gravity in the appropriate limit, imagine that GR itself is a limit of some still more general theory that we don’t yet know about. Let’s call it Underlying Theory (UT) for short. This is essentially the working hypothesis of quantum gravity, but here I want to consider a more general case in which the effects of UT are not limited to the tiny netherworld of the Planck scale. Perhaps UT has observable consequences on very large scales, or a scale that is not length-based at all. What would that look like, given that we only know GR?
For starters, it might mean that the conventional Friedmann-Robertson-Walker (FRW) cosmology derived from GR is only a first approximation to the cosmology of the unknown deeper theory UT. In the first observational tests, FRW will look great, as the two are practically indistinguishable. As the data improve though, awkward problems might begin to crop up. What and where we don’t know, so our first inclination will not be to infer the existence of UT, but rather to patch up FRW with auxiliary hypotheses. Since the working presumption here is that GR is a correct limit, FRW will continue to be a good approximation, and early departures will seem modest: they would not be interpreted as signs of UT.
What do we expect for cosmology anyway? A theory is only as good as its stated predictions. After Hubble established in the 1920s that galaxies external to the Milky Way existed and that the universe was expanding, it became clear that this was entirely natural in GR. Indeed, what was not natural was a static universe, the desire for which had led Einstein to introduce the cosmological constant (his “greatest blunder”).
A wide variety of geometries and expansion histories are possible with FRW. But there is one obvious case that stands out, that of Einstein-de Sitter (EdS, 1932). EdS has a matter density Ωm exactly equal to unity, balancing on the divide between a universe that expands forever (Ωm < 1) and one that eventually recollapses (Ωm > 1). The particular case Ωm = 1 is the only natural scale in the theory. It is also the only FRW model with a flat geometry, in the sense that initially parallel beams of light remain parallel indefinitely. These properties make it special in a way that obsessed cosmologists for many decades. (In retrospect, this obsession has the same flavor as the obsession the Ancients had with heavenly motions being perfect circles*.) A natural cosmology would therefore be one in which Ωm = 1 in normal matter (baryons).
By the 1970s, it was clear that there was no way you could have Ωm = 1 in baryons. There just wasn’t enough normal matter, either observed directly, or allowed by Big Bang Nucleosynthesis. Despite the appeal of Ωm = 1, it looked like we lived in an open universe with Ωm < 1.
This did not sit well with many theorists, who were obsessed with the flatness problem. The mass density parameter evolves if it is not identically equal to one, so it was really strange that we should live anywhere close to Ωm = 1, even Ωm = 0.1, if the universe was going to spend eternity asymptoting to Ωm → 0. It was a compelling argument, enough to make most of us accept (in the early 1980s) the Inflationary model of the early universe, as Inflation gives a natural mechanism to drive Ωm → 1. The bulk of this mass could not be normal matter, but by then flat rotation curves had been discovered, along with a ton of other evidence that a lot of matter was dark. A third element that came in around the same time was another compelling idea, supersymmetry, which gave a natural mechanism by which the unseen mass could be non-baryonic. The confluence of these revelations gave us the standard cold dark matter (SCDM) cosmological model. It was EdS with Ωm = 1 mostly in dark matter. We didn’t know what the dark matter was, but we had a good idea (WIMPs), and it just seemed like a matter of tracking them down.
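The flatness argument can be stated in one line. From the Friedmann equation, the deviation of Ω from unity evolves as (standard textbook result, with a the scale factor):

```latex
\Omega(a) - 1 = \frac{kc^2}{\dot{a}^2}, \qquad
\dot{a}^2 \propto a^{-1} \;\text{(matter era)}
\;\Rightarrow\; |\Omega - 1| \propto a
```

so any early deviation from Ω = 1 grows with the expansion. Finding Ω anywhere near unity after billions of years of growth seems to demand an absurdly fine-tuned initial condition, which is why the argument carried such force.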
SCDM was absolutely Known for about a decade, pushing two depending on how you count. We were very reluctant to give it up. But over the course of the 1990s, it became clear [again] that Ωm < 1. What was different was a willingness, even a desperation, to accept and rehabilitate Einstein’s cosmological constant. This seemed to solve all cosmological problems, providing a viable concordance cosmology that satisfied all then-available data, salvaged Inflation and a flat geometry (Ωm + ΩΛ = 1, albeit at the expense of the coincidence problem, which is worse in LCDM than it is in open models), and made predictions that came true for the accelerated expansion rate and the location of the first peak of the acoustic power spectrum. This was a major revelation that led to Nobel prizes and still resonates today in the form of papers trying to suss out the nature of this so-called dark energy.
What if the issue is even more fundamental? Taking a long view, subsuming many essential details, we’ve gone from a natural cosmology (EdS) to a less natural one (an open universe with a low density in baryons) to SCDM (EdS with lots of non-baryonic dark matter) to LCDM. Maybe these are just successive approximations we’ve been obliged to make in order for FLRW** to mimic UT? How would we know?
One clue might be if the concordance region closed. Here is a comparison of a compilation of constraints assembled by students in my graduate cosmology course in 2002 (plus 2003 WMAP) with 2018 Planck parameters:
The shaded regions were excluded by the sum of the data available in 2003. The question I wondered then was whether the small remaining white space was indeed the correct answer, or merely the least improbable region left before the whole picture was ruled out. Had we painted ourselves into a corner?
If we take these results and the more recent Planck fits at face value, yes: nothing is left, the window has closed. However, other things change over time as well. For example, I’d grant a higher upper limit to Ωm than is illustrated above. The rotation curve line represents an upper limit that no longer pertains if dark matter halos are greatly modified by feedback. We were trying to avoid invoking that deus ex machina then, but there’s no helping it now.
Still, you can see in this diagram what we now call the Hubble tension. To solve that within the conventional FLRW framework, we have to come up with some new free parameter. There are lots of ideas that invoke new physics.
Maybe the new physics is UT? Maybe we have to keep tweaking FLRW because cosmology has reached a precision such that FLRW is no longer completely adequate as an approximation to UT? But if we are willing to add new parameters via “new physics” made up to address each new problem (dark matter, dark energy, something new and extra for the Hubble tension) so we can keep tweaking it indefinitely, how would we ever recognize that all we’re doing is approximating UT? If only there were different data that suggested new physics in an independent way.
Attitude matters. If we think both LCDM and the existence of dark matter are proven beyond a reasonable doubt, as clearly many physicists do, then any problem that arises is just a bit of trivia to sort out. Despite the current attention being given to the Hubble tension, I’d wager that most of the people not writing papers about it are presuming that the problem will go away: traditional measures of the Hubble constant will converge towards the Planck value. That might happen (or appear to happen through the magic of confirmation bias), and I would expect that myself if I hadn’t worked on H0 directly. It’s a lot easier to dismiss such things when you haven’t been involved enough to know how hard they are to dismiss***.
That last sentence pretty much sums up the community’s attitude towards MOND. That led me to pose the question of the year earlier. I have not heard any answers, just excuses to not have to answer. Still, these issues are presumably not unrelated. That MOND has so many predictions – even in cosmology – come true is itself an indication of UT. From that perspective, it is not surprising that we have to keep tweaking FLRW. Indeed, from this perspective, parameters like ΩCDM are chimeras lacking in physical meaning. They’re just whatever they need to be to fit whatever subset of the data is under consideration. That independent observations pretty much point to the same value is far more compelling evidence in favor of LCDM than the accuracy of a fit to any single piece of information (like the CMB) where ΩCDM can be tuned to fit pretty much any plausible power spectrum. But is the stuff real? I make no apologies for holding science to a higher standard than those who consider a fit to the CMB data to be a detection.
It has taken a long time for cosmology to get this far. One should take a comparably long view of these developments, but we generally do not. Dark matter was already received wisdom when I was new to the field, unquestionably so. Dark energy was new in the ’90s but has long since been established as received wisdom. So if we now have to tweak it a little to fix this seemingly tiny tension in the Hubble constant, that seems incremental, not threatening to the pre-existing received wisdom. From the longer view, it looks like just another derailment in an excruciatingly slow-moving train wreck.
So I ask again: what would falsify FLRW cosmology? How do we know when to think outside this box, and not just garnish its edges?
*The obsession with circular motion continued through Copernicus, who placed the sun at the center of motion rather than the earth, but continued to employ epicycles. It wasn’t until over a half century later that Kepler finally broke with this particular obsession. In retrospect, we recognize circular motion as a very special case of the many possibilities available with elliptical orbits, just as EdS is only one possible cosmology with a flat geometry once we admit the possibility of a cosmological constant.
**FLRW = Friedmann-Lemaître-Robertson-Walker. I intentionally excluded Lemaître from the early historical discussion because he (and the cosmological constant) were mostly excluded from considerations at that time. Mostly.
Someone with a longer memory than my own is Jim Peebles. I happened to bump into him while walking across campus while in Princeton for a meeting in early 2019. (He was finally awarded a Nobel prize later that year; it should have been in association with the original discovery of the CMB). On that occasion, he (unprompted) noted an analogy between the negative attitude towards the cosmological constant that was prevalent in the community pre-1990s to that for MOND now. NOT that he was in any way endorsing MOND; he was just noting that the sociology had the same texture, and could conceivably change on a similar timescale.
***Note that I am not dismissing the Planck results or any other data; I am suggesting the opposite: the data have become so good that it is impossible to continue to approximate UT with tweaks to FLRW (hence “new physics”). I’m additionally pointing out that important new physics has been staring us in the face for a long time.
Kuhn noted that as paradigms reach their breaking point, there is a divergence of opinions between scientists about what the important evidence is, or what even counts as evidence. This has come to pass in the debate over whether dark matter or modified gravity is a better interpretation of the acceleration discrepancy problem. It sometimes feels like we’re speaking about different topics in a different language. That’s why I split the diagram version of the dark matter tree as I did:
Evidence indicating acceleration discrepancies in the universe and various flavors of hypothesized solutions.
Astroparticle physicists seem to be well-informed about the cosmological evidence (top) and favor solutions in the particle sector (left). As more of these people entered the field in the ’00s and began attending conferences where we overlapped, I recognized gaping holes in their knowledge about the dynamical evidence (bottom) and related hypotheses (right). This was part of my motivation to develop an evidence-based course1 on dark matter, to try to fill in the gaps in essential knowledge that were obviously being missed in the typical graduate physics curriculum. Though popular on my campus, not everyone in the field has the opportunity to take this course. It seems that the chasm has continued to grow, though not for lack of attempts at communication.
Part of the problem is a phase difference: many of the questions that concern astroparticle physicists (structure formation is a big one) were addressed 20 years ago in MOND. There is also a difference in texture: dark matter rarely predicts things but always explains them, even if it doesn’t. MOND often nails some predictions but leaves other things unexplained – just a complete blank. So they’re asking questions that are either way behind the curve or as-yet unanswerable. Progress rarely follows a smooth progression in linear time.
I have become aware of a common construction among many advocates of dark matter to criticize “MOND people.” First, I don’t know what a “MOND person” is. I am a scientist who works on a number of topics, among them both dark matter and MOND. I imagine the latter makes me a “MOND person,” though I still don’t really know what that means. It seems to be a generic straw man. Users of this term consistently paint such a luridly ridiculous picture of what MOND people do or do not do that I don’t recognize it as a legitimate depiction of myself or of any of the people I’ve met who work on MOND. I am left to wonder, who are these “MOND people”? They sound very bad. Are there any here in the room with us?
I am under no illusions as to what these people likely say when I am out of ear shot. Someone recently pointed me to a comment on Peter Woit’s blog that I would not have come across on my own. I am specifically named. Here is a screen shot:
This concisely pinpoints where the field2 is at, both right and wrong. Let’s break it down.
let me just remind everyone that the primary reason to believe in the phenomenon of cold dark matter is the very high precision with which we measure the CMB power spectrum, especially modes beyond the second acoustic peak
This is correct, but it is not the original reason to believe in CDM. The history of the subject matters, as we already believed in CDM quite firmly before any modes of the acoustic power spectrum of the CMB were measured. The original reasons to believe in cold dark matter were (1) that the measured, gravitating mass density exceeds the mass density of baryons as indicated by BBN, so there is stuff out there with mass that is not normal matter, and (2) large scale structure has grown by a factor of 10⁵ from the very smooth initial condition indicated initially by the nondetection of fluctuations in the CMB, while normal matter (with normal gravity) can only get us a factor of 10³ (there were upper limits excluding this before there was a detection). Structure formation additionally imposes the requirement that whatever the dark matter is moves slowly (hence “cold”) and does not interact via electromagnetism in order to evade making too big an impact on the fluctuations in the CMB (hence the need, again, for something non-baryonic).
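The arithmetic behind that factor-of-a-hundred shortfall is simple enough to check on the back of an envelope. A sketch, using round illustrative numbers (the exact decoupling redshift and fluctuation amplitude vary a bit with how they are defined):

```python
# Back-of-envelope check of the linear-growth argument against
# baryons-only structure formation. Round numbers for illustration.
z_dec = 1090          # redshift of the CMB (photon-baryon decoupling)
delta_cmb = 1e-5      # fluctuation amplitude observed in the CMB
delta_today = 1.0     # delta ~ 1 marks the onset of nonlinear collapse

# In the matter-dominated era, linear perturbations grow in proportion
# to the scale factor: delta goes as a = 1/(1+z).
growth_available = 1 + z_dec                # ~ 10^3 for baryons alone
growth_needed = delta_today / delta_cmb     # ~ 10^5 to reach delta ~ 1

print(f"growth available to baryons alone: {growth_available:.1e}")
print(f"growth needed:                     {growth_needed:.1e}")
print(f"shortfall: factor ~{growth_needed / growth_available:.0f}")
```

Dark matter closes this gap in the standard picture because its perturbations start growing before decoupling; the nonlinearity of MOND closes it in a different way, as discussed above.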
When cold dark matter became accepted as the dominant paradigm, fluctuations in the CMB had not yet been measured. The absence of observable fluctuations at a larger level sufficed to indicate the need for CDM. This, together with Ωm > Ωb from BBN (which seemed the better of the two arguments at the time), sufficed to convince me, along with most everyone else who was interested in the problem, that the answer had3 to be CDM.
This all happened before the first fluctuations were observed by COBE in 1992. By that time, we already believed firmly in CDM. The COBE observations caused initial confusion and great consternation – it was too much! We actually had a prediction from then-standard SCDM, and it had predicted an even lower level of fluctuations than what COBE observed. This did not cause us (including me) to doubt CDM (though there was one suggestion that it might be due to self-interacting dark matter); it seemed a mere puzzle to accommodate, not an anomaly. And accommodate it we did: the power in the large scale fluctuations observed by COBE is part of how we got LCDM, albeit only a modest part. A lot of younger scientists seem to have been taught that the power spectrum is some incredibly successful prediction of CDM when in fact it has surprised us at nearly every turn.
As I’ve related here before, it wasn’t until the end of the century that CMB observations became precise enough to provide a test that might distinguish between CDM and MOND. That test initially came out in favor of MOND – or at least in favor of the absence of dark matter: No-CDM, which I had suggested as a proxy for MOND. Cosmologists and dark matter advocates consistently omit this part of the history of the subject.
I had hoped that when MOND cropped up in their data, cosmologists would experience the same surprise and doubt and reevaluation that I had experienced when it cropped up in my own. Instead, they went into denial, ignoring the successful prediction of the first-to-second peak amplitude ratio, or, worse, making up stories that it hadn’t happened. Indeed, the amplitude of the second peak was so surprising that the first paper to measure it omitted mention of it entirely. Just didn’t talk about it, let alone admit that “Gee, this crazy prediction came true!” as I had with MOND in LSB galaxies. Consequently, I decided that it was better to spend my time working on topics where progress could be made. This is why most of my work on the CMB predates “modes beyond the second peak” just as our strong belief in CDM also predated that evidence. Indeed, communal belief in CDM was undimmed when the modes defining the second peak were observed, despite the No-CDM proxy for MOND being the only hypothesis to correctly predict it quantitatively a priori.
That said, I agree with clayton’s assessment that
CDM thinks [the second and third peak] should be about the same
That this is the best evidence now is both correct and a much weaker argument than it is made out to be. It sounds really strong, because a formal fit to the CMB data requires a dark matter component at extremely high confidence – something approaching 100 sigma. This analysis assumes that dark matter exists. It does not contemplate that something else might cause the same effect, so all it really does, yet again, is demonstrate that General Relativity cannot explain cosmology when restricted to the material entities we concretely know to exist.
Given the timing, the third peak was not a strong element of my original prediction, as we did not yet have either a first or second peak. We hadn’t yet clearly observed peaks at all, so what I was doing was pretty far-sighted, but I wasn’t thinking that far ahead. However, the natural prediction for the No-CDM picture I was considering was indeed that the third peak should be lower than the second, as I’ve discussed before.
The No-CDM model (blue line) that correctly predicted the amplitude of the second peak fails to predict that of the third. Data from the Planck satellite; model line from McGaugh (2004); figure from McGaugh (2015).
In contrast, in CDM, the acoustic power spectrum of the CMB can do a wide variety of things:
Acoustic power spectra calculated for the CMB for a variety of cosmic parameters. From Dodelson & Hu (2002).
Given the diversity of possibilities illustrated here, there was never any doubt that a model could be fit to the data, provided that oscillations were observed as expected in any of the theories under consideration here. Consequently, I do not find fits to the data, though excellent, to be anywhere near as impressive as commonly portrayed. What does impress me is consistency with independent data.
What impresses me even more are a priori predictions. These are the gold standard of the scientific method. That’s why I worked my younger self’s tail off to make a prediction for the second peak before the data came out. In order to make a clean test, you need to know what both theories predict, so I did this for both LCDM and No-CDM. Here are the peak ratios predicted before there were data to constrain them, together with the data that came after:
The ratio of the first-to-second (left) and second-to-third peak (right) amplitude ratio in LCDM (red) and No-CDM (blue) as predicted by Ostriker & Steinhardt (1995) and McGaugh (1999). Subsequent data as labeled.
The left hand panel shows the predicted amplitude ratio of the first-to-second peak, A1:2. This is the primary quantity that I predicted for both paradigms. There is a clear distinction between the predicted bands. I was not unique in my prediction for LCDM; the same thing can be seen in other contemporaneous models. All contemporaneous models. I was the only one who was not surprised by the data when they came in, as I was the only one who had considered the model that got the prediction right: No-CDM.
The same No-CDM model fails to correctly predict the second-to-third peak ratio, A2:3. It is, in fact, way off, while LCDM is consistent with A2:3, just as Clayton says. This is a strong argument against No-CDM, because No-CDM makes a clear and unequivocal prediction that it gets wrong. Clayton calls this
a stone-cold, qualitative, crystal clear prediction of CDM
which is true. It is also qualitative, so I call it weak sauce. LCDM could be made to fit a very large range of A2:3, but it had already got A1:2 wrong. We had to adjust the baryon density outside the allowed range in order to make it consistent with the CMB data. The generous upper limit that LCDM might conceivably have predicted in advance of the CMB data was A1:2 < 2.06, which is still clearly less than observed. For the first years of the century, the attitude was that BBN had been close, but not quite right – preference being given to the value needed to fit the CMB. Nowadays, BBN and the CMB are said to be in great concordance, but this is only true if one restricts oneself to deuterium measurements obtained after the “right” answer was known from the CMB. Prior to that, practically all of the measurements for all of the important isotopes of the light elements – deuterium, helium, and lithium – concurred that the baryon density Ωbh2 < 0.02, with the consensus value being Ωbh2 = 0.0125 ± 0.0005. This is barely half the value subsequently required to fit the CMB (Ωbh2 = 0.0224 ± 0.0001). But what’s a factor of two among cosmologists? (In this case, 4 sigma.)
Taking the data at face value, the original prediction of LCDM was falsified by the second peak. But, no problem, we can move the goal posts, in this case by increasing the baryon density. The successful prediction of the third peak only comes after the goal posts have been moved to accommodate the second peak. Citing only the comparable size of third peak to the second while not acknowledging that the second was too small elides the critical fact that No-CDM got something right, a priori, that LCDM did not. No-CDM failed only after LCDM had already failed. The difference is that I acknowledge its failure while cosmologists elide this inconvenient detail. Perhaps the second peak amplitude is a fluke, but it was a unique prediction that was exactly nailed and remains true in all subsequent data. That’s a pretty remarkable fluke4.
LCDM wins ugly here by virtue of its flexibility. It has greater freedom to fit the data – any of the models in the figure of Dodelson & Hu will do. In contrast, No-CDM is the single blue line in my figure above, and nothing else. Plausible variations in the baryon density make hardly any difference: A1:2 has to have the value that was subsequently observed, and no other. It passed that test with flying colors. It flunked the subsequent test posed by A2:3. For LCDM this isn’t even a test, it is an exercise in fitting the data with a model that has enough parameters5 to do so.
In those days, when No-CDM was the only correct a priori prediction, I would point out to cosmologists that it had got A1:2 right when I got the chance (which was rarely: I was invited to plenty of conferences in those days, but none on the CMB). The typical reaction was outright denial6, though sometimes it warranted a dismissive “That’s not a MOND prediction.” The latter is a fair criticism. No-CDM is just General Relativity without CDM. It represented MOND as a proxy under the ansatz that MOND effects had not yet manifested in a way that affected the CMB. I expected that this ansatz would fail at some point, and discussed some of the ways that this should happen. One that’s relevant today is that galaxies form early in MOND, so reionization happens early, and the amplitude of gravitational lensing effects is amplified. There is evidence for both of these now. What I did not anticipate was a departure from a damping spectrum around L=600 (between the second and third peaks). That’s a clear deviation from the prediction, which falsifies the ansatz but not MOND itself. After all, they were correct in noting that this wasn’t a MOND prediction per se, just a proxy. MOND, like Newtonian dynamics before it, is relativity adjacent, but not itself a relativistic theory. Neither can explain the CMB on its own. If you find that an unsatisfactory answer, imagine how I feel.
The same people who complained then that No-CDM wasn’t a real MOND prediction now want to hold MOND to the No-CDM predicted power spectrum and nothing else. First it was “the second peak isn’t a real MOND prediction!” Then, when the third peak was observed, it became “no way MOND can do this!” This isn’t just hypocritical, it is bad science. The obvious way to proceed would be to build on the theory that had the greater, if incomplete, predictive success. Instead, the reaction has consistently been to cherry-pick the subset of facts that precludes the need for serious rethinking.
This brings us to sociology, so let’s examine some more of what Clayton has to say:
Any talk I’ve ever seen by McGaugh (or more exotic modified gravity people like Verlinde) elides this fact, and they evade the questions when I put my hand up to ask. I have invited McGaugh to a conference before specifically to discuss this point, and he just doesn’t want to.
There is so much to unpack here, I hardly know where to start. By saying I “elide this fact” about the qualitative equality of the second and third peaks, Clayton is basically accusing me of lying by omission. This is pretty rich coming from a community that consistently elides the history I relate above, and never addresses the question raised by MOND’s predictive power.
Intellectual honesty is very important to me – being honest that MOND predicted what I saw in low surface brightness galaxies where my own prediction was wrong is what got me into this mess in the first place. It would have been vastly more convenient to pretend that I never heard of MOND (at first I hadn’t7) and act like that never happened. That would be a lie of omission. It would be a large lie, a lie that denies an important aspect of how the world works (what we’re supposed to uncover through science), the sort of lie that cleric Paul Gerhardt may have had in mind when he said
When a man lies, he murders some part of the world.
Clayton is, in essence, accusing me of exactly that by failing to mention the CMB in talks he has seen. That might be true – I give a lot of talks. He hasn’t been to most of them, and I usually talk about things I’ve done more recently than 2004. I’ve commented explicitly on this complaint before –
There’s only so much you can address in a half hour talk. [This is a recurring problem. No matter what I say, there always seems to be someone who asks “why didn’t you address X?” where X is usually that person’s pet topic. Usually I could do so, but not in the time allotted.]
– so you may appreciate my exasperation at being accused of dishonesty by someone whose complaint is so predictable that I’ve complained before about people who make this complaint. I’m only human – I can’t cover all subjects for all audiences all the time. Moreover, I do tend to choose to discuss subjects that may be news to an audience, not simply reprise the greatest hits they want to hear. Clayton obviously knows about the third peak; he doesn’t need to hear about it from me. This is the scientific equivalent of shouting Freebird! at a concert.
It isn’t like I haven’t talked about it. I have been rigorously honest about the CMB, and certainly have not omitted mention of the third peak. Here is a comment from February 2003 when the third peak was only tentatively detected:
Page et al. (2003) do not offer a WMAP measurement of the third peak. They do quote a compilation of other experiments by Wang et al. (2003). Taking this number at face value, the second to third peak amplitude ratio is A2:3 = 1.03 +/- 0.20. The LCDM expectation value for this quantity was 1.1, while the No-CDM expectation was 1.9. By this measure, LCDM is clearly preferable, in contradiction to the better measured first-to-second peak ratio.
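The arithmetic behind that 2003 comparison is easy to check. Here is a minimal sketch, assuming Gaussian errors and taking the quoted numbers at face value:

```python
# Tension between a predicted and an observed peak amplitude ratio,
# expressed in units of the quoted measurement uncertainty.
def tension_sigma(predicted, observed, error):
    return abs(predicted - observed) / error

# Second-to-third peak ratio from the Wang et al. (2003) compilation.
A23_obs, A23_err = 1.03, 0.20

lcdm = tension_sigma(1.1, A23_obs, A23_err)   # LCDM expectation value
nocdm = tension_sigma(1.9, A23_obs, A23_err)  # No-CDM expectation value

print(f"LCDM:   {lcdm:.2f} sigma")   # ~0.35 sigma: consistent
print(f"No-CDM: {nocdm:.2f} sigma")  # ~4.35 sigma: clearly disfavored
```

By this crude measure, LCDM sits comfortably within the error bar while No-CDM misses by more than four sigma, which is the sense in which LCDM was “clearly preferable” for this particular ratio.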
the Boomerang data and the last credible point in the 3-year WMAP data both have power that is clearly in excess of the no-CDM prediction. The most natural interpretation of this observation is forcing by a mass component that does not interact with photons, such as non-baryonic cold dark matter.
There are lots like this, including my review for CJP and this talk given at KITP where I had been asked to explicitly take the side of MOND in a debate format for an audience of largely particle physicists. The CMB, including the third peak, appears on the fourth slide, which is right up front, not being elided at all. In the first slide, I tried to encapsulate the attitudes of both sides:
I did the same at a meeting in Stony Brook where I got a weird vibe from the audience; they seemed to think I was lying about the history of the second peak that I recount above. It will be hard to agree on an interpretation if we can’t agree on documented historical facts.
More recently, this image appears on slide 9 of this lecture from the cosmology course I just taught (Fall 2022):
I recognize this slide from talks I’ve given over the past five plus years; this class is the most recent place I’ve used it, not the first. On some occasions I wrote “The 3rd peak is the best evidence for CDM.” I do not recall all the talks in which I used this; many of them were likely colloquia for physics departments where one has more time to cover things than in a typical conference talk. Regardless, these apparently were not the talks that Clayton attended. Rather than it being the case that I never address this subject, the more conservative interpretation of the experience he relates would be that I happened not to address it in the small subset of talks that he happened to attend.
But do go off, dude: tell everyone how I never address this issue and evade questions about it.
I have been extraordinarily patient with this sort of thing, but I confess to a great deal of exasperation at the perpetual whataboutism that many scientists engage in. It is used reflexively to shut down discussion of alternatives: dark matter has to be right for this reason (here the CMB); nothing else matters (galaxy dynamics), so we should forbid discussion of MOND. Even if dark matter proves to be correct, the CMB is being used as an excuse to not address the question of the century: why does MOND get so many predictions right? Any scientist with a decent physical intuition who takes the time to rub two brain cells together in contemplation of this question will realize that there is something important going on that simply invoking dark matter does not address.
In fairness to McGaugh, he pointed out some very interesting features of galactic DM distributions that do deserve answers. But it turns out that there are a plurality of possibilities, from complex DM physics (self interactions) to unmodelable SM physics (stellar feedback, galaxy-galaxy interactions). There are no such alternatives to CDM to explain the CMB power spectrum.
Thanks. This is nice, and why I say it would be easier to just pretend to never have heard of MOND. Indeed, this succinctly describes the trajectory I was on before I became aware of MOND. I would prefer to be recognized for my own work – of which there is plenty – than an association with a theory that is not my own – an association that is born of honestly reporting a surprising observation. I find my reception to be more favorable if I just talk about the data, but what is the point of taking data if we don’t test the hypotheses?
I have gone to great extremes to consider all the possibilities. There is not a plurality of viable possibilities; most of these things do not work. The specific ideas that are cited here are known not to work. SIDM appears to work because it has more free parameters than are required to describe the data. This is a common failing of dark matter models that simply fit some functional form to observed rotation curves. They can be made to fit the data, but they cannot be used to predict the way MOND can.
Feedback is even worse. Never mind the details of specific feedback models, and think about what is being said here: the observations are to be explained by “unmodelable [standard model] physics.” This is a way of saying that dark matter claims to explain the phenomena while declining to make a prediction. Don’t worry – it’ll work out! How can that be considered better than or even equivalent to MOND when many of the problems we invoke feedback to solve are caused by the predictions of MOND coming true? We’re just invoking unmodelable physics as a deus ex machina to make dark matter models look like something they are not. Are physicists straight-up asserting that it is better to have a theory that is unmodelable than one that makes predictions that come true?
Returning to the CMB, are there no “alternatives to CDM to explain the CMB power spectrum”? I certainly do not know how to explain the third peak with the No-CDM ansatz. For that we need a relativistic theory, like Bekenstein’s TeVeS. This initially seemed promising, as it solved the long-standing problem of gravitational lensing in MOND. However, it quickly became clear that it did not work for the CMB. Nevertheless, I learned from this that there could be more to the CMB oscillations than allowed by the simple No-CDM ansatz. The scalar field (an entity theorists love to introduce) in TeVeS-like theories could play a role analogous to cold dark matter in the oscillation equations. That means that what I thought was a killer argument against MOND – the exact same argument Clayton is making – is not as absolute as I had thought.
Writing down a new relativistic theory is not trivial. It is not what I do. I am an observational astronomer. I only play at theory when I can’t get telescope time.
Comic from the Far Side by Gary Larson.
So in the mid-00’s, I decided to let theorists do theory and started the first steps in what would ultimately become the SPARC database (it took a decade and a lot of effort by Jim Schombert and Federico Lelli in addition to myself). On the theoretical side, it also took a long time to make progress because it is a hard problem. Thanks to work by Skordis & Zlosnik on a theory they [now] call AeST8, it is possible to fit the acoustic power spectrum of the CMB:
I consider this to be a demonstration, not necessarily the last word on the correct theory, but hopefully an iteration towards one. The point here is that it is possible to fit the CMB. That’s all that matters for our current discussion: contrary to the steady insistence of cosmologists over the past 15 years, CDM is not the only way to fit the CMB. There may be other possibilities that we have yet to figure out. Perhaps even a plurality of possibilities. This is hard work and to make progress we need a critical mass of people contributing to the effort, not shouting rubbish from the peanut gallery.
As I’ve done before, I like to take the language used in favor of dark matter, and see if it also fits when I put on a MOND hat:
As a galaxy dynamicist, let me just remind everyone that the primary reason to believe in MOND as a physical theory and not some curious dark matter phenomenology is the very high precision with which MOND predicts, a priori, the dynamics of low-acceleration systems, especially low surface brightness galaxies whose kinematics were practically unknown at the time of its inception. There is a stone-cold, quantitative, crystal clear prediction of MOND that the kinematics of galaxies follows uniquely from their observed baryon distributions. This is something CDM profoundly and irremediably gets wrong: it predicts that the dark matter halo should have a central cusp9 that is not observed, and makes no prediction at all for the baryon distribution, let alone does it account for the detailed correspondence between bumps and wiggles in the baryon distribution and those in rotation curves. This is observed over and over again in hundreds upon hundreds of galaxies, each of which has its own unique mass distribution so that each and every individual case provides a distinct, independent test of the hypothesized force law. In contrast, CDM does not even attempt a comparable prediction: rather than enabling the real-world application to predict that this specific galaxy will have this particular rotation curve, it can only refer to the statistical properties of galaxy-like objects formed in numerical simulations that resemble real galaxies only in the abstract, and can never be used to directly predict the kinematics of a real galaxy in advance of the observation – an ability that has been demonstrated repeatedly by MOND. The simple fact that the simple formula of MOND is so repeatably correct in mapping what we see to what we get is to me the most convincing way to see that we need a grander theory that contains MOND and exactly MOND in the low acceleration limit, irrespective of the physical mechanism by which this is achieved.
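The predictive machinery being described above is worth making concrete. What follows is a minimal sketch, not taken from any particular paper: it uses the “simple” interpolating function mu(x) = x/(1+x), which is one common choice among several, and an illustrative point mass rather than a real baryon distribution, to show how a rotation velocity follows from the Newtonian baryonic acceleration alone:

```python
import math

G = 6.674e-11    # m^3 kg^-1 s^-2
A0 = 1.2e-10     # m s^-2: the MOND acceleration scale

def mond_acceleration(g_newton):
    """Solve mu(g/a0) * g = gN for the 'simple' interpolating
    function mu(x) = x/(1+x), which reduces to a quadratic in g."""
    return 0.5 * (g_newton + math.sqrt(g_newton**2 + 4.0 * g_newton * A0))

def v_circular(mass, radius):
    """Circular speed (m/s) at radius for a point mass, in MOND."""
    g_newton = G * mass / radius**2
    return math.sqrt(mond_acceleration(g_newton) * radius)

# Illustrative numbers only: ~5e10 solar masses of baryons,
# evaluated far out (~100 kpc) where gN << a0 (the deep-MOND regime).
M = 1.0e41                      # kg (hypothetical baryonic mass)
v_far = v_circular(M, 3.0e21)   # approaches a flat rotation velocity
v_flat = (G * M * A0) ** 0.25   # deep-MOND limit: v = (G M a0)^(1/4)
```

In the low-acceleration limit this yields a flat rotation curve at v = (G M a0)^(1/4), the baryonic Tully-Fisher relation; that one-to-one mapping from the observed baryons to the kinematics, with no halo parameters to tune, is exactly what makes each galaxy an independent, over-constrained test.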
That is stronger language than I would ordinarily permit myself. I do so entirely to show the danger of being so darn sure. I actually agree with clayton’s perspective in his quote; I’m just showing what it looks like if we adopt the same attitude with a different perspective. The problems pointed out for each theory are genuine, and the supposed solutions are not obviously viable (in either case). Sometimes I feel like we’re up the proverbial creek without a paddle. I do not know what the right answer is, and you should be skeptical of anyone who is sure that he does. Being sure is the sure road to stagnation.
1It may surprise some advocates of dark matter that I barely touch on MOND in this course, only getting to it at the end of the semester, if at all. It really is evidence-based, with a focus on the dynamical evidence as there is a lot more to this than seems to be appreciated by most physicists*. We also teach a course on cosmology, where students get the material that physicists seem to be more familiar with.
*I once had a colleague who was in a physics department ask how to deal with opposition to developing a course on galaxy dynamics. Apparently, some of the physicists there thought it was not a rigorous subject worthy of an entire semester course – an attitude that is all too common. I suggested that she pointedly drop the textbook of Binney & Tremaine on their desks. She reported back that this technique proved effective.
2I do not know who clayton is; that screen name does not suffice as an identifier. He claims to have been in contact with me at some point, which is certainly possible: I talk to a lot of people about these issues. He is welcome to contact me again, though he may wish to consider opening with an apology.
3One of the hardest realizations I ever had as a scientist was that both of the reasons (1) and (2) that I believed to absolutely require CDM assumed that gravity was normal. If one drops that assumption, as one must to contemplate MOND, then these reasons don’t require CDM so much as they highlight that something is very wrong with the universe. That something could be MOND instead of CDM, both of which are in the category of who ordered that?
4In the early days (late ’90s) when I first started asking why MOND gets any predictions right, one of the people I asked was Joe Silk. He dismissed the rotation curve fits of MOND as a fluke. There were 80 galaxies that had been fit at the time, which seemed like a lot of flukes. I mention this because one of the persistent myths of the subject is that MOND is somehow guaranteed to magically fit rotation curves. Erwin de Blok and I explicitly showed that this was not true in a 1998 paper.
5I sometimes hear cosmologists speak in awe of the thousands of observed CMB modes that are fit by half a dozen LCDM parameters. This is impressive, but we’re fitting a damped and driven oscillation – those thousands of modes are not all physically independent. Moreover, as can be seen in the figure from Dodelson & Hu, some free parameters provide more flexibility than others: there is plenty of flexibility in a model with dark matter to fit the CMB data. Only with the Planck data do minor tensions arise, the reaction to which is generally to add more free parameters, like decoupling the primordial helium abundance from that of deuterium, which is anathema to standard BBN so is sometimes portrayed as exciting, potentially new physics.
For some reason, I never hear the same people speak in equal awe of the hundreds of galaxy rotation curves that can be fit by MOND with a universal acceleration scale and a single physical free parameter, the mass-to-light ratio. Such fits are over-constrained, and every single galaxy is an independent test. Indeed, MOND can predict rotation curves parameter-free in cases where gas dominates so that the stellar mass-to-light ratio is irrelevant.
How should we weigh the relative merit of these very different lines of evidence?
6On a number of memorable occasions, people shouted “No you didn’t!” On a smaller number of those occasions (exactly two), they bothered to look up the prediction in the literature and then wrote to apologize and agree that I had indeed predicted that.
7If you read this paper, part of what you will see is me being confused about how low surface brightness galaxies could adhere so tightly to the Tully-Fisher relation. They should not. In retrospect, one can see that this was a MOND prediction coming true, but at the time I didn’t know about that; all I could see was that the result made no sense in the conventional dark matter picture.
Some while after we published that paper, Bob Sanders, who was at the same institute as my collaborators, related to me that Milgrom had written to him and asked “Do you know these guys?”
8Initially they had called it RelMOND, or just RMOND. AeST stands for Aether-Scalar-Tensor, and is clearly a step along the lines that Bekenstein made with TeVeS.
In addition to fitting the CMB, AeST retains the virtues of TeVeS in terms of providing a lensing signal consistent with the kinematics. However, it is not obvious that it works in detail – Tobias Mistele has a brand new paper testing it, and it doesn’t look good at extremely low accelerations. With that caveat, it significantly outperforms extant dark matter models.
There is an oft-repeated fallacy that comes up any time a MOND-related theory has a problem: “MOND doesn’t work therefore it has to be dark matter.” This only ever seems to hold when you don’t bother to check what dark matter predicts. In this case, we should but don’t detect the edge of dark matter halos at higher accelerations than where AeST runs into trouble.
9Another question I’ve posed for over a quarter century now is what would falsify CDM? The first person to give a straight answer to this question was Simon White, who said that cusps in dark matter halos were an ironclad prediction; they had to be there. Many years later, it is clear that they are not, but does anyone still believe this is an ironclad prediction? If it is, then CDM is already falsified. If it is not, then what would be? It seems like the paradigm can fit any surprising result, no matter how unlikely a priori. This is not a strength, it is a weakness. We can, and do, add epicycle upon epicycle to save the phenomenon. This has been my concern for CDM for a long time now: not that it gets some predictions wrong, but that it can apparently never get a prediction so wrong that we can’t patch it up, so we can never come to doubt it if it happens to be wrong.
That’s the question of the year, and perhaps of the century. I’ve been asking it since before this century began, and I have yet to hear a satisfactory answer. Most of the relevant scientific community has aggressively failed to engage with it. Even if MOND is wrong for [insert favorite reason], this does not relieve us of the burden to understand why it gets many predictions right – predictions that have repeatedly come as a surprise to the community that has declined to engage, preferring to ignore the elephant in the room.
It is not good enough to explain MOND phenomenology post facto with some contrived LCDM model. That’s mostly1 what is on offer, being born of the attitude that we’re sure LCDM is right, so somehow MOND phenomenology must emerge from it. We could just as [un]reasonably adopt the attitude that MOND is correct, so surely LCDM phenomenology happens as a result of trying to fit the standard cosmological model to some deeper, subtly different theory.
A basic tenet of the scientific method is that if a theory has its predictions come true, we are obliged to acknowledge its efficacy. This is how we know when to change our minds. This holds even if we don’t like said theory – especially if we don’t like it.
That was my experience with MOND. It correctly predicted the kinematics of the low surface brightness galaxies I was interested in. Dark matter did not. The data falsified all the models available at the time, including my own dark matter-based hypothesis. The only successful a priori predictions were those made by Milgrom. So what am I to conclude2 from this? That he was wrong?
I understand the reluctance to engage. It really ticked me off that my own model was falsified. How could this stupid theory of Milgrom’s do better for my galaxies? Indeed, how could it get anything right? I had no answer to this, nor does the wider community. It is not for lack of trying on my part; I’ve spent a lot of time3 building conventional dark matter models. They don’t work. Most of the models made by others that I’ve seen are just variations on models I had already considered and rejected as obviously unworkable. They might look workable from one angle, but they inevitably fail from some other, solving one problem at the expense of another.
Predictive success does not guarantee that a theory is right, but it does make it better than competing theories that fail for the same prediction. This is where MOND and LCDM are difficult to compare, as the relevant data are largely incommensurate. Where one is eloquent, the other tends to be muddled. However, it has been my experience that MOND more frequently reproduces the successes of dark matter than vice-versa. I expect this statement comes as a surprise to some, as it certainly did to me (see the comment line of astro-ph/9801102). The people who say the opposite clearly haven’t bothered to check2 as I have, or even to give MOND a real chance. If you come to a problem sure you know the answer, no data will change your mind. Hence:
A challenge: What would falsify the existence of dark matter?
If LCDM is a scientific theory, it should be falsifiable4. Dark matter, by itself, is a concept, not a theory: mass that is invisible. So how can we tell if it’s not there? Once we have convinced ourselves that the universe is full of invisible stuff that we can’t see or (so far) detect any other way, how do we disabuse ourselves of this notion, should it happen to be wrong? If it is correct, we can in principle find it in the lab, so its existence can be confirmed. But is it falsifiable? How?
That is my challenge to the dark matter community: what would convince you that the dark matter picture is wrong? Answers will vary, as it is up to each individual to decide for themself how to answer. But there has to be an answer. To leave this basic question unaddressed is to abandon the scientific method.
I’ll go first. Starting in 1985, when I was first presented with the evidence in a class taught by Scott Tremaine, I was as much of a believer in dark matter as anyone. I was even a vigorous advocate, for a time. What convinced me to first doubt the dark matter picture was the fine-tuning I had to engage in to salvage it. It was only after that experience that I realized that the problems I was encountering were caused by the data doing what MOND had predicted – something that really shouldn’t happen if dark matter is running the show. But the MOND part came after; I had already become dubious about dark matter in its own context.
Falsifiability is a question every scientist who works on dark matter needs to face. What would cause you to doubt the existence of dark matter? Nothing is not a scientific answer. Neither is it correct to assert that the evidence for dark matter is already overwhelming. That is a misstatement: the evidence for acceleration discrepancies is overwhelming, but these can be interpreted as evidence for either dark matter or MOND.
The important thing is to establish criteria by which you would change your mind. I changed my mind before: I am no longer convinced that the solution to the acceleration discrepancy has to be non-baryonic dark matter. I will change my mind again if the evidence warrants. Let me state, yet again, what would cause me to doubt that MOND is a critical element of said solution. There are lots of possibilities, as MOND is readily falsifiable. Three important ones are:
MOND getting a fundamental prediction wrong;
Detecting dark matter;
Answering the question of the year.
None of these have happened yet. Just shouting “MOND is falsified already!” doesn’t make it so: the evidence has to be both clear and satisfactory. For example,
MOND might be falsified by cluster data, but its apparent failure is not fundamental. There is a residual missing mass problem in the richest clusters, but there’s nothing in MOND that says we have to have detected all the baryons by now. Indeed, LCDM doesn’t fare better, just differently, with both theories suffering a missing baryon problem. The chief difference is that we’re willing to give LCDM endless mulligans but MOND none at all. Where the problem for MOND in clusters comes up all the time, the analogous problem in LCDM is barely discussed, and is not even recognized as a problem.
A detection of dark matter would certainly help. To be satisfactory, it can’t be an isolated signal in a lone experiment that no one else can reproduce. If a new particle is detected, its properties have to be correct (e.g., it has the right mass density, etc.). As always, we must be wary of some standard model event masquerading as dark matter. WIMP detectors will soon reach the neutrino background accumulated from all the nuclear emissions of stars over the course of cosmic history, at which time they will start detecting weakly interacting particles as intended: neutrinos. Those aren’t the dark matter, but what are the odds that the first of those neutrino detections will be eagerly misinterpreted as dark matter?
Finally, the question of the year: why does MOND get any prediction right? To provide a satisfactory answer to this, one must come up with a physical model that provides a compelling explanation for the phenomena and has the same ability as MOND to make novel predictions. Just building a post-hoc model to match the data, which is the most common approach, doesn’t provide a satisfactory, let alone a compelling, explanation for the phenomenon, and provides no predictive power at all. If it did, we could have predicted MOND-like phenomenology and wouldn’t have to build these models after the fact.
So far, none of these three things have been clearly satisfied. The greatest danger to MOND comes from MOND itself: the residual mass discrepancy in clusters, the tension in Galactic data (some of which favor MOND, others of which don’t), and the apparent absence of dark matter in some galaxies. While these are real problems, they are also of the scale that is expected in the normal course of science: there are always tensions and misleading tidbits of information; I personally worry the most about the Galactic data. But even if my first point is satisfied and MOND fails on its own merits, that does not make dark matter better.
A large segment of the scientific community seems to suffer a common logical fallacy: any problem with MOND is seen as a success for dark matter. That’s silly. One has to evaluate the predictions of dark matter for the same observation to see how it fares. My experience has been that observations that are problematic for MOND are also problematic for dark matter. The latter often survives by not making a prediction at all, which is hardly a point in its favor.
Other situations are just plain weird. For example, it is popular these days to cite the absence of dark matter in some ultradiffuse galaxies as a challenge to MOND, which it is. But neither does it make sense to have galaxies without dark matter in a universe made of dark matter. Such a situation can be arranged, but the circumstances are rather contrived and usually involve some non-equilibrium dynamics. That’s fine; that can happen on rare occasions, but disequilibrium situations can happen in MOND too (the claims of falsification inevitably assume equilibrium). We can’t have it both ways, permitting special circumstances for one theory but not for the other. Worse, some examples of galaxies that are claimed to be devoid of dark matter are as much a problem for LCDM as for MOND. A disk galaxy devoid of either can’t happen; we need something to stabilize disks.
So where do we go from here? Who knows! There are fundamental questions that remain unanswered, and that’s a good thing. There is real science yet to be done. We can make progress if we stick to the scientific method. There is more to be done than measuring cosmological parameters to the sixth place of decimals. But we have to start by setting standards for falsification. If there is no observation or experimental result that would disabuse you of your current belief system, then that belief system is more akin to religion than to science.
¹There are a few ideas, like superfluid dark matter, that try to automatically produce MOND phenomenology. This is what needs to happen. It isn’t clear yet whether these ideas work, but reproducing the MOND phenomenology naturally is a minimum standard that has to be met for a model to be viable. Run-of-the-mill CDM models that invoke feedback do not meet this standard. They can always be made to reproduce the data once observed, but not to predict it in advance as MOND does.
²There is a common refrain that “MOND fits rotation curves and nothing else.” This is a myth, plain and simple. A good, old-fashioned falsehood sustained by the echo chamber effect. (That’s what I heard!) Seriously: if you are a scientist who thinks this, what is your source? Did it come from a review of MOND, or from idle chit-chat? How many MOND papers have you read? What do you actually know about it? Ignorance is not a strong position from which to draw a scientific conclusion.
³Like most of the community, I have invested considerably more effort in dark matter than in MOND. Where I differ from much of the galaxy formation community* is in admitting when those efforts fail. There is a temptation to slap some lipstick on the dark matter pig and claim success just to go along to get along, but what is the point of science if that is what we do when we encounter an inconvenient result? For me, MOND has been an incredibly inconvenient result. I would love to be able to falsify it, but so far intellectual honesty forbids.
*There is a widespread ethos of toxic positivity in the galaxy formation literature, which habitually puts a more positive spin on results than is objectively warranted. I’m aware of at least one prominent school where students are taught “to be optimistic” and omit mention of caveats that might detract from a model’s reception. This is effective in a careerist sense, but antithetical to the scientific endeavor.
⁴The word “falsification” carries a lot of philosophical baggage that I don’t care to get into here. The point is that there must be a way to tell if a theory is wrong. If there is not, we might as well be debating the number of angels that can dance on the head of a pin.
I would like to write something positive to close out the year. Apparently, it is not in my nature, as I am finding it difficult to do so. I try not to say anything if I can’t say anything nice, and as a consequence I have said little here for weeks at a time.
Still, there are good things that happened this year. JWST launched a year ago. The predictions I made for it at that time have since been realized. There have been some bumps along the way, with some of the photometric redshifts for very high z galaxies turning out to be wrong. They have not all turned out to be wrong, and the current consensus seems to be converging on accepting that a good number of relatively bright galaxies exist at z > 10. Some of these have been ‘confirmed’ by spectroscopy.
I remain skeptical of some of the spectra as well as the photometric redshifts. There isn’t much spectrum to see at these rest frame ultraviolet wavelengths. There aren’t a lot of obvious, distinctive features in the spectra that make for definitive line identifications, and the universe is rather opaque to the UV photons blueward of the Lyman break. Here is an example from the JADES survey:
Images and spectra of z > 10 galaxy candidates from JADES. [Image Credits: NASA, ESA, CSA, M. Zamani (ESA/Webb), Leah Hustak (STScI); Science Credits: Brant Robertson (UC Santa Cruz), S. Tacchella (Cambridge), E. Curtis-Lake (UOH), S. Carniani (Scuola Normale Superiore), JADES Collaboration]
Despite the lack of distinctive spectral lines, there is a clear shape that is ramping up towards the blue until hitting a sharp edge. This is consistent with the spectrum of a star forming galaxy with young stars that make a lot of UV light: the upward bend is expected for such a population, and hard to explain otherwise. The edge is caused by opacity: intervening gas and dust gobbles up those photons, few of which are likely to even escape their host galaxy, much less survive the billions of light-years to be traversed between there-then and here-now. So I concur that the most obvious interpretation of these spectra is that of high-z galaxies even if we don’t have the satisfaction of seeing blatantly obvious emission lines like C IV or Mg II (ionized species of carbon and magnesium that are frequently seen in the spectra of quasars). [The obscure nomenclature dates back to nineteenth century laboratory spectroscopy. Mg I is neutral, Mg II singly ionized, C IV triply ionized.]
Even if we seem headed towards consensus on the reality of big galaxies at high redshift, the same cannot yet be said about their interpretation. This certainly came as a huge surprise to astronomers – though not to me. The obvious interpretation is the theory that predicted this observation in advance, no?
Apparently not. Another predictable phenomenon is that people will gaslight themselves into believing that this was expected all along. I have been watching in real time as the community makes the transition from “there is nothing above redshift 7” (the prediction of LCDM contemporary with Bob Sanders’s MOND prediction that galaxy mass objects form by z=10) to “this was unexpected!” and genuinely problematic, to “Nah, we’re good.” This is the same trajectory I’ve seen the community take with the cusp-core problem, the missing satellite problem, the RAR, the existence of massive clusters of galaxies at surprisingly high redshift, etc., etc. A theory is only good to the extent that its predictions are not malleable enough to be made to fit any observation.
As I was trying to explain on twitter that individually high mass galaxies had not been expected in LCDM, someone popped into my feed to assert that they had multiple simulations with galaxies that massive. That certainly had not been the case all along, so this just tells me that LCDM doesn’t really make a prediction here that can’t be fudged (crank up the star formation efficiency!). This is worse than no prediction at all: you can never know that you’re wrong, as you can fix any failing. Worse, it has been my experience that there is always someone willing to play the role of fixer, usually some ambitious young person eager to gain credit for saving the most favored theory. It works – I can point to many Ivy League careers that followed this approach. They don’t even have to work hard at it, as the community is predisposed to believe what they want to hear.
These are all reasons why predictions made in advance of the relevant observation are the most valuable.
That MOND has consistently predicted, in advance, results that were surprising to LCDM is a fact that the community apparently remains unaware of. Communication is inefficient, so for a long time I thought this sufficed as an explanation. That is no longer the case; the only explanation that fits the sociological observations is that the ignorance is willful.
“It is difficult to get a man to understand something, when his salary depends on his not understanding it.”
Upton Sinclair
We have been spoiled. The last 400 years have given us the impression that science progresses steadily and irresistibly forward. This is in no way guaranteed. Science progresses in fits and starts; it only looks continuous when the highlights are viewed in retrospective soft focus. Progress can halt and even regress, as happened abruptly with the many engineering feats of the Romans upon the fall of their empire. Science is a human endeavor subject to human folly, and we might just as easily have a thousand years of belief in invisible mass as we did in epicycles.
Despite all this, I remain guardedly optimistic that we can and will progress. I don’t know what the right answer is. The first step is to let go of being sure that we do.
I’ll end with a quote pointed out to me by David Merritt that seems to apply today as it did centuries ago:
“The scepticism of that generation was the most uncompromising that the world has known; for it did not even trouble to deny: it simply ignored. It presented a blank wall of perfect indifference alike to the mysteries of the universe and to the solutions of them.”
We are visual animals. What we see informs our perception of the world, so it often helps to make a sketch to help conceptualize difficult material. When first confronted with MOND phenomenology in galaxies that I had been sure were dark matter dominated, I made a sketch to help organize my thoughts. Here is a scan of the original dark matter tree that I drew on a transparency (pre-powerpoint!) in 1995:
The original dark matter tree.
At the bottom are the roots of the problem: the astronomical evidence for mass discrepancies. From these grow the trunk, which splits into categories of possible solutions, which in turn branch into ever more specific possibilities. Most of these items were already old news at the time: I was categorizing, not inventing. Indeed, some things have been rebranded over time without changing all that much, with strange nuggets now being known as macros (a generalization to describe dark matter candidates of nuclear density) and asymmetric gravity becoming MOG. The more things change, the more they stay the same.
I’ve used this picture many times in talks, both public and scientific. It helps to focus the mind. I updated it for the 2012 review Benoit Famaey wrote (see our Fig. 1), but I don’t think I really improved on the older version, which Don Lincoln had adapted for the cover illustration of an issue of Physics Teacher (circa 2013), with some embellishment by their graphic artists. That’s pretty good, but I prefer my original.
Though there are no lack of buds on the tree, there have certainly been more ideas for dark matter candidates over the past thirty years, so I went looking to see if someone had attempted a similar exercise to categorize or at least corral all the ideas people have considered. Tim Tait made one such figure, but you have to already be an expert to make any sense of it: it is a sort of Venn diagram of the large conceptual playground that is theoretical particle physics.
This is nice: well organized and pleasantly symmetric, and making good use of color to distinguish different types of possibilities. One can recognize many of the same names from the original tree like MACHOs and MOND, along with newer, related entities like Macros and TeVeS. Interestingly, WIMPs are not mentioned, despite dominating the history of the field. They are subsumed under supersymmetry, which is now itself just a sub-branch of weak-scale possibilities rather than the grand unified theory of manifest inevitability that it was once considered to be. It is a sign of how far we have come that the number one candidate, the one that remains the focus of dozens of large experiments, doesn’t even come up by name. It is also a sign of how far we have yet to go that it seems preferable to many to invent new dark matter candidates than take seriously alternatives that have had much greater predictive success.
A challenge one faces in doing this exercise is to decide which candidates deserve mention, and which are just specific details that should be grouped under some more major branch. As a practical matter, it is impossible to wedge everything in, nor does every wild idea we’ve ever thought up deserve equal mention: Kaluza-Klein dark matter is not a coequal peer to WIMPs. But how do we be fair about making that call? It may not be possible.
I wanted to see how the new diagram mapped to the old tree, so I chopped it up and grafted each piece onto the appropriate branch of the original tree:
New blossoms on the old dark matter tree.
This works pretty well. It looks like the tree has blossomed with more ideas, which it has. There are more possibilities along well-established branches, and entirely new branches that I could only anticipate with question marks that allowed for the possibility of things we had not yet thought up. The tree is getting bushy.
Ultimately, the goal is not to have an ever bushier tree, but rather the opposite: we want to find the right answer. As an experimentalist, one wants to either detect or exclude specific dark matter candidates. As a scientist, I want to apply the wealth of observational knowledge we have accumulated like a chainsaw in the hands of an overzealous gardener to hack off misleading branches until the tree has been pruned down to a single branch, the one (and hopefully only one) correct answer.
As much as I like Bertone & Tait’s hexagonal image, it is very focused on ideas in particle physics. Five of the six branches are various forms of dark matter, while the possibility of modified gravity is grudgingly acknowledged in only one. It is illustrated as a dull grey that is unlike the bright, cheerful colors granted to the various flavors of dark matter candidates. To be sure, there are more ideas for solutions to the mass discrepancy problem from particle physics than from anywhere else, but that doesn’t mean they all deserve equal mention. One looking at this diagram might get the impression that the odds of dark matter:modified gravity are 5:1, which seems at once both biased against the latter and yet considerably more generous than its authors likely intended.
There is no mention at all of the data at the roots of the problem. That is all subsumed in the central DARK MATTER, as if we’re looking down at the top of the tree and recognize that it must have a central trunk, but cannot see its roots. This is indeed an apt depiction of the division between physics and astronomy. Proposed candidates for dark matter have emerged primarily from the particle physics community, which is what the hexagon categorizes. It takes for granted the evidence for dark matter, which is entirely astronomical in nature. This is not a trivial point; I’ve often encountered particle physicists who are mystified that astronomers have the temerity to think they can contribute to the dark matter debate despite 100% (not 90%, nor 99%, nor even 99.9%, but 100%) of the evidence for mass discrepancies stemming from observations of the sky. Apparently, our job was done when we told them we needed something unseen, and we should remain politely quiet while the Big Brains figure it out.
For a categorization of solutions, I suppose it is tolerable, if dangerously divorced from the origins of the problem, to leave off the evidence. There is another problem with placing DARK MATTER at the center. This is a linguistic problem that raises deep epistemological issues that most scientists working in the field rarely bother to engage with. Words matter; the names we use frame how we think about the problem. By calling it the dark matter problem, we presuppose the answer. A more appropriate term might be mass discrepancy, which was in use for a while by more careful-minded people, but it seems to have fallen into disuse. Dark matter is easier to say and sounds way more cool.
Jacob Bekenstein pointed out that an even better term would be acceleration discrepancy. That’s what we measure, after all. The centripetal acceleration in spiral galaxies exceeds that predicted by the observed distribution of visible matter. Mass is an inference, and a sloppy one at that: dynamical data only constrain the mass enclosed by the last measured point. The total mass of a dark matter halo depends on how far it extends, which we never observe because the darn stuff is invisible. And of course we only infer the existence of dark matter by assuming that the force law is correct. That gravity as taught to us by Einstein and Newton should apply to galaxies seems like a pretty darn good assumption, but it is just that. By calling it the dark matter problem, we make it all about unseen mass and neglect the possibility that the inference might go astray with that first, basic assumption.
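Bekenstein’s point can be put in symbols. For a tracer on a circular orbit, what we measure is the centripetal acceleration, while the baryonic expectation follows from the observed stars and gas (a sketch in standard notation; the symbols are mine, not from the original text):

```latex
% Measured: centripetal acceleration from the rotation curve V(R)
g_{\rm obs}(R) = \frac{V^{2}(R)}{R}
% Predicted: Newtonian acceleration from the observed baryons alone
g_{\rm bar}(R) = \left| \nabla \Phi_{\rm bar} \right|_{R}
% The acceleration discrepancy is the dimensionless ratio
D(R) = \frac{g_{\rm obs}(R)}{g_{\rm bar}(R)} > 1 \quad \text{in the outskirts of galaxies}
```

Both quantities are directly constructed from observations; dark matter and modified gravity are two interpretations of the same measured $D > 1$.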
So I’ve made a new picture, placing the acceleration discrepancy at the center where it belongs. The astronomical observations that inform the problem are on the vertical axis while the logical possibilities for physics solutions are on the horizontal axis. I’ve been very spare in filling in both: I’m trying to trace the logical possibilities with a minimum of bias and clutter, so I’ve retained some ideas that are pretty well excluded.
For example, on the dark matter side, MACHOs are pretty well excluded at this point, as are most (all?) dark matter candidates composed of Standard Model particles. Normal matter just doesn’t cut it, but I’ve left that sector in as a logical possibility that was considered historically and shouldn’t be forgotten. On the dynamical side, one of the first thoughts is that galaxies are big so perhaps the force law changes at some appropriate scale much larger than the solar system. At this juncture, we have excluded all modifications to the force law that are made at a specific length scale.
The acceleration discrepancy diagram.
There are too many lines of observational evidence to do justice to here. I’ve lumped an enormous amount of it into a small number of categorical bins. This is not ideal, but some key points are at least mentioned. I invite the reader to try doing the exercise with pencil and paper. There are serious limits imposed by what you can physically display in a font the eye can read with a complexity limited to that which does not make the head explode. I fear I may already be pushing both.
I have made a split between dynamical and cosmological evidence. These tend to push the interpretation one way or the other, as hinted by the colors. Which way one goes depends entirely on how one weighs rather disparate lines of evidence.
I’ve also placed the things that were known from the outset of the modern dark matter paradigm closer to the center than those that were not. That galaxies and clusters of galaxies needed something more than meets the eye was known, and informed the need for dark matter. That the dynamics of galaxies over a huge range of mass, size, surface brightness, gas fraction, and morphology are organized by a few simple empirical relations was not yet known. The Baryonic Tully-Fisher Relation (BTFR) and the Radial Acceleration Relation (RAR) are critical pieces of evidence that did not inform the construction of the current paradigm, and are not satisfactorily explained by it.
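As a concrete illustration of these two relations, here is a minimal numerical sketch using their commonly quoted forms: the MOND form of the BTFR, M_b = V_f⁴/(G·a₀), and the RAR fitting function g_obs = g_bar/(1 − exp(−√(g_bar/a₀))) of McGaugh, Lelli & Schombert (2016). The constants and the example rotation speed are illustrative assumptions, not values taken from this post.

```python
import math

# Illustrative constants (assumed values, not from the post):
G = 6.674e-11     # Newton's constant [m^3 kg^-1 s^-2]
A0 = 1.2e-10      # Milgrom's acceleration scale [m/s^2]
M_SUN = 1.989e30  # solar mass [kg]

def btfr_mass(v_flat_kms):
    """Baryonic mass (in solar masses) implied by the MOND form of the
    Baryonic Tully-Fisher Relation, M_b = V_f^4 / (G * a0)."""
    v = v_flat_kms * 1e3  # km/s -> m/s
    return v**4 / (G * A0) / M_SUN

def rar_g_obs(g_bar):
    """Observed acceleration predicted from the baryonic one by the RAR
    fitting function of McGaugh, Lelli & Schombert (2016)."""
    return g_bar / (1.0 - math.exp(-math.sqrt(g_bar / A0)))

# A Milky-Way-like flat rotation speed of ~190 km/s implies M_b ~ 8e10 Msun:
print(f"BTFR baryonic mass: {btfr_mass(190):.2e} Msun")

# Limits of the RAR: Newtonian at high acceleration, MONDian at low.
print(rar_g_obs(1e-8))   # ~= g_bar when g_bar >> a0
print(rar_g_obs(1e-12))  # ~= sqrt(g_bar * a0) when g_bar << a0
```

The key point the sketch makes tangible is that both relations are fixed by a single acceleration scale a₀, with no per-galaxy freedom – which is why they constitute predictions rather than fits.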
Similarly for cosmology, the non-baryonic cold dark matter paradigm was launched by the observation that the dynamical mass density apparently exceeds that allowed for normal matter by primordial nucleosynthesis. This, together with the need to grow the observed large scale structure from the very smooth initial condition indicated by the cosmic microwave background (CMB), convinced nearly everyone (including myself) that there must be some new form of non-baryonic dark matter particle outside the realm of the Standard Model. Detailed observations of the power spectra of both galaxies and the CMB are important corroborating observations that did not yet exist at the time the idea took hold. We also got our predictions for these things very wrong initially, hence the need to change from Standard CDM to Lambda CDM.
Most of the people I have met who work on dark matter candidates seem to be well informed of cosmological constraints. In contrast, their knowledge of galaxy dynamics often seems to start and end with “rotation curves are flat.” There is quite a lot more to it than that. But, by and large, they stopped listening at “therefore we need dark matter” and were off and running with ideas for what it could be. There is a need to reassess the viability of these ideas in the light of the BTFR and the RAR.
People who work on galaxy dynamics are concerned with the obvious connections between dynamics and the observed stars and are inclined to be suspicious of the cosmological inference requiring non-baryonic dark matter. Over the years, I have repeatedly been approached by eminent dynamicists who have related in hushed tones, lest the cosmologists overhear, that the dark matter must be baryonic. I can understand their reticence, since I was, originally, one of those people who they didn’t want to have overhear. Baryonic dark matter was crazy – we need more mass than is allowed by big bang nucleosynthesis! I usually refrained from raising this issue, as I have plenty of reasons to sympathize, and try to be a sympathetic ear even when I don’t. I did bring it up in an extended conversation with Vera Rubin once, who scoffed that the theorists were too clever by half. She reckoned that if she could demonstrate that Ωm = 1 in baryons one day, that they would have somehow fixed nucleosynthesis by the next. Her attitude was well-grounded in experience.
A common attitude among advocates of non-baryonic dark matter is that the power spectrum of the CMB requires its existence. Fits to the data require a non-baryonic component at something like 100 sigma. That’s pretty significant evidence.
The problem with this attitude is that it assumes General Relativity (GR). That’s the theory in which the fits are made. There is, indeed, no doubt that the existence of cold dark matter is required in order to make the fits in the context of GR: it does not work without it. To take this as proof of the existence of cold dark matter is entirely circular logic. Indeed, that we have to invent dark matter as a tooth fairy to save GR might be interpreted as evidence against it, or at least as an indication that there might exist a still more general theory.
Nevertheless, I do have sympathy for the attitude that any idea that is going to work has to explain all the data – including both dynamical and cosmological evidence. Where one has to be careful is in assuming that the explanation we currently have is unique – so unique that no other theory could ever conceivably explain it. By that logic, MOND is the only theory that uniquely predicted both the BTFR and the RAR. So if we’re being even-handed, cold dark matter is ruled out by the dynamical relations identified after its invention at least as much as its competitors are excluded by the detailed, later measurement of the power spectrum of the CMB.
If we believe all the data, and hold all theories to the same high standard, none survive. Not a single one. A common approach seems to be to hold one’s favorite theory to a lower standard. I will not dignify that with a repudiation. The challenge with data, both astronomical and cosmological, is figuring out what to believe. It has gotten better, but you can’t rely on every measurement being right, or – harder to bear in mind – actually measuring what you want it to measure. Do the orbits of gas clouds in spiral galaxies trace the geodesics of test particles in perfectly circular motion? Does the assumption of hydrostatic equilibrium in the intracluster medium (ICM) of clusters of galaxies provide the same tracer of the gravitational potential as dynamics? There is an annoying offset in the acceleration scale measured by the two distinct methods. Is that real, or some systematic? It seems to be real, but it is also suspicious for appearing exactly where the change in method occurs.
The characteristic acceleration scale in extragalactic systems as a function of their observed baryonic mass. This is always close to the ubiquitous scale of 10⁻¹⁰ m/s/s first recognized by Milgrom. There is a persistent offset for clusters of galaxies that occurs where we switch from dynamical to hydrostatic tracers of the potential (Fig. 48 from Famaey & McGaugh 2012).
One will go mad trying to track down every conceivable systematic. Trust me, I’ve done the experiment. So an exercise I like to do is to ask what theory minimizes the amount of data I have to ignore. I spent several years reviewing all the data in order to do this exercise when I first got interested in this problem. To my surprise, it was MOND that did best by this measure, not dark matter. To this date, clusters of galaxies remain the most problematic for MOND in having a discrepant acceleration scale – a real problem that we would not hesitate to sweep under the rug if dark matter suffered it. For example, the offset the EAGLE simulation requires to [sort of] match the RAR is almost exactly the same amplitude as what MOND needs to match clusters. Rather than considering this to be a problem, they apply the required offset and call it natural to have missed by this much.
Most of the things we call evidence for dark matter are really evidence for the acceleration discrepancy. A mental hang up I had when I first came to the problem was that there’s so much evidence for dark matter. That is a misstatement stemming from the linguistic bias I noted earlier. There’s so much evidence for the acceleration discrepancy. I still see professionals struggle with this, often citing results as being contradictory to MOND that actually support it. They seem not to have bothered to check, as I have, and are content to repeat what they heard someone else assert. I sometimes wonder if the most lasting contribution to science made by the dark matter paradigm is as one giant Asch conformity experiment.
If we repeat today the exercise of minimizing the amount of data we have to disbelieve, the theory that fares best is the Aether Scalar Tensor (AeST) theory of Skordis & Zlosnik. It contains MOND in the appropriate limit while also providing an excellent fit to the power spectrum of galaxies and the CMB (see also the updated plots in their paper). Hybrid models struggle to do both while the traditional approach of simply adding mass in new particles does not provide a satisfactory explanation of the MOND phenomenology. They can be excluded unless we indulge in the special pleading that invokes feedback or other ad hoc auxiliary hypotheses. Similarly, more elaborate ideas like self-interacting dark matter were dead on arrival for providing a mechanism to solve the wrong problem: the cores inferred in dark matter halos are merely a symptom of the more general MONDian phenomenology; the proposed solution addresses the underlying disease about as much as a band-aid helps an amputation.
Does that mean AeST is the correct theory? Only in the sense that MOND was the best theory when I first did this exercise in the previous century. The needle has swung back and forth since then, so it might swing again. But I do hope that it is a step in a better direction.
Dark matter remains undetected in the laboratory. This has been true forever, so I don’t know what drives the timing of the recent spate of articles encouraging us to keep the faith, that dark matter is still a better idea than anything else. This depends on how we define “better.”
There is a long-standing debate in the philosophy of science about the relative merits of accommodation and prediction. A scientific theory should have predictive power. It should also explain all the relevant data. To do the latter almost inevitably requires some flexibility in order to accommodate things that didn’t turn out exactly as predicted. What is the right mix? Do we lean more towards prediction, or accommodation? The answer to that defines “better” in this context.
One of the recent articles is titled “The dark matter hypothesis isn’t perfect, but the alternatives are worse” by Paul Sutter. This perfectly encapsulates the choice one has to make in what is unavoidably a value judgement. Is it better to accommodate, or to predict (see the Spergel Principle)? Dr. Sutter comes down on the side of accommodation. He notes a couple of failed predictions of dark matter, but mentions no specific predictions of MOND (successful or not) while concluding that dark matter is better because it explains more.
One important principle in science is objectivity. We should be even-handed in the evaluation of evidence for and against a theory. In practice, that is very difficult. As I’ve written before, it made me angry when the predictions of MOND came true in my data for low surface brightness galaxies. I wanted dark matter to be right. I felt sure that it had to be. So why did this stupid MOND theory have any of its predictions come true?
One way to check your objectivity is to look at it from both sides. If I put on a dark matter hat, then I largely agree with what Dr. Sutter says. To quote one example:
The dark matter hypothesis isn’t perfect. But then again, no scientific hypothesis is. When evaluating competing hypotheses, scientists can’t just go with their guts, or pick one that sounds cooler or seems simpler. We have to follow the evidence, wherever it leads. In almost 50 years, nobody has come up with a MOND-like theory that can explain the wealth of data we have about the universe. That doesn’t make MOND wrong, but it does make it a far weaker alternative to dark matter.
OK, so now let’s put on a MOND hat. Can I make the same statement?
The MOND hypothesis isn’t perfect. But then again, no scientific hypothesis is. When evaluating competing hypotheses, scientists can’t just go with their guts, or pick one that sounds cooler or seems simpler. We have to follow the evidence, wherever it leads. In almost 50 years, nobody has detected dark matter, nor come up with a dark matter-based theory with the predictive power of MOND. That doesn’t make dark matter wrong, but it does make it a far weaker alternative to MOND.
So, which of these statements is true? Well, both of them. How do we weigh the various lines of evidence? Is it more important to explain a large variety of the data, or to be able to predict some of it? This is one of the great challenges when comparing dark matter and MOND. They are incommensurate: the set of relevant data is not the same for both. MOND makes no pretense to provide a theory of cosmology, so it doesn’t even attempt to explain much of the data so beloved by cosmologists. Dark matter explains everything, but, broadly defined, it is not a theory so much as an inference – assuming gravitational dynamics are inviolate, we need more mass than meets the eye. It’s a classic case of comparing apples and oranges.
While dark matter is a vague concept in general, one can build specific theories of dark matter that are predictive. Simulations with generic cold dark matter particles predict cuspy dark matter halos. Galaxies are thought to reside in these halos, which dominate their dynamics. This overlaps with the predictions of MOND, which follow from the observed distribution of normal matter. So, do galaxies look like tracer particles orbiting in cuspy halos? Or do their dynamics follow from the observed distribution of light via Milgrom’s strange formula? The relevant subset of the data very clearly indicate the latter. When head-to-head comparisons like this can be made, the a priori predictions of MOND win, hands down, over and over again. [If this statement sounds wrong, try reading the relevant scientific literature. Being an expert on dark matter does not automatically make one an expert on MOND. To be qualified to comment, one should know what predictive successes MOND has had. People who say variations of “MOND only fits rotation curves” are proudly proclaiming that they lack this knowledge.]
It boils down to this: if you want to explain extragalactic phenomena, use dark matter. If you want to make a prediction – in advance! – that will come true, use MOND.
A lot of the debate comes down to claims that anything MOND can do, dark matter can do better. Or at least as well. Or, if not as well, good enough. This is why conventionalists are always harping about feedback: it is the deus ex machina they invoke in any situation where they need to explain why their prediction failed. This does nothing to explain why MOND succeeded where they failed.
This post-hoc reasoning is profoundly unsatisfactory. Dark matter, being invisible, allows us lots of freedom to cook up an explanation for pretty much anything. My long-standing concern for the dark matter paradigm is not the failure of any particular prediction, but that, like epicycles, it has too much explanatory power. We could use it to explain pretty much anything. Rotation curves flat when they should be falling? Add some dark matter. No such need? No dark matter. Rising rotation curves? Sure, we could explain that too: add more dark matter. Only we don’t, because that situation doesn’t arise in nature. But we could if we had to. (See, e.g., Fig. 6 of de Blok & McGaugh 1998.)
There is no requirement in dark matter that rotation curves be as flat as they are. If we start from the prior knowledge that they are, then of course that’s what we get. If instead we independently try to build models of galactic disks in dark matter halos, very few of them wind up with realistic looking rotation curves. This shouldn’t be surprising: there are, in principle, an uncountably infinite number of combinations of galaxies and dark matter halos. Even if we impose some sensible restrictions (e.g., scaling the mass of one component with that of the other), we still don’t get it right. That’s one reason that we have to add feedback, which suffices according to some, and not according to others.
In contrast, the predictions of MOND are unique. The kinematics of an object follow from its observed mass distribution. The two are tied together by the hypothesized force law. There is a one-to-one relation between what you see and what you get.
From the perspective of building dark matter models, it’s like the proverbial needle in the haystack: the haystack is the volume of possible baryonic disk plus dark matter halo combinations; the one that “looks like” MOND is the needle. Somehow nature plucks the MOND-like needle out of the dark matter haystack every time it makes a galaxy.
The dark matter haystack. Galaxies might lie anywhere in this voluminous, multiparameter space, but in practice they inevitably seem to reside in the negligibly small part of the volume that “looks like” MOND.
Dr. Sutter says that we shouldn’t go with our gut. That’s exactly what I wanted to do, long ago, to maintain my preference for dark matter. I’d love to do that now so that I could stop having this argument with otherwise reasonable people.
Instead of going with my gut, I’m making a probabilistic statement. In Bayesian terms, the odds of observing MONDian behavior given the prior that we live in a universe made of dark matter are practically zero. In MOND, observing MONDian behavior is the only thing that can happen. That’s what we observe in galaxies, over and over again. Any information criterion shows a strong quantitative preference for MOND when dynamical evidence is considered. That does not happen when cosmological data are considered because MOND makes no prediction there. Concluding that dark matter is better overlooks the practical impossibility that MOND-like phenomenology is observed at all. Of course, once one knows this is what the data show, it seems a lot more likely, and I can see that effect in the literature over the long arc of scientific history. This is why, to me, predictive power is more important than accommodation: what we predict before we know the answer is more important than whatever we make up once the answer is known.
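In rough Bayesian terms, the way independent predictive successes compound can be sketched with toy numbers (the per-galaxy probabilities below are hypothetical, chosen only to illustrate the arithmetic, not measured values):

```python
import math

# Toy Bayes-factor sketch (all numbers hypothetical, for illustration only).
# MOND predicts MONDian kinematics with probability ~1 in every galaxy.
p_mond = 1.0
# Under generic dark matter, suppose a disk+halo combination "looks MONDian"
# only some small fraction of the time, say 10% (a generous, made-up value).
p_dm = 0.1
# Each galaxy is an independent test, so the likelihoods multiply.
n_galaxies = 100
log10_odds = n_galaxies * (math.log10(p_mond) - math.log10(p_dm))
print(f"log10 odds favoring MOND after {n_galaxies} galaxies: {log10_odds:.0f}")
# The point is not the particular number, but how quickly independent
# predictive successes compound into overwhelming odds.
```

With these toy inputs the posterior odds reach 10¹⁰⁰ to 1 after a hundred galaxies; even far more charitable per-galaxy probabilities for dark matter compound into enormous odds.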
The successes of MOND are sometimes minimized by lumping all galaxies into a single category. That’s not correct. Every galaxy has a unique mass distribution; each one is an independent test. The data for galaxies extend over a large dynamic range, from dwarfs to giants, from low to high surface brightness, from gas to star dominated cases. Dismissing this by saying “MOND only explains rotation curves” is like dismissing Newton for only explaining planets – as if every planet, moon, comet, and asteroid aren’t independent tests of Newton’s inverse square law.
Two galaxies with very different mass distributions. Neither are well explained by dark matter, which provides no reason for the detailed shapes encapsulated by Sancisi’s Law. In contrast, MOND describes these naturally: features in the rotation curves follow from those in the baryon distributions because the force law tells them to.
MOND does explain more than rotation curves. That was the first thing I checked. I spent several years looking at all of the data, and have reviewed the situation many times since. What I found surprising is how much MOND explains, if you let it. More disturbing was how often I came across claims in the literature that MOND was falsified by X only to try the analysis myself and find that, no, if you bother to do it right, that’s pretty much just what it predicts. Not in every case, of course – no hypothesis is perfect – but I stopped bothering after several hundred cases. Literally hundreds. I can’t keep up with every new claim, and it isn’t my job to do so. My experience has been that as the data improve, so too does their agreement with MOND.
Dr. Sutter’s article goes farther, repeating a common misconception that “the tweaking of gravity under MOND is explicitly designed to explain the motions of stars within galaxies.” This is an overstatement so strong as to be factually wrong. MOND was explicitly designed to produce flat rotation curves – as was dark matter. However, there is a lot more to it than that. Once we write down the force law, we’re stuck with it. It has lots of other unavoidable consequences that lead to genuine predictions. Milgrom explicitly laid out what these consequences would be, and basically all of them have subsequently been observed. I include a partial table in my last review; it only ends where it does because I had to stop somewhere. These were genuine, successful, a priori predictions – the gold standard in science. Some of them can be explained with dark matter, but many cannot: they make no sense, and dark matter can only accommodate them thanks to its epic flexibility.
Dr. Sutter makes a number of other interesting points. He says we shouldn’t “pick [a hypothesis] that sounds cooler or seems simpler.” I’m not sure which seems cooler here – a universe pervaded by a mysterious invisible mass that we can’t [yet] detect in the laboratory but nevertheless controls most of what goes on out there seems pretty cool to me. That there might also be some fundamental aspect of the basic theory of gravitational dynamics that we’re missing also seems like a pretty cool possibility. Those are purely value judgments.
Simplicity, however, is a scientific value known as Occam’s razor. The simpler of competing theories is to be preferred. That’s clearly MOND: we make one adjustment to the force law, and that’s it. What we lack is a widely accepted, more general theory that encapsulates both MOND and General Relativity.
In dark matter, we multiply entities unnecessarily – there is extra mass composed of unknown particles that have no place in the Standard Model of particle physics (which is quite full up) so we have to imagine physics beyond the standard model and perhaps an entire dark sector because why just one particle when 85% of the mass is dark? and there could also be dark photons to exchange forces that are only active in the dark sector as well as entire hierarchies of dark particles that maybe have their own ecosystem of dark stars, dark planets, and maybe even dark people. We, being part of the “normal” matter, are just a minority constituent of this dark universe; a negligible bit of flotsam compared to the dark sector. Doesn’t it make sense to imagine that the dark sector has as rich and diverse a set of phenomena as the “normal” sector? Sure – if you don’t mind abandoning Occam’s razor. Note that I didn’t make any of this stuff up; everything I said in that breathless run-on sentence I’ve heard said by earnest scientists enthusiastic about how cool the dark sector could be. Bugger Occam.
There is also the matter of timescales. Dr. Sutter mentions that “In almost 50 years, nobody has come up with a MOND-like theory” that does all that we need it to do. That’s true, but for the typo. Next year (2023) will mark the 40th anniversary of Milgrom’s first publications on MOND, so it hasn’t been half a century yet. But I’ve heard recurring complaints to this effect before, that finding the deeper theory is taking too long. Let’s examine that, shall we?
First, remember some history. When Newton introduced his inverse square law of universal gravity, it was promptly criticized as a form of magical thinking: How, Sir, can you have action at a distance? The conception at the time was that you had to be in physical contact with an object to exert a force on it. For the sun to exert a force on the earth, or the earth on the moon, seemed outright magical. Leibniz famously accused Newton of introducing ‘occult’ forces. As a consequence, Newton was careful to preface his description of universal gravity as everything happening as if the force was his famous inverse square law. The “as if” is doing a lot of work here, basically saying, in modern parlance, “OK, I don’t get how this is possible, I know it seems really weird, but that’s what it looks like.” I say the same about MOND: galaxies behave as if MOND is the effective force law. The question is why.
As near as I can tell from reading the history, it looks like it took about 20 years for Newton to realize that there was a good geometric reason for the inverse square law. We expect our freshman physics students to see that immediately. Obviously Newton was smarter than the average freshman, so why’d it take so long? Was he, perhaps, preoccupied with the legitimate-seeming criticisms of action at a distance? It is hard to see past a fundamental stumbling block like that, and I wonder if the situation now is analogous. Perhaps we are missing something now that will seem obvious in retrospect, distracted by criticisms that will seem absurd in the future.
Many famous scientists built on the dynamics introduced by Newton. The Poisson equation isn’t named the Newton equation because Newton didn’t come up with it even though it is fundamental to Newtonian dynamics. Same for the Lagrangian. And the classical Hamiltonian. These developments came many decades after Newton himself, and required the efforts of many brilliant scientists integrated over a lot of time. By that standard, forty years seems pretty short: one doesn’t arrive at a theory of everything overnight.
What is the right measure? The integrated effort of the scientific community is more relevant than absolute time. Over the past forty years, I’ve seen a lot of push back against even considering MOND as a legitimate theory. Don’t talk about that! This isn’t exactly encouraging, so not many people have worked on it. I can count on my fingers the number of people who have made important contributions to the theoretical development of MOND. (I am not one of them. I am an observer following the evidence, wherever it leads, even against my gut feeling and to the manifest detriment of my career.) It is hard to make progress without a critical mass of people working on a problem.
Of course, people have been looking for dark matter for those same 40 years. More, really – if you want to go back to Oort and Zwicky, it has been 90 years. But for the first half century of dark matter, no one was looking hard for it – it took that long to gel as a serious problem. These things take time.
Nevertheless, for several decades now there has been an enormous amount of effort put into all aspects of the search for dark matter: experimental, observational, and theoretical. There is and has been a critical mass of people working on it for a long time. There have been thousands of talented scientists who have contributed to direct detection experiments in dozens of vast underground laboratories, who have combed through data from X-ray and gamma-ray observatories looking for the telltale signs of dark matter decay or annihilation, who have checked for the direct production of dark matter particles in the LHC; even theorists who continue to hypothesize what the heck the dark matter could be and how we might go about detecting it. This research has been well funded, with billions of dollars having been spent in the quest for dark matter. And what do we have to show for it?
Zero. Nada. Zilch. Squat. A whole lot of nothing.
This is equal to the amount of funding that goes to support research on MOND. There is no faster way to get a grant proposal rejected than to say nice things about MOND. So on the one hand, we have a small number of people working on the proverbial shoestring, while on the other, we have a huge community that has poured vast resources into the attempt to detect dark matter. If we really believe it is taking too long, perhaps we should try funding MOND as generously as we do dark matter.
I went on a bit of a twitter bender yesterday about the early claims about high mass galaxies at high redshift, which went on long enough I thought I should share it here.
For those watching the astro community freak out about bright, high redshift galaxies being detected by JWST, some historical context in an amusing anecdote…
The 1998 October conference was titled “After the dark ages, when galaxies were young (the universe at 2 < z < 5).” That right there tells you what we were expecting. Redshift 5 was high – when the universe was a mere billion years old. Before that, not much going on (dark ages).
This was when the now famous SN Ia results corroborating the acceleration of the expansion rate predicted by concordance LCDM were shiny and new. Many of us already strongly suspected we needed to put the Lambda back in cosmology; the SN results sealed the deal.
One of the many lines of evidence leading to the rehabilitation of Lambda – previously anathema – was that we needed a bit more time to get observed structures to form. One wants the universe to be older than its contents, an off and on problem with globular clusters for forever.
A natural question that arises is just how early do galaxies form? The horizon of z=7 came up in discussion at lunch, with those of us who were observers wondering how we might access that (JWST being the answer long in the making).
Famed simulator Carlos Frenk was there, and assured us not to worry. He had already done LCDM simulations, and knew the timing.
“There is nothing above redshift 7.”
He also added “don’t quote me on that,” which I’ve respected until now, but I think the statute of limitations has expired.
Everyone present immediately pulled out their wallet and chipped in $5 to endow the “7-up” prize for the first persuasive detection of an object at or above redshift seven.
A committee was formed to evaluate claims that might appear in the literature, composed of Carlos, Vera Rubin, and Bruce Partridge. They made it clear that they would require a high standard of evidence: at least two well-identified lines; no dropouts or photo-z’s.
That standard wasn’t met for over a decade, with z=6.96 being the record holder for a while. The 7-up prize was entirely tongue in cheek, and everyone forgot about it. Marv Leventhal had offered to hold the money; I guess he ended up pocketing it.
I believe the winner of the 7-up prize should have been Nial Tanvir for GRB090423 at z~8.2, but I haven’t checked if there might be other credible claims, and I can’t speak for the committee.
At any rate, I don’t think anyone would now seriously dispute that there are galaxies at z>7. The question is how big do they get, how early? And the eternal mobile goalpost, what does LCDM really predict?
Carlos was not wrong. There is no hard cutoff, so I won’t quibble about arbitrary boundaries like z=7. It takes time to assemble big galaxies, & LCDM does make a reasonably clear prediction about the timeline for that to occur. Basically, they shouldn’t be all that big that soon.
Here is a figure adapted from the thesis Jay Franck wrote here 5 years ago using Spitzer data (round points). It shows the characteristic brightness (Schechter M*) of galaxies as a function of redshift. The data diverge from the LCDM prediction (squares) as redshift increases.
The divergence happens because real galaxies are brighter (more stellar mass has assembled into a single object) than predicted by the hierarchical timeline expected in LCDM.
Remarkably, the data roughly follow the green line, which is an L* galaxy magically put in place at the inconceivably high redshift of z=10. Galaxies seem to have gotten big impossibly early. This is why you see us astronomers flipping our lids at the JWST results. Can’t happen.
Except that it can, and was predicted to do so by Bob Sanders a quarter century ago: “Objects of galaxy mass are the first virialized objects to form (by z=10) and larger structure develops rapidly.”
The reason is MOND. After decoupling, the baryons find themselves bereft of radiation support and suddenly deep in the low acceleration regime. Structure grows fast and becomes nonlinear almost immediately. It’s as if there is tons more dark matter than we infer nowadays.
I refereed that paper, and was a bit disappointed that Bob had beat me to it: I was doing something similar at the time, with similar results. Instead of being hard to form structure quickly as in LCDM, it’s practically impossible to avoid in MOND.
He beat me to it, so I abandoned writing that paper. No need to say the same thing twice! Didn’t think we’d have to wait so long to test it.
I’ve reviewed this many times. Most recently in January, in anticipation of JWST, on my blog.
But you get the point. Every time you see someone describe the big galaxies JWST is seeing as unexpected, what they mean is unexpected in LCDM. It doesn’t surprise me at all. It is entirely expected in MOND, and was predicted a priori.
The really interesting thing to me, though, remains what LCDM really predicts. I already see people rationalizing excuses. I’ve seen this happen before. Many times. That’s why the field is in a rut.
Progress towards the dark land.
So are we gonna talk our way out of it this time? I’m no longer interested in how; I’m sure someone will suggest something that will gain traction no matter how unsatisfactory.
Special pleading.
The only interesting question is if LCDM makes a prediction here that can’t be fudged. If it does, then it can be falsified. If it doesn’t, it isn’t science.
Experimentalist with no clue what he has signed up for about to find out how hard it is to hunt down an invisible target.
But can we? Is LCDM subject to falsification? Or will we yet again gaslight ourselves into believing that we knew it all along?
In previous posts, I briefly described some of the results that provoked a crisis of faith in the mid-1990s. Up until that point, I was an ardent believer in the cold dark matter paradigm. But it no longer made sense as an explanation for galaxy dynamics. It didn’t just not make sense, it seemed strewn with self-contradictions, all of which persist to this day.
Amidst this crisis of faith, there came a chance meeting in Middle-Earth: Moti Milgrom visited Cambridge, where I was a postdoc at the time, and gave a talk. I almost didn’t go to this talk because it had modified gravity in the title and who wanted to waste their time listening to that nonsense? I had yet to make any connection between the self-contradictions the data posed for dark matter and something as dire as an entirely different paradigm.
Despite my misgivings, I did go to Milgrom’s talk. Not knowing that I was there or what I worked on, he casually remarked on some specific predictions for low surface brightness galaxies. These sounded like what I was seeing, in particular the things that were most troublesome for the dark matter interpretation. I became interested.
Long story short, it is a case in which, had MOND not already existed, we would have had to invent it. As Sherlock Holmes famously put it
When you have eliminated the impossible, whatever remains, however improbable, must be the truth.
Sir Arthur Conan Doyle
Modified Newtonian Dynamics
There is one and only one theory that predicted in advance the observations described above: the Modified Newtonian Dynamics (MOND) introduced by Milgrom (1983a,b,c). MOND is an extension of Newtonian theory (Milgrom, 2020). It is not a generally covariant theory, so is not, by itself, a complete replacement for General Relativity. Nevertheless, it makes unique, testable predictions within its regime of applicability (McGaugh, 2020).
The basic idea of MOND is that the force law is modified at an acceleration scale, a0. For large accelerations, g ≫ a0, everything is normal and Newtonian: g = gN, where gN is the acceleration predicted by the observed luminous mass distribution obtained by solving the Poisson equation. At low accelerations, the effective acceleration tends towards the limit
g → √(a0gN) for g ≪ a0 (5)
(Bekenstein & Milgrom, 1984; Milgrom, 1983c). This limit is called the deep MOND regime in contrast to the Newtonian regime at high accelerations. The two regimes are smoothly connected by an interpolation function μ(g/a0) that is not specified (Milgrom, 1983c).
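The limiting behavior is easy to verify numerically. The sketch below uses the “simple” interpolation function μ(x) = x/(1+x), one common choice from the literature (the theory itself does not specify μ), for which the relation g μ(g/a0) = gN inverts in closed form:

```python
import math

A0 = 1.2e-10  # Milgrom's constant in m/s^2 (Begeman et al. 1991 value)

def mond_g(gN, a0=A0):
    """Effective acceleration g from the Newtonian acceleration gN, using
    the 'simple' interpolation function mu(x) = x/(1+x). Solving
    g*mu(g/a0) = gN gives g = (gN + sqrt(gN^2 + 4*a0*gN)) / 2."""
    return 0.5 * (gN + math.sqrt(gN**2 + 4.0 * a0 * gN))

# High-acceleration (Newtonian) regime: g -> gN
gN_high = 1e-7  # ~1000 a0
print(mond_g(gN_high) / gN_high)                 # ratio close to 1

# Deep-MOND regime: g -> sqrt(a0 * gN), as in eq. (5)
gN_low = 1e-13  # ~a0/1000
print(mond_g(gN_low) / math.sqrt(A0 * gN_low))   # ratio close to 1
```

The two print statements confirm that this μ smoothly connects the Newtonian and deep MOND regimes; other interpolation functions in the literature differ only in the transition region.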
The motivation to make an acceleration-based modification is to explain flat rotation curves (Bosma, 1981; Rubin et al., 1978) that also gives a steep Tully-Fisher relation similar to that which is observed (Aaronson et al., 1979). A test particle in a circular orbit around a point mass Mp in the deep MOND regime (eq. (5)) will experience a centripetal acceleration
Vc²/R = √(a0GMp/R²). (6)
Note that the radius R cancels out, so eq. (6) reduces to
Vc⁴ = a0GMp (7)
which the reader will recognize as the Baryonic Tully-Fisher relation
Mb = A Vf⁴ (8)
with A = ζ/(a0G) where ζ is a geometrical factor of order unity.
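As a numerical sanity check on the algebra, the normalization A = ζ/(a0G) can be evaluated directly with the standard literature values ζ = 0.8 and a0 = 1.2 × 10⁻¹⁰ m s⁻² (the example galaxy mass is hypothetical):

```python
# Numeric check of the BTFR normalization A = zeta / (a0 * G),
# using standard literature values (a sketch, not a fit).
G = 6.674e-11        # m^3 kg^-1 s^-2
a0 = 1.2e-10         # m s^-2 (Begeman et al. 1991)
zeta = 0.8           # geometrical factor of order unity
M_SUN = 1.989e30     # kg

A_SI = zeta / (a0 * G)                 # kg m^-4 s^4
A = A_SI / M_SUN * 1e12                # M_sun km^-4 s^4 (1 km^4 = 1e12 m^4)
print(f"A = {A:.0f} Msun km^-4 s^4")   # comes out to ~50

# Predicted flat rotation speed for a hypothetical Mb = 1e10 Msun galaxy:
Mb = 1e10
Vf = (Mb / A) ** 0.25
print(f"Vf = {Vf:.0f} km/s")
```

The first number reproduces the A ≈ 50 M⊙ km⁻⁴ s⁴ normalization quoted below from fundamental constants alone; the second shows how a single measured baryonic mass fixes the flat rotation speed with no further freedom.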
This simple math explains the flatness of rotation curves. This is not a prediction; it was an input that motivated the theory, as it motivated dark matter. Unlike dark matter, in which rotation curves might rise or fall, the rotation curves of isolated galaxies must tend towards asymptotic flatness.
MOND also explains the Tully-Fisher relation. Indeed, there are several distinct aspects to this prediction. That the relation exists at all is a strong prediction. Fundamentally, the Baryonic Tully-Fisher Relation (BTFR) is a relation between the baryonic mass of a galaxy and its flat rotation speed. There is no dark matter involved: Vf is not a property of a dark matter halo, but of the galaxy itself.
One MOND prediction is the slope of the BTFR: the power law scaling M ~ V^x has x = 4 exactly. While the infrared data of Aaronson et al. (1979) suggested such a slope, the exact value was not well constrained at that time. It was not until later that Tully-Fisher was empirically recognized as a relation driven by baryonic mass (McGaugh et al., 2000), as anticipated by MOND. Moreover, the slope is only four when a good measurement of the flat rotation velocity is available (Verheijen, 2001; McGaugh, 2005, 2012); common proxies like the line-width only crudely approximate the result and typically return shallower slopes (e.g., Zaritsky et al., 2014), as do samples of limited dynamic range (e.g., Pizagno et al., 2007). The latter are common in the literature: selection effects strongly favor bright galaxies, and the majority of published Tully-Fisher relations are dominated by high mass galaxies (M∗ > 10¹⁰ M⊙). Consequently, the behavior of the Baryonic Tully-Fisher relation remains somewhat controversial to this day (e.g., Mancera Piña et al., 2019; Ogle et al., 2019). This appears to be entirely a matter of data quality (McGaugh et al., 2019). The slope of the relation is indistinguishable from 4 when a modicum of quality control is imposed (Lelli et al., 2016b; McGaugh, 2005, 2012; Schombert et al., 2020; Stark et al., 2009; Trachternach et al., 2009). Indeed, only a slope of four successfully predicted the rotation speeds of low mass galaxies (Giovanelli et al., 2013; McGaugh, 2011).
Another aspect of the Tully-Fisher relation is its normalization. This is set by fundamental constants: Newton’s constant, G, and the acceleration scale of MOND, a0. For ζ = 0.8, A = 50 M⊙ km⁻⁴ s⁴. However, there is no theory that predicts the value of a0, which has to be set by the data. Moreover, this scale is distance-dependent, so the precise value of a0 varies with adjustments to the distance scale. For this reason, in part, the initial estimate of a0 = 2 × 10⁻¹⁰ m s⁻² of Milgrom (1983a) was a bit high. Begeman et al. (1991) used the best data then available to obtain a0 = 1.2 × 10⁻¹⁰ m s⁻². The value of Milgrom’s acceleration constant has not varied meaningfully since then (Famaey and McGaugh, 2012; Li et al., 2018; McGaugh, 2011; McGaugh et al., 2016; Sanders and McGaugh, 2002). This is a consistency check, but not a genuine prediction.
An important consequence of MOND is that the Tully-Fisher relation is absolute: it should have no dependence on size or surface brightness (Milgrom, 1983a). The mass of baryons is the only thing that sets the flat amplitude of the rotation speed. It matters not at all how those baryons are distributed. MOND was the only theory to correctly predict this in advance of the observation (McGaugh and de Blok, 1998b). The fine-tuning problem that we face conventionally is imposed by this otherwise unanticipated result.
The absolute nature of the Tully-Fisher relation in MOND further predicts that it has no physical residuals whatsoever. That is to say, scatter around the relation can only be caused by observational errors and scatter in the mass-to-light ratios of the stars. The latter is an irreducible unknown: we measure the luminosity produced by the stars in a galaxy, but what we need to know is the mass of those stars. The conversion between them can never be perfect, and inevitably introduces some scatter into the relation. Nevertheless, we can make our best effort to account for known sources of scatter. Between scatter expected from observational uncertainties and that induced by variations in the mass-to-light ratio, the best data are consistent with the prediction of zero intrinsic scatter (McGaugh, 2005, 2012; Lelli et al., 2016b, 2019). Of course, it is impossible to measure zero, but it is possible to set an upper limit on the intrinsic scatter that is very tight by extragalactic standards (<6%; Lelli et al., 2019). This leaves very little room for variations beyond the inevitable impact of the stellar mass-to-light ratio. The scatter is no longer entirely accounted for when lower quality data are considered (McGaugh, 2012), but this is expected in astronomy: lower quality data inevitably admit systematic uncertainties that are not readily accounted for in the error budget.
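The scatter budget described here is quadrature arithmetic: the intrinsic scatter is whatever variance remains after the known sources are subtracted. A minimal sketch, with placeholder numbers rather than measured values:

```python
import math

def intrinsic_scatter(sigma_obs, sigma_err, sigma_ml):
    """Subtract known scatter sources (observational errors and
    mass-to-light ratio variations) in quadrature from the observed
    scatter; returns the residual intrinsic scatter, or 0 if the
    known sources already account for everything."""
    var = sigma_obs**2 - sigma_err**2 - sigma_ml**2
    return math.sqrt(var) if var > 0 else 0.0

# Hypothetical dex-scale numbers, purely for illustration:
print(intrinsic_scatter(sigma_obs=0.10, sigma_err=0.07, sigma_ml=0.06))
```

Because the subtraction is in quadrature, the known sources can fully consume the observed scatter, which is why only an upper limit on the intrinsic term can be quoted.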
Milgrom (1983a) made a number of other specific predictions. In MOND, the acceleration expected for kinematics follows from the surface density of baryons. Consequently, low surface brightness means low acceleration. Interpreted in terms of conventional dynamics, the prediction is that the ratio of dynamical mass to light, Mdyn/L should increase as surface brightness decreases. This happens both globally — LSB galaxies appear to be more dark matter dominated than HSB galaxies (see Fig. 4(b) of McGaugh and de Blok, 1998a), and locally — the need for dark matter sets in at smaller radii in LSB galaxies than in HSB galaxies (Figs. 3 and 14 of McGaugh and de Blok, 1998b; Famaey and McGaugh, 2012, respectively).
One may also test this prediction by plotting the rotation curves of galaxies binned by surface brightness: acceleration should scale with surface brightness. It does (Figs. 4 and 16 of McGaugh and de Blok, 1998b; Famaey and McGaugh, 2012, respectively). This observation has been confirmed by near-infrared data. The systematic variation of color coded surface brightness is already obvious with optical data, as in Fig. 15 of Famaey and McGaugh (2012), but these suffer some scatter from variations in the stellar mass-to-light ratio. These practically vanish with near-infrared data, which provide such a good tracer of the surface mass density of stars that the equivalent plot is a near-perfect rainbow (Fig. 3 of both McGaugh et al., 2019; McGaugh, 2020). The data strongly corroborate the prediction of MOND that acceleration follows from baryonic surface density.
The central density relation (Fig. 6 of Lelli et al., 2016c) was also predicted by MOND (Milgrom, 2016). Both the shape and the amplitude of the correlation are correct. Moreover, the surface density Σ† at which the data bend follows directly from the acceleration scale of MOND: a0 = GΣ†. This surface density also corresponds to the stability limit for disks (Brada & Milgrom, 1999; Milgrom, 1989). The scale we had to insert by hand in dark matter models is a consequence of MOND.
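The characteristic surface density follows from a0 and G alone. As a quick sanity check of the implied value, using the conventional a0 ≈ 1.2 × 10⁻¹⁰ m s⁻²:

```python
G    = 6.674e-11   # m^3 kg^-1 s^-2
A0   = 1.2e-10     # m s^-2, the MOND acceleration scale
MSUN = 1.989e30    # kg
PC   = 3.086e16    # m

sigma_dagger = A0 / G                            # kg m^-2, from a0 = G * Sigma
sigma_msun_pc2 = sigma_dagger / (MSUN / PC**2)   # convert to Msun per pc^2
print(f"{sigma_msun_pc2:.0f} Msun/pc^2")         # → 861 Msun/pc^2
```

This is of order the surface density where the central density relation bends, a few hundred solar masses per square parsec.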
Since MOND is a force law, the entirety of the rotation curve should follow from the baryonic mass distribution. The stellar mass-to-light ratio can modulate the amplitude of the stellar contribution to the rotation curve, but not its shape, which is specified by the observed distribution of light. Consequently, there is rather limited freedom in fitting rotation curves.
Example fits are shown in Fig. 8. The procedure is to construct Newtonian mass models by numerically solving the Poisson equation to determine the gravitational potential that corresponds to the observed baryonic mass distribution. Indeed, it is important to make a rigorous solution of the Poisson equation in order to capture details in the shape of the mass distribution (e.g., the wiggles in Fig. 8). Common analytic approximations like the exponential disk assume these features out of existence. Building proper mass models involves separate observations for the stars, conducted at optical or near-infrared wavelengths, and the gas of the interstellar medium, which is traced by radio wavelength observations. It is sometimes necessary to consider separate mass-to-light ratios for the stellar bulge and disk components, as there can be astrophysical differences between these distinct stellar populations (Baade, 1944). This distinction applies in any theory.
Fig. 8. Example rotation curve fits. MOND fits (heavy solid lines: Li et al., 2018) to the rotation curves of a bright, star-dominated galaxy (UGC 2953, left panel) and a faint, gas-dominated galaxy (DDO 64, right panel). The thin solid lines show the Newtonian expectation, which is the sum of the atomic gas (dotted lines), stellar disk (dashed lines), and stellar bulge (dash-dotted line; present only in UGC 2953). Note the different scales: UGC 2953 is approximately 400 times more massive than DDO 64.
The gravitational potential of each baryonic component is represented by the circular velocity of a test particle in Fig. 8. The amplitude of the rotation curve of the mass model for each stellar component scales as the square root of its mass-to-light ratio. There is no corresponding mass-to-light ratio for the gas of the interstellar medium as there is a well-understood relation between the observed flux at 21 cm and the mass of hydrogen atoms that emit it (Draine, 2011). Consequently, the line for the gas components in Fig. 8 is practically fixed.
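A minimal sketch of how these pieces combine into a fitted velocity, assuming the "simple" interpolation function μ(x) = x/(1+x) (one common choice among several used in the literature). The component velocities and mass-to-light ratios below are made-up illustrative numbers, not a fit to any real galaxy:

```python
import math

A0  = 1.2e-10    # m s^-2, MOND acceleration scale (held fixed)
KPC = 3.086e19   # m per kiloparsec
KMS = 1.0e3      # m/s per km/s

def v_mond(r_kpc, v_gas, v_disk, v_bulge=0.0, ml_disk=0.5, ml_bulge=0.7):
    """Predicted rotation speed (km/s) at radius r_kpc from the Newtonian
    component speeds (km/s). Each stellar component's velocity scales as
    the square root of its mass-to-light ratio; the gas term is fixed."""
    v_bar2 = v_gas**2 + ml_disk * v_disk**2 + ml_bulge * v_bulge**2
    g_bar = v_bar2 * KMS**2 / (r_kpc * KPC)       # Newtonian acceleration
    y = g_bar / A0
    nu = 0.5 * (1.0 + math.sqrt(1.0 + 4.0 / y))   # inverts mu(x) = x/(1+x)
    g_obs = nu * g_bar                            # MOND acceleration
    return math.sqrt(g_obs * r_kpc * KPC) / KMS   # back to km/s

# Made-up component speeds at r = 10 kpc:
print(f"{v_mond(10.0, v_gas=40.0, v_disk=120.0):.0f} km/s")   # → 152 km/s
```

The only real knobs are the stellar mass-to-light ratios; the gas contribution and the shape of each component are fixed by observation, which is why the fits are so constrained.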
In addition to the mass-to-light ratio, there are two “nuisance” parameters that are sometimes considered in MOND fits: distance and inclination. These are known from independent observations, but those observations of course carry some uncertainty. Consequently, the best MOND fit sometimes occurs for slightly different values of the distance and inclination, within their observational uncertainties (Begeman et al., 1991; de Blok & McGaugh, 1998; Sanders, 1996).
Distance matters because it sets the absolute scale. The further a galaxy, the greater its mass for the same observed flux. The distances to individual galaxies are notoriously difficult to measure. Though usually not important, small changes to the distance can occasionally have powerful effects, especially in gas rich galaxies. Compare, for example, the fit to DDO 154 by Li et al. (2018) to that of Ren et al. (2019).
Inclinations matter because we must correct the observed velocities for the inclination of each galaxy as projected on the sky. The inclination correction is V = Vobs/sin(i), so it is small at large inclinations (edge-on) but large at small inclinations (face-on). For this reason, dynamical analyses often impose an inclination limit. This is an issue in any theory, but MOND is particularly sensitive: since M ∝ V⁴, any error in the inclination is amplified to the fourth power (see Fig. 2 of de Blok & McGaugh, 1998). Worse, inclination estimates can suffer systematic errors (de Blok & McGaugh, 1998; McGaugh, 2012; Verheijen, 2001): a galaxy seen nearly face-on may have an oval distortion that makes it look more inclined than it is, but it can’t be more face-on than face-on.
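Propagating a small inclination error δi through V = Vobs/sin(i) and M ∝ V⁴ gives δM/M ≈ 4 cot(i) δi. A short illustration of how the same angular uncertainty plays out edge-on versus face-on:

```python
import math

def mass_error_frac(incl_deg, incl_err_deg):
    """Fractional mass error from an inclination error, given
    V = Vobs/sin(i) and M ∝ V^4: dM/M ≈ 4 * cot(i) * di."""
    i  = math.radians(incl_deg)
    di = math.radians(incl_err_deg)
    return 4.0 * di / math.tan(i)

# The same 3-degree uncertainty is benign edge-on, severe face-on:
print(f"i=80 deg: {mass_error_frac(80, 3):.1%}")   # → 3.7%
print(f"i=30 deg: {mass_error_frac(30, 3):.1%}")   # → 36.3%
```

This is why dynamical analyses impose inclination limits, and why MOND fits are more sensitive to inclination errors than fits with flexible dark matter halos.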
MOND fits will fail if either the distance or inclination is wrong. Such problems cannot be discerned in fits with dark matter halos, which have ample flexibility to absorb the imparted variance (see Fig. 6 of de Blok & McGaugh, 1998). Consequently, a fit with a dark matter halo will not fail if the distance happens to be wrong; we just won’t notice it.
The best-fit mass-to-light ratios found in MOND rotation curve fits can be checked against independent stellar population models. There is no guarantee that this procedure will return plausible values for the stellar mass-to-light ratio. Nevertheless, MOND fits recover the amplitude that is expected for stellar populations, the expected variation with color, and the band-dependent scatter (e.g., Fig. 28 of Famaey and McGaugh, 2012). Indeed, to a good approximation, the rotation curve can be predicted directly from near-infrared data (McGaugh, 2020; Sanders and Verheijen, 1998) modulo only the inevitable scatter in the mass-to-light ratio. This is a spectacular success of the paradigm that is not shared by dark matter fits (de Blok et al., 2003; de Blok & McGaugh, 1997; Kent, 1987).
Gas rich galaxies provide an even stronger test. When gas dominates the mass budget, the mass-to-light ratio of the stars ceases to have much leverage on the fit. There is no fitting parameter for gas equivalent to the mass-to-light ratio for stars: the gas mass follows directly from the observations. This enables MOND to predict the locations of such galaxies in the Baryonic Tully-Fisher plane (McGaugh, 2011) and essentially their full rotation curves (Sanders, 2019) with no free parameters (McGaugh, 2020).
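In the deep-MOND limit this prediction reduces to Vf⁴ = G a0 Mb, so the flat rotation speed follows from the baryonic mass with no free parameters. A toy example for a hypothetical gas-dominated galaxy:

```python
G    = 6.674e-11   # m^3 kg^-1 s^-2
A0   = 1.2e-10     # m s^-2, MOND acceleration scale
MSUN = 1.989e30    # kg

def vf_predicted(mb_msun):
    """Flat rotation speed (km/s) predicted by MOND from the
    baryonic mass alone: Vf^4 = G * a0 * Mb."""
    return (G * A0 * mb_msun * MSUN) ** 0.25 / 1.0e3

# A hypothetical galaxy with Mb = 1e10 Msun:
print(f"{vf_predicted(1e10):.0f} km/s")   # → 112 km/s
```

For a gas-dominated galaxy, Mb comes almost entirely from the 21 cm flux, so this number is a genuine a priori prediction rather than a fit.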
It should be noted that the acceleration scale a0 is kept fixed when fitting rotation curves. If one allows a0 to vary, both it and the mass-to-light ratio spread over an unphysically large range of values (Li et al., 2018). The two are highly degenerate, causing such fits to be meaningless (Li et al., 2021): the data do not have the power to constrain multiple parameters per galaxy.
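The degeneracy is easy to see in the deep-MOND limit, where Vf⁴ = G a0 M: the data constrain only the product a0 × M, so doubling a0 while halving the mass (equivalently, the mass-to-light ratio) leaves the predicted velocity unchanged. A toy illustration:

```python
import math

G = 6.674e-11   # m^3 kg^-1 s^-2

def vf_deep_mond(a0, mass_kg):
    """Deep-MOND flat velocity (m/s): Vf = (G * a0 * M)^(1/4)."""
    return (G * a0 * mass_kg) ** 0.25

m  = 5e40                            # kg, an arbitrary baryonic mass
v1 = vf_deep_mond(1.2e-10, m)
v2 = vf_deep_mond(2.4e-10, m / 2)    # double a0, halve the mass
print(math.isclose(v1, v2, rel_tol=1e-12))   # → True: only a0*M matters
```

This is why letting a0 float per galaxy yields meaningless, unphysically broad parameter ranges: a0 must be treated as a universal constant, fixed once for all galaxies.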
Table 2 lists the successful predictions of MOND that are discussed here. A more comprehensive list is given by Famaey and McGaugh (2012) and McGaugh (2020) who also discuss some of the problems posed for dark matter. MOND has had many predictive successes beyond rotation curves (e.g., McGaugh and Milgrom, 2013a,b; McGaugh, 2016) and has inspired successful predictions in cosmology (e.g., Sanders, 1998; McGaugh, 1999, 2000; Sanders, 2001; McGaugh, 2015, 2018). In this context, it makes sense to associate LSB galaxies with low density fluctuations in the initial conditions, thereby recovering the success of DD while its ills are cured by the modified force law. Galaxy formation in general is likely to proceed hierarchically but much more rapidly than in ΛCDM (Sanders, 2001; Stachniewicz and Kutschera, 2001), providing a natural explanation for both the age of stars in elliptical galaxies and allowing for a subsequent settling time for the disks of spiral galaxies (Wittenburg et al., 2020).
Prediction                                    Observation
--------------------------------------------  -----------
Tully-Fisher Relation:
  Slope = 4                                   +
  No size or surface brightness residuals     +
Mdyn/L depends on surface brightness          +
Central density relation                      +
Rotation curve fits                           +
Stellar population mass-to-light ratios       +
Mb alone specifies Vf                         +

Table 2. Predictions of MOND. A “+” indicates a prediction corroborated by observation.
The expert cosmologist may object that there is a great deal more data that must be satisfied. These have been reviewed elsewhere (Bekenstein, 2006; Famaey and McGaugh, 2012; McGaugh, 2015; Sanders and McGaugh, 2002) and are beyond the scope of this discussion. Here I note only that my experience has been that reports of MOND’s falsification are greatly exaggerated. Indeed, it has a great deal more explanatory power for a wider variety of phenomena than is generally appreciated (McGaugh and de Blok, 1998a,b).
The most serious, though certainly not the only, outstanding challenge to MOND is the dynamics of clusters of galaxies (Angus et al., 2008; Sanders and McGaugh, 2002). Contrary to the case in most individual galaxies and some groups of galaxies (Milgrom, 2018, 2019), MOND typically falls short of correcting the mass discrepancy in rich clusters by a factor of ~ 2 in mass. This can be taken as completely fatal, or as being remarkably close by the standards of astrophysics. Which option one chooses seems to be mostly a matter of confirmation bias: those who are quick to dismiss MOND are happy to spot their own models a factor of two in mass, and even to assert that it is natural to do so (e.g., Ludlow et al., 2017). MOND is hardly alone in suffering problems with clusters of galaxies, which also present problems for ΛCDM (e.g., Angus & McGaugh, 2008; Asencio et al., 2021; Meneghetti et al., 2020).
A common fallacy seems to be that any failing of MOND is automatically considered to be support for ΛCDM. This is seldom the case. More often than not, observations that are problematic for MOND are also problematic for ΛCDM. We do not perceive them as such because we are already convinced that non-baryonic dark matter must exist. From that perspective, any problem encountered by ΛCDM is a mere puzzle that will inevitably be solved, while any problem encountered by MOND is a terminal failure of an irredeemably blasphemous hypothesis. This speaks volumes about human nature but says nothing about how the universe works.
The plain fact is that MOND made many a priori predictions that subsequently came true. This is the essence of the scientific method. ΛCDM and MOND are largely incommensurate, but whenever I have been able to make a straight comparison, MOND has been the more successful theory. So what am I supposed to say? That it is wrong? Perhaps it is, but that doesn’t make dark matter right. Rather, the predictive successes of MOND must be teaching us something. The field will not progress until these are incorporated into mainstream thinking.