A Blog About the Science and Sociology of Cosmology and Dark Matter
Stacy McGaugh is an astrophysicist and cosmologist who studies galaxies, dark matter, and theories of modified gravity. He is an expert on low surface brightness galaxies, a class of objects in which the stars are spread thin compared to bright galaxies like our own Milky Way. He demonstrated that these dim galaxies appear to be dark matter dominated, providing unique tests of theories of galaxy formation and modified gravity.
Professor McGaugh is currently the chair of the Department of Astronomy at Case Western Reserve University in Cleveland, Ohio, and director of the Warner and Swasey Observatory. Previously he was a member of the faculty at the University of Maryland, having also held research fellowships at Rutgers, the Department of Terrestrial Magnetism of the Carnegie Institution of Washington, and the Institute of Astronomy at the University of Cambridge after earning his Ph.D. from the University of Michigan.
This is a very real problem in academia, and I don’t doubt that it is a common feature of many human endeavors. Part of it is just that people don’t know enough to know what they don’t know. That is to say, so much has been written that it can be hard to find the right reference to put any given fever dream promptly to never-ending sleep. However, that’s not the real problem.
The problem is exactly what Sabine says it is. People keep pushing ideas that have been debunked. Why let facts get in the way of a fancy idea?
I spent a lot of my early career working in the context of non-baryonic dark matter. For a long time, I was enthusiastic about it, but I’ve become skeptical. I continue to work on it, just in case. But I soured on it for good reasons, reasons I have explained repeatedly in exhaustive detail. Some people appreciate this level of detail, but most do not. This is the sort of thing Sabine is talking about. People don’t engage seriously with these problems.
Maybe I’m wrong to be skeptical of dark matter? I could accept that – one cannot investigate this wide universe in which we find ourselves without sometimes coming to the wrong conclusions. Has it been demonstrated that the concerns I raised were wrong? No. Rather than grapple with the problems raised, people have simply ignored them – or worse, asserted that they aren’t problems at all without demonstrating anything of the sort. Heck, I’ve even seen people take lists of problems and spin them as virtues.
To give one very quick example, consider the physical interpretation of the Tully-Fisher relation. This has varied over time, and there are many flavors. But usually it is supposed that the luminosity is set by the stellar mass, and the rotation speed by the dark matter mass. If we (reasonably) presume that the stellar mass is proportional to the dark mass, voilà – Tully-Fisher. This all sounds perfectly plausible, so most people don’t think any harder about it. No problem at all.
Well, one small problem: this explanation does not work. The velocity is not uniquely set by the dark matter halo. In the range of radii accessible to measurement, the contribution of the baryonic mass is non-negligible in high surface brightness galaxies. If that sounds a little technical, it is. One has to cope at this level to play in the sandbox.
Once we appreciate that we cannot just ignore the baryons, explaining Tully-Fisher becomes a lot harder – in particular, the absence of surface brightness residuals. Higher surface brightness galaxies should rotate faster at a given mass, but they don’t. The easy way to fix this is to suppose that the baryonic mass is indeed negligible, but this leads straight to a contradiction with the diversity of rotation curves following from the central density relation. The kinematics know about the shape of the baryonic mass distribution, not just its total. Solving all these problems simultaneously becomes a game of cosmic whack-a-mole: fixing one aspect of the problem makes another worse. All too often, people are so focused on one aspect of a problem that they don’t realize that their fix comes at the expense of something else. It’s like knocking a hole in one side of a boat to obtain material to patch a hole in the other side of the same boat.
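To make the expected residual concrete, here is a toy calculation of my own (it treats the enclosed mass of an exponential disk spherically, so the numbers are rough, but the scaling is the point): with Newtonian gravity and negligible dark matter, two disks of the same stellar mass but different surface brightness should rotate at very different speeds.

```python
import math

G = 4.301e-6  # gravitational constant in kpc (km/s)^2 / Msun

def v_newton(M, Rd, r):
    """Rotation speed from the enclosed mass of an exponential disk,
    treated spherically -- a rough approximation, good enough to
    exhibit the scaling with scale length Rd."""
    x = r / Rd
    M_enc = M * (1.0 - (1.0 + x) * math.exp(-x))  # enclosed disk mass
    return math.sqrt(G * M_enc / r)

M = 5e10                   # Msun: same stellar mass for both disks
Rd_hi, Rd_lo = 2.0, 8.0    # kpc: compact (high SB) vs extended (low SB)
# central surface density scales as M / Rd^2, so these differ by 16x

v_hi = v_newton(M, Rd_hi, 2.2 * Rd_hi)  # speed near the peak, r ~ 2.2 Rd
v_lo = v_newton(M, Rd_lo, 2.2 * Rd_lo)
print(v_hi, v_lo)
# In this model the ratio is exactly sqrt(Rd_lo/Rd_hi) = 2: the high
# surface brightness disk should rotate twice as fast at the same mass.
```

Observed Tully-Fisher samples show no such surface brightness residual, which is the puzzle described above.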
Except they are sure. Problem solved! is what people want to hear, so that’s what they hear. Nobody bothers to double check whether the “right” answer is indeed right when it agrees with their preconceptions. And there is always someone willing to make that assertion.
Kuhn noted that as paradigms reach their breaking point, there is a divergence of opinions between scientists about what the important evidence is, or what even counts as evidence. This has come to pass in the debate over whether dark matter or modified gravity is a better interpretation of the acceleration discrepancy problem. It sometimes feels like we’re speaking about different topics in a different language. That’s why I split the diagram version of the dark matter tree as I did:
Astroparticle physicists seem to be well-informed about the cosmological evidence (top) and favor solutions in the particle sector (left). As more of these people entered the field in the ’00s and began attending conferences where we overlapped, I recognized gaping holes in their knowledge about the dynamical evidence (bottom) and related hypotheses (right). This was part of my motivation to develop an evidence-based course1 on dark matter, to try to fill in the gaps in essential knowledge that were obviously being missed in the typical graduate physics curriculum. Though popular on my campus, not everyone in the field has the opportunity to take this course. It seems that the chasm has continued to grow, though not for lack of attempts at communication.
Part of the problem is a phase difference: many of the questions that concern astroparticle physicists (structure formation is a big one) were addressed 20 years ago in MOND. There is also a difference in texture: dark matter rarely predicts things but always explains them, even if it doesn’t. MOND often nails some predictions but leaves other things unexplained – just a complete blank. So they’re asking questions that are either way behind the curve or as-yet unanswerable. Progress rarely follows a smooth progression in linear time.
I have become aware of a common construction among many advocates of dark matter to criticize “MOND people.” First, I don’t know what a “MOND person” is. I am a scientist who works on a number of topics, among them both dark matter and MOND. I imagine the latter makes me a “MOND person,” though I still don’t really know what that means. It seems to be a generic straw man. Users of this term consistently paint such a luridly ridiculous picture of what MOND people do or do not do that I don’t recognize it as a legitimate depiction of myself or of any of the people I’ve met who work on MOND. I am left to wonder, who are these “MOND people”? They sound very bad. Are there any here in the room with us?
I am under no illusions as to what these people likely say when I am out of earshot. Someone recently pointed me to a comment on Peter Woit’s blog that I would not have come across on my own. I am specifically named. Here is a screen shot:
This concisely pinpoints where the field2 is at, both right and wrong. Let’s break it down.
let me just remind everyone that the primary reason to believe in the phenomenon of cold dark matter is the very high precision with which we measure the CMB power spectrum, especially modes beyond the second acoustic peak
This is correct, but it is not the original reason to believe in CDM. The history of the subject matters, as we already believed in CDM quite firmly before any modes of the acoustic power spectrum of the CMB were measured. The original reasons to believe in cold dark matter were (1) that the measured, gravitating mass density exceeds the mass density of baryons as indicated by BBN, so there is stuff out there with mass that is not normal matter, and (2) large scale structure has grown by a factor of 10⁵ from the very smooth initial condition indicated initially by the nondetection of fluctuations in the CMB, while normal matter (with normal gravity) can only get us a factor of 10³ (there were upper limits excluding this before there was a detection). Structure formation additionally imposes the requirement that whatever the dark matter is moves slowly (hence “cold”) and does not interact via electromagnetism in order to evade making too big an impact on the fluctuations in the CMB (hence the need, again, for something non-baryonic).
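The mismatch in argument (2) is back-of-the-envelope arithmetic. Here is a sketch of my own, using the textbook result that in a matter-dominated universe linear perturbations grow in proportion to the scale factor a = 1/(1+z):

```python
# Growth available to baryons alone since recombination, versus the
# growth needed to turn CMB-level fluctuations into galaxies today.
z_rec = 1100                   # approximate redshift of recombination
growth_available = 1 + z_rec   # linear growth factor since then: ~10^3

delta_cmb = 1e-5               # fractional fluctuations seen in the CMB
delta_now = 1.0                # order unity needed for bound structures
growth_needed = delta_now / delta_cmb   # ~10^5

print(growth_available, growth_needed)
# Normal matter with normal gravity falls short by roughly a factor
# of a hundred; something extra has to make up the difference.
```

That shortfall is what non-baryonic dark matter (which can start collapsing before recombination) or stronger gravity is invoked to cover.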
When cold dark matter became accepted as the dominant paradigm, fluctuations in the CMB had not yet been measured. The absence of detectable fluctuations at the larger level required for purely baryonic structure formation sufficed to indicate the need for CDM. This, together with Ωm > Ωb from BBN (which seemed the better of the two arguments at the time), sufficed to convince me, along with most everyone else who was interested in the problem, that the answer had3 to be CDM.
This all happened before the first fluctuations were observed by COBE in 1992. By that time, we already believed firmly in CDM. The COBE observations caused initial confusion and great consternation – it was too much! We actually had a prediction from then-standard SCDM, and it had predicted an even lower level of fluctuations than what COBE observed. This did not cause us (including me) to doubt CDM (though there was one suggestion that it might be due to self-interacting dark matter); it seemed a mere puzzle to accommodate, not an anomaly. And accommodate it we did: the power in the large scale fluctuations observed by COBE is part of how we got LCDM, albeit only a modest part. A lot of younger scientists seem to have been taught that the power spectrum is some incredibly successful prediction of CDM when in fact it has surprised us at nearly every turn.
As I’ve related here before, it wasn’t until the end of the century that CMB observations became precise enough to provide a test that might distinguish between CDM and MOND. That test initially came out in favor of MOND – or at least in favor of the absence of dark matter: No-CDM, which I had suggested as a proxy for MOND. Cosmologists and dark matter advocates consistently omit this part of the history of the subject.
I had hoped that cosmologists would experience the same surprise and doubt and reevaluation that I had experienced when MOND cropped up in my own data when it cropped up in theirs. Instead, they went into denial, ignoring the successful prediction of the first-to-second peak amplitude ratio, or, worse, making up stories that it hadn’t happened. Indeed, the amplitude of the second peak was so surprising that the first paper to measure it omitted mention of it entirely. Just didn’t talk about it, let alone admit that “Gee, this crazy prediction came true!” as I had with MOND in LSB galaxies. Consequently, I decided that it was better to spend my time working on topics where progress could be made. This is why most of my work on the CMB predates “modes beyond the second peak” just as our strong belief in CDM also predated that evidence. Indeed, communal belief in CDM was undimmed when the modes defining the second peak were observed, despite the No-CDM proxy for MOND being the only hypothesis to correctly predict it quantitatively a priori.
That said, I agree with clayton’s assessment that
CDM thinks [the second and third peak] should be about the same
That this is the best evidence now is both correct and a much weaker argument than it is made out to be. It sounds really strong, because a formal fit to the CMB data requires a dark matter component at extremely high confidence – something approaching 100 sigma. This analysis assumes that dark matter exists. It does not contemplate that something else might cause the same effect, so all it really does, yet again, is demonstrate that General Relativity cannot explain cosmology when restricted to the material entities we concretely know to exist.
Given the timing, the third peak was not a strong element of my original prediction, as we did not yet have either a first or second peak. We hadn’t yet clearly observed peaks at all, so what I was doing was pretty far-sighted, but I wasn’t thinking that far ahead. However, the natural prediction for the No-CDM picture I was considering was indeed that the third peak should be lower than the second, as I’ve discussed before.
In contrast, in CDM, the acoustic power spectrum of the CMB can do a wide variety of things:
Given the diversity of possibilities illustrated here, there was never any doubt that a model could be fit to the data, provided that oscillations were observed as expected in any of the theories under consideration here. Consequently, I do not find fits to the data, though excellent, to be anywhere near as impressive as commonly portrayed. What does impress me is consistency with independent data.
What impresses me even more are a priori predictions. These are the gold standard of the scientific method. That’s why I worked my younger self’s tail off to make a prediction for the second peak before the data came out. In order to make a clean test, you need to know what both theories predict, so I did this for both LCDM and No-CDM. Here are the peak ratios predicted before there were data to constrain them, together with the data that came after:
The left hand panel shows the predicted amplitude ratio of the first-to-second peak, A1:2. This is the primary quantity that I predicted for both paradigms. There is a clear distinction between the predicted bands. I was not unique in my prediction for LCDM; the same thing can be seen in other contemporaneous models. All contemporaneous models. I was the only one who was not surprised by the data when they came in, as I was the only one who had considered the model that got the prediction right: No-CDM.
The same No-CDM model fails to correctly predict the second-to-third peak ratio, A2:3. It is, in fact, way off, while LCDM is consistent with A2:3, just as Clayton says. This is a strong argument against No-CDM, because No-CDM makes a clear and unequivocal prediction that it gets wrong. Clayton calls this
a stone-cold, qualitative, crystal clear prediction of CDM
which is true. It is also qualitative, so I call it weak sauce. LCDM could be made to fit a very large range of A2:3, but it had already got A1:2 wrong. We had to adjust the baryon density outside the allowed range in order to make it consistent with the CMB data. The generous upper limit that LCDM might conceivably have predicted in advance of the CMB data was A1:2 < 2.06, which is still clearly less than observed. For the first years of the century, the attitude was that BBN had been close, but not quite right – preference being given to the value needed to fit the CMB. Nowadays, BBN and the CMB are said to be in great concordance, but this is only true if one restricts oneself to deuterium measurements obtained after the “right” answer was known from the CMB. Prior to that, practically all of the measurements for all of the important isotopes of the light elements – deuterium, helium, and lithium – concurred that the baryon density Ωbh² < 0.02, with the consensus value being Ωbh² = 0.0125 ± 0.0005. This is barely half the value subsequently required to fit the CMB (Ωbh² = 0.0224 ± 0.0001). But what’s a factor of two among cosmologists? (In this case, 4 sigma.)
Taking the data at face value, the original prediction of LCDM was falsified by the second peak. But, no problem, we can move the goal posts, in this case by increasing the baryon density. The successful prediction of the third peak only comes after the goal posts have been moved to accommodate the second peak. Citing only the comparable size of third peak to the second while not acknowledging that the second was too small elides the critical fact that No-CDM got something right, a priori, that LCDM did not. No-CDM failed only after LCDM had already failed. The difference is that I acknowledge its failure while cosmologists elide this inconvenient detail. Perhaps the second peak amplitude is a fluke, but it was a unique prediction that was exactly nailed and remains true in all subsequent data. That’s a pretty remarkable fluke4.
LCDM wins ugly here by virtue of its flexibility. It has greater freedom to fit the data – any of the models in the figure of Dodelson & Hu will do. In contrast, No-CDM is the single blue line in my figure above, and nothing else. Plausible variations in the baryon density make hardly any difference: A1:2 has to have the value that was subsequently observed, and no other. It passed that test with flying colors. It flunked the subsequent test posed by A2:3. For LCDM this isn’t even a test, it is an exercise in fitting the data with a model that has enough parameters5 to do so.
In those days, when No-CDM was the only correct a priori prediction, I would point out to cosmologists that it had got A1:2 right when I got the chance (which was rarely: I was invited to plenty of conferences in those days, but none on the CMB). The typical reaction was outright denial6, though sometimes it was a dismissive “That’s not a MOND prediction.” The latter is a fair criticism. No-CDM is just General Relativity without CDM. It represented MOND as a proxy under the ansatz that MOND effects had not yet manifested in a way that affected the CMB. I expected that this ansatz would fail at some point, and discussed some of the ways that this should happen. One that’s relevant today is that galaxies form early in MOND, so reionization happens early, and the amplitude of gravitational lensing effects is amplified. There is evidence for both of these now. What I did not anticipate was a departure from a damping spectrum around L=600 (between the second and third peaks). That’s a clear deviation from the prediction, which falsifies the ansatz but not MOND itself. After all, they were correct in noting that this wasn’t a MOND prediction per se, just a proxy. MOND, like Newtonian dynamics before it, is relativity adjacent, but not itself a relativistic theory. Neither can explain the CMB on their own. If you find that an unsatisfactory answer, imagine how I feel.
The same people who complained then that No-CDM wasn’t a real MOND prediction now want to hold MOND to the No-CDM predicted power spectrum and nothing else. First it was the second peak isn’t a real MOND prediction! then when the third peak was observed it became no way MOND can do this! This isn’t just hypocritical, it is bad science. The obvious way to proceed would be to build on the theory that had the greater, if incomplete, predictive success. Instead, the reaction has consistently been to cherry-pick the subset of facts that precludes the need for serious rethinking.
This brings us to sociology, so let’s examine some more of what Clayton has to say:
Any talk I’ve ever seen by McGaugh (or more exotic modified gravity people like Verlinde) elides this fact, and they evade the questions when I put my hand up to ask. I have invited McGaugh to a conference before specifically to discuss this point, and he just doesn’t want to.
There is so much to unpack here, I hardly know where to start. By saying I “elide this fact” about the qualitative equality of the second and third peak, Clayton is basically accusing me of lying by omission. This is pretty rich coming from a community that consistently elides the history I relate above, and never addresses the question raised by MOND’s predictive power.
Intellectual honesty is very important to me – being honest that MOND predicted what I saw in low surface brightness galaxies where my own prediction was wrong is what got me into this mess in the first place. It would have been vastly more convenient to pretend that I never heard of MOND (at first I hadn’t7) and act like that never happened. That would be a lie of omission. It would be a large lie, a lie that denies an important aspect of how the world works (what we’re supposed to uncover through science), the sort of lie that cleric Paul Gerhardt may have had in mind when he said
When a man lies, he murders some part of the world.
Clayton is, in essence, accusing me of exactly that by failing to mention the CMB in talks he has seen. That might be true – I give a lot of talks. He hasn’t been to most of them, and I usually talk about things I’ve done more recently than 2004. I’ve commented explicitly on this complaint before –
There’s only so much you can address in a half hour talk. [This is a recurring problem. No matter what I say, there always seems to be someone who asks “why didn’t you address X?” where X is usually that person’s pet topic. Usually I could do so, but not in the time allotted.]
– so you may appreciate my exasperation at being accused of dishonesty by someone whose complaint is so predictable that I’ve complained before about people who make this complaint. I’m only human – I can’t cover all subjects for all audiences every time all the time. Moreover, I do tend to choose to discuss subjects that may be news to an audience, not simply reprise the greatest hits they want to hear. Clayton obviously knows about the third peak; he doesn’t need to hear about it from me. This is the scientific equivalent of shouting Freebird! at a concert.
It isn’t like I haven’t talked about it. I have been rigorously honest about the CMB, and certainly have not omitted mention of the third peak. Here is a comment from February 2003 when the third peak was only tentatively detected:
Page et al. (2003) do not offer a WMAP measurement of the third peak. They do quote a compilation of other experiments by Wang et al. (2003). Taking this number at face value, the second to third peak amplitude ratio is A2:3 = 1.03 +/- 0.20. The LCDM expectation value for this quantity was 1.1, while the No-CDM expectation was 1.9. By this measure, LCDM is clearly preferable, in contradiction to the better measured first-to-second peak ratio.
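Unpacking the numbers in that 2003 comparison, and assuming simple Gaussian errors on the quoted measurement, the arithmetic behind “LCDM is clearly preferable” is:

```python
# How many error bars does each expectation sit from the Wang et al.
# (2003) compilation value quoted above?
A23_obs, sigma = 1.03, 0.20   # measured second-to-third peak ratio
A23_lcdm = 1.1                # LCDM expectation
A23_nocdm = 1.9               # No-CDM expectation

dev_lcdm = abs(A23_obs - A23_lcdm) / sigma    # ~0.35 sigma: consistent
dev_nocdm = abs(A23_obs - A23_nocdm) / sigma  # ~4.35 sigma: strongly excluded
print(dev_lcdm, dev_nocdm)
```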
the Boomerang data and the last credible point in the 3-year WMAP data both have power that is clearly in excess of the no-CDM prediction. The most natural interpretation of this observation is forcing by a mass component that does not interact with photons, such as non-baryonic cold dark matter.
There are lots like this, including my review for CJP and this talk given at KITP where I had been asked to explicitly take the side of MOND in a debate format for an audience of largely particle physicists. The CMB, including the third peak, appears on the fourth slide, which is right up front, not being elided at all. In the first slide, I tried to encapsulate the attitudes of both sides:
I did the same at a meeting in Stony Brook where I got a weird vibe from the audience; they seemed to think I was lying about the history of the second peak that I recount above. It will be hard to agree on an interpretation if we can’t agree on documented historical facts.
More recently, this image appears on slide 9 of this lecture from the cosmology course I just taught (Fall 2022):
I recognize this slide from talks I’ve given over the past five plus years; this class is the most recent place I’ve used it, not the first. On some occasions I wrote “The 3rd peak is the best evidence for CDM.” I do not recall all of the talks in which I used this slide; many of them were likely colloquia for physics departments where one has more time to cover things than in a typical conference talk. Regardless, these apparently were not the talks that Clayton attended. Rather than it being the case that I never address this subject, the more conservative interpretation of the experience he relates would be that I happened not to address it in the small subset of talks that he happened to attend.
But do go off, dude: tell everyone how I never address this issue and evade questions about it.
I have been extraordinarily patient with this sort of thing, but I confess to a great deal of exasperation at the perpetual whataboutism that many scientists engage in. It is used reflexively to shut down discussion of alternatives: dark matter has to be right for this reason (here the CMB); nothing else matters (galaxy dynamics), so we should forbid discussion of MOND. Even if dark matter proves to be correct, the CMB is being used as an excuse to not address the question of the century: why does MOND get so many predictions right? Any scientist with a decent physical intuition who takes the time to rub two brain cells together in contemplation of this question will realize that there is something important going on that simply invoking dark matter does not address.
In fairness to McGaugh, he pointed out some very interesting features of galactic DM distributions that do deserve answers. But it turns out that there are a plurality of possibilities, from complex DM physics (self interactions) to unmodelable SM physics (stellar feedback, galaxy-galaxy interactions). There are no such alternatives to CDM to explain the CMB power spectrum.
Thanks. This is nice, and why I say it would be easier to just pretend to never have heard of MOND. Indeed, this succinctly describes the trajectory I was on before I became aware of MOND. I would prefer to be recognized for my own work – of which there is plenty – than for an association with a theory that is not my own – an association that is born of honestly reporting a surprising observation. I find my reception to be more favorable if I just talk about the data, but what is the point of taking data if we don’t test the hypotheses?
I have gone to great extremes to consider all the possibilities. There is not a plurality of viable possibilities; most of these things do not work. The specific ideas that are cited here are known not to work. SIDM appears to work because it has more free parameters than are required to describe the data. This is a common failing of dark matter models that simply fit some functional form to observed rotation curves. They can be made to fit the data, but they cannot be used to predict the way MOND can.
Feedback is even worse. Never mind the details of specific feedback models, and think about what is being said here: the observations are to be explained by “unmodelable [standard model] physics.” This is a way of saying that dark matter claims to explain the phenomena while declining to make a prediction. Don’t worry – it’ll work out! How can that be considered better than or even equivalent to MOND when many of the problems we invoke feedback to solve are caused by the predictions of MOND coming true? We’re just invoking unmodelable physics as a deus ex machina to make dark matter models look like something they are not. Are physicists straight-up asserting that it is better to have a theory that is unmodelable than one that makes predictions that come true?
Returning to the CMB, are there no “alternatives to CDM to explain the CMB power spectrum”? I certainly do not know how to explain the third peak with the No-CDM ansatz. For that we need a relativistic theory, like Bekenstein’s TeVeS. This initially seemed promising, as it solved the long-standing problem of gravitational lensing in MOND. However, it quickly became clear that it did not work for the CMB. Nevertheless, I learned from this that there could be more to the CMB oscillations than allowed by the simple No-CDM ansatz. The scalar field (an entity theorists love to introduce) in TeVeS-like theories could play a role analogous to cold dark matter in the oscillation equations. That means that what I thought was a killer argument against MOND – the exact same argument Clayton is making – is not as absolute as I had thought.
Writing down a new relativistic theory is not trivial. It is not what I do. I am an observational astronomer. I only play at theory when I can’t get telescope time.
So in the mid-00’s, I decided to let theorists do theory and started the first steps in what would ultimately become the SPARC database (it took a decade and a lot of effort by Jim Schombert and Federico Lelli in addition to myself). On the theoretical side, it also took a long time to make progress because it is a hard problem. Thanks to work by Skordis & Zlosnik on a theory they [now] call AeST8, it is possible to fit the acoustic power spectrum of the CMB:
This fit is indistinguishable from that of LCDM.
I consider this to be a demonstration, not necessarily the last word on the correct theory, but hopefully an iteration towards one. The point here is that it is possible to fit the CMB. That’s all that matters for our current discussion: contrary to the steady insistence of cosmologists over the past 15 years, CDM is not the only way to fit the CMB. There may be other possibilities that we have yet to figure out. Perhaps even a plurality of possibilities. This is hard work and to make progress we need a critical mass of people contributing to the effort, not shouting rubbish from the peanut gallery.
As I’ve done before, I like to take the language used in favor of dark matter, and see if it also fits when I put on a MOND hat:
That is stronger language than I would ordinarily permit myself. I do so entirely to show the danger of being so darn sure. I actually agree with clayton’s perspective in his quote; I’m just showing what it looks like if we adopt the same attitude with a different perspective. The problems pointed out for each theory are genuine, and the supposed solutions are not obviously viable (in either case). Sometimes I feel like we’re up the proverbial creek without a paddle. I do not know what the right answer is, and you should be skeptical of anyone who is sure that he does. Being sure is the sure road to stagnation.
1It may surprise some advocates of dark matter that I barely touch on MOND in this course, only getting to it at the end of the semester, if at all. It really is evidence-based, with a focus on the dynamical evidence as there is a lot more to this than seems to be appreciated by most physicists*. We also teach a course on cosmology, where students get the material that physicists seem to be more familiar with.
*I once had a colleague who was in a physics department ask how to deal with opposition to developing a course on galaxy dynamics. Apparently, some of the physicists there thought it was not a rigorous subject worthy of an entire semester course – an attitude that is all too common. I suggested that she pointedly drop the textbook of Binney & Tremaine on their desks. She reported back that this technique proved effective.
2I do not know who clayton is; that screen name does not suffice as an identifier. He claims to have been in contact with me at some point, which is certainly possible: I talk to a lot of people about these issues. He is welcome to contact me again, though he may wish to consider opening with an apology.
3One of the hardest realizations I ever had as a scientist was that both of the reasons (1) and (2) that I believed to absolutely require CDM assumed that gravity was normal. If one drops that assumption, as one must to contemplate MOND, then these reasons don’t require CDM so much as they highlight that something is very wrong with the universe. That something could be MOND instead of CDM, both of which are in the category of who ordered that?
4In the early days (late ’90s) when I first started asking why MOND gets any predictions right, one of the people I asked was Joe Silk. He dismissed the rotation curve fits of MOND as a fluke. There were 80 galaxies that had been fit at the time, which seemed like a lot of flukes. I mention this because one of the persistent myths of the subject is that MOND is somehow guaranteed to magically fit rotation curves. Erwin de Blok and I explicitly showed that this was not true in a 1998 paper.
5I sometimes hear cosmologists speak in awe of the thousands of observed CMB modes that are fit by half a dozen LCDM parameters. This is impressive, but we’re fitting a damped and driven oscillation – those thousands of modes are not all physically independent. Moreover, as can be seen in the figure from Dodelson & Hu, some free parameters provide more flexibility than others: there is plenty of flexibility in a model with dark matter to fit the CMB data. Only with the Planck data do minor tensions arise, the reaction to which is generally to add more free parameters, like decoupling the primordial helium abundance from that of deuterium, which is anathema to standard BBN so is sometimes portrayed as exciting, potentially new physics.
For some reason, I never hear the same people speak in equal awe of the hundreds of galaxy rotation curves that can be fit by MOND with a universal acceleration scale and a single physical free parameter, the mass-to-light ratio. Such fits are over-constrained, and every single galaxy is an independent test. Indeed, MOND can predict rotation curves parameter-free in cases where gas dominates so that the stellar mass-to-light ratio is irrelevant.
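To spell out what “a universal acceleration scale and a single physical free parameter” means, here are the standard MOND relations (with a0 ≈ 1.2 × 10⁻¹⁰ m s⁻² the universal acceleration scale and gN the Newtonian acceleration computed from the observed baryons):

```latex
g_{\rm obs} \;=\; \nu\!\left(\frac{g_N}{a_0}\right) g_N ,
\qquad
\nu(y)\to 1 \ \ (y\gg 1), \qquad \nu(y)\to y^{-1/2} \ \ (y\ll 1),
\qquad\Longrightarrow\qquad
g_{\rm obs} \to \sqrt{g_N\, a_0} \ \ \text{for } g_N \ll a_0 .
```

For a star-dominated galaxy, the only adjustable quantity entering gN is the stellar mass-to-light ratio; for a gas-dominated galaxy, nothing is adjustable at all.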
How should we weigh the relative merit of these very different lines of evidence?
6On a number of memorable occasions, people shouted “No you didn’t!” On a smaller number of those occasions (exactly two), they bothered to look up the prediction in the literature and then wrote to apologize and agree that I had indeed predicted that.
7If you read this paper, part of what you will see is me being confused about how low surface brightness galaxies could adhere so tightly to the Tully-Fisher relation. They should not. In retrospect, one can see that this was a MOND prediction coming true, but at the time I didn’t know about that; all I could see was that the result made no sense in the conventional dark matter picture.
Some while after we published that paper, Bob Sanders, who was at the same institute as my collaborators, related to me that Milgrom had written to him and asked “Do you know these guys?”
8Initially they had called it RelMOND, or just RMOND. AeST stands for Aether-Scalar-Tensor, and is clearly a step along the lines that Bekenstein made with TeVeS.
In addition to fitting the CMB, AeST retains the virtues of TeVeS in terms of providing a lensing signal consistent with the kinematics. However, it is not obvious that it works in detail – Tobias Mistele has a brand new paper testing it, and it doesn’t look good at extremely low accelerations. With that caveat, it significantly outperforms extant dark matter models.
There is an oft-repeated fallacy that comes up any time a MOND-related theory has a problem: “MOND doesn’t work therefore it has to be dark matter.” This only ever seems to hold when you don’t bother to check what dark matter predicts. In this case, we should but don’t detect the edge of dark matter halos at higher accelerations than where AeST runs into trouble.
9Another question I’ve posed for over a quarter century now is what would falsify CDM? The first person to give a straight answer to this question was Simon White, who said that cusps in dark matter halos were an ironclad prediction; they had to be there. Many years later, it is clear that they are not, but does anyone still believe this is an ironclad prediction? If it is, then CDM is already falsified. If it is not, then what would be? It seems like the paradigm can fit any surprising result, no matter how unlikely a priori. This is not a strength, it is a weakness. We can, and do, add epicycle upon epicycle to save the phenomenon. This has been my concern for CDM for a long time now: not that it gets some predictions wrong, but that it can apparently never get a prediction so wrong that we can’t patch it up, so we can never come to doubt it if it happens to be wrong.
That’s the question of the year, and perhaps of the century. I’ve been asking it since before this century began, and I have yet to hear a satisfactory answer. Most of the relevant scientific community has aggressively failed to engage with it. Even if MOND is wrong for [insert favorite reason], this does not relieve us of the burden to understand why it gets many predictions right – predictions that have repeatedly come as a surprise to the community that has declined to engage, preferring to ignore the elephant in the room.
It is not good enough to explain MOND phenomenology post facto with some contrived LCDM model. That’s mostly1 what is on offer, being born of the attitude that we’re sure LCDM is right, so somehow MOND phenomenology must emerge from it. We could just as [un]reasonably adopt the attitude that MOND is correct, so surely LCDM phenomenology happens as a result of trying to fit the standard cosmological model to some deeper, subtly different theory.
A basic tenet of the scientific method is that if a theory has its predictions come true, we are obliged to acknowledge its efficacy. This is how we know when to change our minds. This holds even if we don’t like said theory – especially if we don’t like it.
That was my experience with MOND. It correctly predicted the kinematics of the low surface brightness galaxies I was interested in. Dark matter did not. The data falsified all the models available at the time, including my own dark matter-based hypothesis. The only successful a priori predictions were those made by Milgrom. So what am I to conclude2 from this? That he was wrong?
I understand the reluctance to engage. It really ticked me off that my own model was falsified. How could this stupid theory of Milgrom’s do better for my galaxies? Indeed, how could it get anything right? I had no answer to this, nor does the wider community. It is not for lack of trying on my part; I’ve spent a lot of time3 building conventional dark matter models. They don’t work. Most of the models made by others that I’ve seen are just variations on models I had already considered and rejected as obviously unworkable. They might look workable from one angle, but they inevitably fail from some other, solving one problem at the expense of another.
Predictive success does not guarantee that a theory is right, but it does make it better than competing theories that fail for the same prediction. This is where MOND and LCDM are difficult to compare, as the relevant data are largely incommensurate. Where one is eloquent, the other tends to be muddled. However, it has been my experience that MOND more frequently reproduces the successes of dark matter than vice-versa. I expect this statement comes as a surprise to some, as it certainly did to me (see the comment line of astro-ph/9801102). The people who say the opposite clearly haven’t bothered to check2 as I have, or even to give MOND a real chance. If you come to a problem sure you know the answer, no data will change your mind. Hence:
A challenge: What would falsify the existence of dark matter?
If LCDM is a scientific theory, it should be falsifiable4. Dark matter, by itself, is a concept, not a theory: mass that is invisible. So how can we tell if it’s not there? Once we have convinced ourselves that the universe is full of invisible stuff that we can’t see or (so far) detect any other way, how do we disabuse ourselves of this notion, should it happen to be wrong? If it is correct, we can in principle find it in the lab, so its existence can be confirmed. But is it falsifiable? How?
That is my challenge to the dark matter community: what would convince you that the dark matter picture is wrong? Answers will vary, as it is up to each individual to decide for themself how to answer. But there has to be an answer. To leave this basic question unaddressed is to abandon the scientific method.
I’ll go first. Starting in 1985 when I was first presented evidence in a class taught by Scott Tremaine, I was as much of a believer in dark matter as anyone. I was even a vigorous advocate, for a time. What convinced me to first doubt the dark matter picture was the fine-tuning I had to engage in to salvage it. It was only after that experience that I realized that the problems I was encountering were caused by the data doing what MOND had predicted – something that really shouldn’t happen if dark matter is running the show. But the MOND part came after; I had already become dubious about dark matter in its own context.
Falsifiability is a question every scientist who works on dark matter needs to face. What would cause you to doubt the existence of dark matter? Nothing is not a scientific answer. Neither is it correct to assert that the evidence for dark matter is already overwhelming. That is a misstatement: the evidence for acceleration discrepancies is overwhelming, but these can be interpreted as evidence for either dark matter or MOND.
The important thing is to establish criteria by which you would change your mind. I changed my mind before: I am no longer convinced that the solution to the acceleration discrepancy has to be non-baryonic dark matter. I will change my mind again if the evidence warrants. Let me state, yet again, what would cause me to doubt that MOND is a critical element of said solution. There are lots of possibilities, as MOND is readily falsifiable. Three important ones are:
MOND getting a fundamental prediction wrong;
Detecting dark matter;
Answering the question of the year.
None of these have happened yet. Just shouting “MOND is falsified already!” doesn’t make it so: the evidence has to be both clear and satisfactory. For example,
MOND might be falsified by cluster data, but its apparent failure is not fundamental. There is a residual missing mass problem in the richest clusters, but there’s nothing in MOND that says we have to have detected all the baryons by now. Indeed, LCDM doesn’t fare better, just differently, with both theories suffering a missing baryon problem. The chief difference is that we’re willing to give LCDM endless mulligans but MOND none at all. Where the problem for MOND in clusters comes up all the time, the analogous problem in LCDM is barely discussed, and is not even recognized as a problem.
A detection of dark matter would certainly help. To be satisfactory, it can’t be an isolated signal in a lone experiment that no one else can reproduce. If a new particle is detected, its properties have to be correct (e.g., it has the right mass density). As always, we must be wary of some standard model event masquerading as dark matter. WIMP detectors will soon reach the neutrino background accumulated from all the nuclear emissions of stars over the course of cosmic history, at which time they will start detecting weakly interacting particles as intended: neutrinos. Those aren’t the dark matter, but what are the odds that the first of those neutrino detections will be eagerly misinterpreted as dark matter?
Finally, the question of the year: why does MOND get any prediction right? To provide a satisfactory answer to this, one must come up with a physical model that provides a compelling explanation for the phenomena and has the same ability as MOND to make novel predictions. Just building a post-hoc model to match the data, which is the most common approach, doesn’t provide a satisfactory, let alone a compelling, explanation for the phenomenon, and provides no predictive power at all. If it did, we could have predicted MOND-like phenomenology and wouldn’t have to build these models after the fact.
So far, none of these three things have been clearly satisfied. The greatest danger to MOND comes from MOND itself: the residual mass discrepancy in clusters, the tension in Galactic data (some of which favor MOND, others of which don’t), and the apparent absence of dark matter in some galaxies. While these are real problems, they are also of the scale that is expected in the normal course of science: there are always tensions and misleading tidbits of information; I personally worry the most about the Galactic data. But even if my first point is satisfied and MOND fails on its own merits, that does not make dark matter better.
A large segment of the scientific community seems to suffer a common logical fallacy: any problem with MOND is seen as a success for dark matter. That’s silly. One has to evaluate the predictions of dark matter for the same observation to see how it fares. My experience has been that observations that are problematic for MOND are also problematic for dark matter. The latter often survives by not making a prediction at all, which is hardly a point in its favor.
Other situations are just plain weird. For example, it is popular these days to cite the absence of dark matter in some ultradiffuse galaxies as a challenge to MOND, which they are. But neither does it make sense to have galaxies without dark matter in a universe made of dark matter. Such a situation can be arranged, but the circumstances are rather contrived and usually involve some non-equilibrium dynamics. That’s fine; that can happen on rare occasions, but disequilibrium situations can happen in MOND too (the claims of falsification inevitably assume equilibrium). We can’t have it both ways, permitting special circumstances for one theory but not for the other. Worse, some examples of galaxies that are claimed to be devoid of dark matter are as much a problem for LCDM as for MOND. A disk galaxy devoid of either can’t happen; we need something to stabilize disks.
So where do we go from here? Who knows! There are fundamental questions that remain unanswered, and that’s a good thing. There is real science yet to be done. We can make progress if we stick to the scientific method. There is more to be done than measuring cosmological parameters to the sixth place of decimals. But we have to start by setting standards for falsification. If there is no observation or experimental result that would disabuse you of your current belief system, then that belief system is more akin to religion than to science.
2There is a common refrain that “MOND fits rotation curves and nothing else.” This is a myth, plain and simple. A good, old-fashioned falsehood sustained by the echo chamber effect. (That’s what I heard!) Seriously: if you are a scientist who thinks this, what is your source? Did it come from a review of MOND, or from idle chit-chat? How many MOND papers have you read? What do you actually know about it? Ignorance is not a strong position from which to draw a scientific conclusion.
3Like most of the community, I have invested considerably more effort in dark matter than in MOND. Where I differ from much of the galaxy formation community* is in admitting when those efforts fail. There is a temptation to slap some lipstick on the dark matter pig and claim success just to go along to get along, but what is the point of science if that is what we do when we encounter an inconvenient result? For me, MOND has been an incredibly inconvenient result. I would love to be able to falsify it, but so far intellectual honesty forbids.
*There is a widespread ethos of toxic positivity in the galaxy formation literature, which habitually puts a more positive spin on results than is objectively warranted. I’m aware of at least one prominent school where students are taught “to be optimistic” and omit mention of caveats that might detract from a model’s reception. This is effective in a careerist sense, but antithetical to the scientific endeavor.
4The word “falsification” carries a lot of philosophical baggage that I don’t care to get into here. The point is that there must be a way to tell if a theory is wrong. If there is not, we might as well be debating the number of angels that can dance on the head of a pin.
I would like to write something positive to close out the year. Apparently, it is not in my nature, as I am finding it difficult to do so. I try not to say anything if I can’t say anything nice, and as a consequence I have said little here for weeks at a time.
Still, there are good things that happened this year. JWST launched a year ago. The predictions I made for it at that time have since been realized. There have been some bumps along the way, with some of the photometric redshifts for very high z galaxies turning out to be wrong. They have not all turned out to be wrong, and the current consensus seems to be converging towards acceptance of there existing a good number of relatively bright galaxies at z > 10. Some of these have been ‘confirmed’ by spectroscopy.
I remain skeptical of some of the spectra as well as the photometric redshifts. There isn’t much spectrum to see at these rest frame ultraviolet wavelengths. There aren’t a lot of obvious, distinctive features in the spectra that make for definitive line identifications, and the universe is rather opaque to the UV photons blueward of the Lyman break. Here is an example from the JADES survey:
Despite the lack of distinctive spectral lines, there is a clear shape that is ramping up towards the blue until hitting a sharp edge. This is consistent with the spectrum of a star forming galaxy with young stars that make a lot of UV light: the upward bend is expected for such a population, and hard to explain otherwise. The edge is caused by opacity: intervening gas and dust gobbles up those photons, few of which are likely to even escape their host galaxy, much less survive the billions of light-years to be traversed between there-then and here-now. So I concur that the most obvious interpretation of these spectra is that of high-z galaxies even if we don’t have the satisfaction of seeing blatantly obvious emission lines like C IV or Mg II (ionized species of carbon and magnesium that are frequently seen in the spectra of quasars). [The obscure nomenclature dates back to nineteenth century laboratory spectroscopy. Mg I is neutral, Mg II singly ionized, C IV triply ionized.]
Even if we seem headed towards consensus on the reality of big galaxies at high redshift, the same cannot yet be said about their interpretation. This certainly came as a huge surprise to astronomers – though not to me. The obvious interpretation is the theory that predicted this observation in advance, no?
As I was trying to explain on twitter that individually high mass galaxies had not been expected in LCDM, someone popped into my feed to assert that they had multiple simulations with galaxies that massive. That certainly had not been the case all along, so this just tells me that LCDM doesn’t really make a prediction here that can’t be fudged (crank up the star formation efficiency!). This is worse than no prediction at all: you can never know that you’re wrong, as you can fix any failing. Worse, it has been my experience that there is always someone willing to play the role of fixer, usually some ambitious young person eager to gain credit for saving the most favored theory. It works – I can point to many Ivy League careers that followed this approach. They don’t even have to work hard at it, as the community is predisposed to believe what they want to hear.
These are all reasons why predictions made in advance of the relevant observation are the most valuable.
That MOND has consistently predicted, in advance, results that were surprising to LCDM is a fact that the community apparently remains unaware of. Communication is inefficient, so for a long time I thought this sufficed as an explanation. That is no longer the case; the only explanation that fits the sociological observations is that the ignorance is willful.
“It is difficult to get a man to understand something, when his salary depends on his not understanding it.”
We have been spoiled. The last 400 years have given us the impression that science progresses steadily and irresistibly forward. This is in no way guaranteed. Science progresses in fits and starts; it only looks continuous when the highlights are viewed in retrospective soft focus. Progress can halt and even regress, as happened abruptly to the many engineering feats of the Romans with the fall of their empire. Science is a human endeavor subject to human folly, and we might just as easily have a thousand years of belief in invisible mass as we did in epicycles.
Despite all this, I remain guardedly optimistic that we can and will progress. I don’t know what the right answer is. The first step is to let go of being sure that we do.
I’ll end with a quote pointed out to me by David Merritt that seems to apply today as it did centuries ago:
“The scepticism of that generation was the most uncompromising that the world has known; for it did not even trouble to deny: it simply ignored. It presented a blank wall of perfect indifference alike to the mysteries of the universe and to the solutions of them.”
We are visual animals. What we see informs our perception of the world, so it often helps to make a sketch to help conceptualize difficult material. When first confronted with MOND phenomenology in galaxies that I had been sure were dark matter dominated, I made a sketch to help organize my thoughts. Here is a scan of the original dark matter tree that I drew on a transparency (pre-powerpoint!) in 1995:
At the bottom are the roots of the problem: the astronomical evidence for mass discrepancies. From these grow the trunk, which splits into categories of possible solutions, which in turn branch into ever more specific possibilities. Most of these items were already old news at the time: I was categorizing, not inventing. Indeed, some things have been rebranded over time without changing all that much, with strange nuggets now being known as macros (a generalization to describe dark matter candidates of nuclear density) and asymmetric gravity becoming MOG. The more things change, the more they stay the same.
I’ve used this picture many times in talks, both public and scientific. It helps to focus the mind. I updated it for the 2012 review Benoit Famaey wrote (see our Fig. 1), but I don’t think I really improved on the older version, which Don Lincoln had adapted for the cover illustration of an issue of Physics Teacher (circa 2013), with some embellishment by their graphic artists. That’s pretty good, but I prefer my original.
Though there are no lack of buds on the tree, there have certainly been more ideas for dark matter candidates over the past thirty years, so I went looking to see if someone had attempted a similar exercise to categorize or at least corral all the ideas people have considered. Tim Tait made one such figure, but you have to already be an expert to make any sense of it, it being a sort of Venn diagram of the large conceptual playground that is theoretical particle physics.
This is nice: well organized and pleasantly symmetric, and making good use of color to distinguish different types of possibilities. One can recognize many of the same names from the original tree like MACHOs and MOND, along with newer, related entities like Macros and TeVeS. Interestingly, WIMPs are not mentioned, despite dominating the history of the field. They are subsumed under supersymmetry, which is now itself just a sub-branch of weak-scale possibilities rather than the grand unified theory of manifest inevitability that it was once considered to be. It is a sign of how far we have come that the number one candidate, the one that remains the focus of dozens of large experiments, doesn’t even come up by name. It is also a sign of how far we have yet to go that it seems preferable to many to invent new dark matter candidates than take seriously alternatives that have had much greater predictive success.
A challenge one faces in doing this exercise is to decide which candidates deserve mention, and which are just specific details that should be grouped under some more major branch. As a practical matter, it is impossible to wedge everything in, nor does every wild idea we’ve ever thought up deserve equal mention: Kaluza-Klein dark matter is not a coequal peer to WIMPs. But how can we be fair in making that call? It may not be possible.
I wanted to see how the new diagram mapped to the old tree, so I chopped it up and grafted each piece onto the appropriate branch of the original tree:
This works pretty well. It looks like the tree has blossomed with more ideas, which it has. There are more possibilities along well-established branches, and entirely new branches that I could only anticipate with question marks that allowed for the possibility of things we had not yet thought up. The tree is getting bushy.
Ultimately, the goal is not to have an ever bushier tree, but rather the opposite: we want to find the right answer. As an experimentalist, one wants to either detect or exclude specific dark matter candidates. As a scientist, I want to apply the wealth of observational knowledge we have accumulated like a chainsaw in the hands of an overzealous gardener to hack off misleading branches until the tree has been pruned down to a single branch, the one (and hopefully only one) correct answer.
As much as I like Bertone & Tait’s hexagonal image, it is very focused on ideas in particle physics. Five of the six branches are various forms of dark matter, while the possibility of modified gravity is grudgingly acknowledged in only one. It is illustrated as a dull grey that is unlike the bright, cheerful colors granted to the various flavors of dark matter candidates. To be sure, there are more ideas for solutions to the mass discrepancy problem from the particle physics than anywhere else, but that doesn’t mean they all deserve equal mention. One looking at this diagram might get the impression that the odds of dark matter:modified gravity are 5:1, which seems at once both biased against the latter and yet considerably more generous than its authors likely intended.
There is no mention at all of the data at the roots of the problem. That is all subsumed in the central DARK MATTER, as if we’re looking down at the top of the tree and recognize that it must have a central trunk, but cannot see its roots. This is indeed an apt depiction of the division between physics and astronomy. Proposed candidates for dark matter have emerged primarily from the particle physics community, which is what the hexagon categorizes. It takes for granted the evidence for dark matter, which is entirely astronomical in nature. This is not a trivial point; I’ve often encountered particle physicists who are mystified that astronomers have the temerity to think they can contribute to the dark matter debate despite 100% (not 90%, nor 99%, nor even 99.9%, but 100%) of the evidence for mass discrepancies stemming from observations of the sky. Apparently, our job was done when we told them we needed something unseen, and we should remain politely quiet while the Big Brains figure it out.
For a categorization of solutions, I suppose it is tolerable, if dangerously divorced from the origins of the problem, to leave off the evidence. There is another problem with placing DARK MATTER at the center. This is a linguistic problem that raises deep epistemological issues that most scientists working in the field rarely bother to engage with. Words matter; the names we use frame how we think about the problem. By calling it the dark matter problem, we presuppose the answer. A more appropriate term might be mass discrepancy, which was in use for a while by more careful-minded people, but it seems to have fallen into disuse. Dark matter is easier to say and sounds way more cool.
Jacob Bekenstein pointed out that an even better term would be acceleration discrepancy. That’s what we measure, after all. The centripetal acceleration in spiral galaxies exceeds that predicted by the observed distribution of visible matter. Mass is an inference, and a sloppy one at that: dynamical data only constrain the mass enclosed by the last measured point. The total mass of a dark matter halo depends on how far it extends, which we never observe because the darn stuff is invisible. And of course we only infer the existence of dark matter by assuming that the force law is correct. That gravity as taught to us by Einstein and Newton should apply to galaxies seems like a pretty darn good assumption, but it is just that. By calling it the dark matter problem, we make it all about unseen mass and neglect the possibility that the inference might go astray with that first, basic assumption.
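In symbols (my shorthand here, not standard notation), what a rotation curve actually delivers is an observed centripetal acceleration, to be compared with the acceleration predicted by the visible baryons:

```latex
g_{\rm obs}(r) = \frac{V^2(r)}{r},
\qquad
g_{\rm bar}(r) = \left|\nabla\Phi_{\rm bar}\right|,
\qquad
D(r) \equiv \frac{g_{\rm obs}(r)}{g_{\rm bar}(r)} > 1 .
```

The discrepancy D is the empirical fact. Converting it into an enclosed mass, or attributing it to unseen mass at all, are interpretive steps that assume the force law is correct.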
So I’ve made a new picture, placing the acceleration discrepancy at the center where it belongs. The astronomical observations that inform the problem are on the vertical axis while the logical possibilities for physics solutions are on the horizontal axis. I’ve been very spare in filling in both: I’m trying to trace the logical possibilities with a minimum of bias and clutter, so I’ve retained some ideas that are pretty well excluded.
For example, on the dark matter side, MACHOs are pretty well excluded at this point, as are most (all?) dark matter candidates composed of Standard Model particles. Normal matter just doesn’t cut it, but I’ve left that sector in as a logical possibility that was considered historically and shouldn’t be forgotten. On the dynamical side, one of the first thoughts is that galaxies are big so perhaps the force law changes at some appropriate scale much larger than the solar system. At this juncture, we have excluded all modifications to the force law that are made at a specific length scale.
There are too many lines of observational evidence to do justice to here. I’ve lumped an enormous amount of it into a small number of categorical bins. This is not ideal, but some key points are at least mentioned. I invite the reader to try doing the exercise with pencil and paper. There are serious limits imposed by what you can physically display in a font the eye can read with a complexity limited to that which does not make the head explode. I fear I may already be pushing both.
I have made a split between dynamical and cosmological evidence. These tend to push the interpretation one way or the other, as hinted by the colors. Which way one goes depends entirely on how one weighs rather disparate lines of evidence.
I’ve also placed the things that were known from the outset of the modern dark matter paradigm closer to the center than those that were not. That galaxies and clusters of galaxies needed something more than meets the eye was known, and informed the need for dark matter. That the dynamics of galaxies over a huge range of mass, size, surface brightness, gas fraction, and morphology are organized by a few simple empirical relations was not yet known. The Baryonic Tully-Fisher Relation (BTFR) and the Radial Acceleration Relation (RAR) are critical pieces of evidence that did not inform the construction of the current paradigm, and are not satisfactorily explained by it.
Similarly for cosmology, the non-baryonic cold dark matter paradigm was launched by the observation that the dynamical mass density apparently exceeds that allowed for normal matter by primordial nucleosynthesis. This, together with the need to grow the observed large scale structure from the very smooth initial condition indicated by the cosmic microwave background (CMB), convinced nearly everyone (including myself) that there must be some new form of non-baryonic dark matter particle outside the realm of the Standard Model. Detailed observations of the power spectra of both galaxies and the CMB are important corroborating observations that did not yet exist at the time the idea took hold. We also got our predictions for these things very wrong initially, hence the need to change from Standard CDM to Lambda CDM.
Most of the people I have met who work on dark matter candidates seem to be well informed of cosmological constraints. In contrast, their knowledge of galaxy dynamics often seems to start and end with “rotation curves are flat.” There is quite a lot more to it than that. But, by and large, they stopped listening at “therefore we need dark matter” and were off and running with ideas for what it could be. There is a need to reassess the viability of these ideas in the light of the BTFR and the RAR.
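There is quite a lot more to the RAR than “rotation curves are flat.” The fitting function below is the form published by McGaugh, Lelli & Schombert (2016); the sketch itself is mine, for illustration only, with arbitrary numerical inputs:

```python
import math

# Approximate fitted value of the acceleration scale (m/s^2).
A0 = 1.2e-10

def g_obs(g_bar, a0=A0):
    """Radial acceleration relation: the observed centripetal
    acceleration as a function of the Newtonian acceleration g_bar
    computed from the baryons alone (MLS 2016 fitting function)."""
    return g_bar / (1.0 - math.exp(-math.sqrt(g_bar / a0)))

# High accelerations (g_bar >> a0): the relation goes Newtonian,
# g_obs ~ g_bar, i.e. no acceleration discrepancy.
print(g_obs(1e-8) / 1e-8)                    # ~1

# Low accelerations (g_bar << a0): g_obs -> sqrt(g_bar * a0), the
# deep-MOND limit, so the discrepancy grows as g_bar falls. For a
# point mass this limit implies V^4 = G * M_b * a0, i.e. the BTFR.
print(g_obs(1e-13) / math.sqrt(1e-13 * A0))  # ~1
```

The point of the exercise: one function of one variable, with one acceleration scale, spans the entire observed range, which is exactly the regularity that any viable dark matter candidate now has to reproduce.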
People who work on galaxy dynamics are concerned with the obvious connections between dynamics and the observed stars and are inclined to be suspicious of the cosmological inference requiring non-baryonic dark matter. Over the years, I have repeatedly been approached by eminent dynamicists who have related in hushed tones, lest the cosmologists overhear, that the dark matter must be baryonic. I can understand their reticence, since I was, originally, one of those people who they didn’t want to have overhear. Baryonic dark matter was crazy – we need more mass than is allowed by big bang nucleosynthesis! I usually refrained from raising this issue, as I have plenty of reasons to sympathize, and try to be a sympathetic ear even when I don’t. I did bring it up in an extended conversation with Vera Rubin once, who scoffed that the theorists were too clever by half. She reckoned that if she could demonstrate that Ωm = 1 in baryons one day, that they would have somehow fixed nucleosynthesis by the next. Her attitude was well-grounded in experience.
A common attitude among advocates of non-baryonic dark matter is that the power spectrum of the CMB requires its existence. Fits to the data require a non-baryonic component at something like 100 sigma. That’s pretty significant evidence.
The problem with this attitude is that it assumes General Relativity (GR). That’s the theory in which the fits are made. There is, indeed, no doubt that the existence of cold dark matter is required in order to make the fits in the context of GR: it does not work without it. To take this as proof of the existence of cold dark matter is entirely circular logic. Indeed, that we have to invent dark matter as a tooth fairy to save GR might be interpreted as evidence against it, or at least as an indication that there might exist a still more general theory.
Nevertheless, I do have sympathy for the attitude that any idea that is going to work has to explain all the data – including both dynamical and cosmological evidence. Where one has to be careful is to assume that the explanation we currently have is unique – so unique that no other theory could ever conceivably explain it. By that logic, MOND is the only theory that uniquely predicted both the BTFR and the RAR. So if we’re being even-handed, cold dark matter is ruled out by the dynamical relations identified after its invention at least as much as its competitors are excluded by the detailed, later measurement of the power spectrum of the CMB.
If we believe all the data, and hold all theories to the same high standard, none survive. Not a single one. A common approach seems to be to hold one’s favorite theory to a lower standard. I will not dignify that with a repudiation. The challenge with data, both astronomical and cosmological, is figuring out what to believe. It has gotten better, but you can’t rely on every measurement being right, or – harder to bear in mind – actually measuring what you want it to measure. Do the orbits of gas clouds in spiral galaxies trace the geodesics of test particles in perfectly circular motion? Does the assumption of hydrostatic equilibrium in the intracluster medium (ICM) of clusters of galaxies provide the same tracer of the gravitational potential as dynamics? There is an annoying offset in the acceleration scale measured by the two distinct methods. Is that real, or some systematic? It seems to be real, but it is also suspicious for appearing exactly where the change in method occurs.
One will go mad trying to track down every conceivable systematic. Trust me, I’ve done the experiment. So an exercise I like to do is to ask what theory minimizes the amount of data I have to ignore. I spent several years reviewing all the data in order to do this exercise when I first got interested in this problem. To my surprise, it was MOND that did best by this measure, not dark matter. To this date, clusters of galaxies remain the most problematic for MOND in having a discrepant acceleration scale – a real problem that we would not hesitate to sweep under the rug if dark matter suffered it. For example, the offset the EAGLE simulation requires to [sort of] match the RAR is almost exactly the same amplitude as what MOND needs to match clusters. Rather than considering this to be a problem, they apply the required offset and call it natural to have missed by this much.
Most of the things we call evidence for dark matter are really evidence for the acceleration discrepancy. A mental hang up I had when I first came to the problem was that there’s so much evidence for dark matter. That is a misstatement stemming from the linguistic bias I noted earlier. There’s so much evidence for the acceleration discrepancy. I still see professionals struggle with this, often citing results as being contradictory to MOND that actually support it. They seem not to have bothered to check, as I have, and are content to repeat what they heard someone else assert. I sometimes wonder if the most lasting contribution to science made by the dark matter paradigm is as one giant Asch conformity experiment.
If we repeat today the exercise of minimizing the amount of data we have to disbelieve, the theory that fares best is the Aether Scalar Tensor (AeST) theory of Skordis & Zlosnik. It contains MOND in the appropriate limit while also providing an excellent fit to the power spectrum of galaxies and the CMB (see also the updated plots in their paper). Hybrid models struggle to do both while the traditional approach of simply adding mass in new particles does not provide a satisfactory explanation of the MOND phenomenology. They can be excluded unless we indulge in the special pleading that invokes feedback or other ad hoc auxiliary hypotheses. Similarly, more elaborate ideas like self-interacting dark matter were dead on arrival for providing a mechanism to solve the wrong problem: the cores inferred in dark matter halos are merely a symptom of the more general MONDian phenomenology; the proposed solution addresses the underlying disease about as much as a band-aid helps an amputation.
Does that mean AeST is the correct theory? Only in the sense that MOND was the best theory when I first did this exercise in the previous century. The needle has swung back and forth since then, so it might swing again. But I do hope that it is a step in a better direction.
The dominant paradigm for dark matter has long been the weakly interacting massive particle (WIMP). WIMPs are hypothetical particles motivated by supersymmetry. This is a well-posed scientific hypothesis insofar as it makes a testable prediction: the cold dark matter thought to dominate the cosmic mass budget should be composed of a particle with a mass in the neighborhood of 100 GeV that interacts via the weak nuclear force – hence the name.
That WIMPs couple to the weak nuclear force as well as to gravity is what gives us a window to detect them in the laboratory. They should scatter off of nuclei of comparable mass, albeit only on the rare occasions dictated by the weak force. If we build big enough detectors, we should see it happen. This is what a whole host of massive, underground experiments have been looking for. So far, these experiments have succeeded in failing to detect WIMPs: if WIMPs existed with the properties we predicted them to have, they would have been detected by now.
The failure to find WIMPs has led to the consideration of a myriad of other possibilities. Few of these are as well motivated as the original WIMP. Some have nifty properties that might help with the phenomenology of galaxies. Most are woefully uninformed by such astrophysical considerations, as it is hard enough to do the particle physics without violating some basic constraint.
One possibility that most of us have been reluctant to contemplate is a particle that doesn’t interact at all via strong, weak, or electromagnetic forces. We already know that dark matter cannot interact via electromagnetism, as it wouldn’t be dark. It is similarly difficult to hide a particle that responds to the strong force (though people have of course tried, with strange nuggets in the ’80s and their modern reincarnation, the macro). But why should a particle have to interact at least through the weak force, as WIMPs do? No reason. So what if there is a particle that has zero interaction with standard model particles? It has mass and therefore gravity, but otherwise interacts with the rest of the universe not at all. Let’s call this the Angel Particle, because it will never reveal itself, no matter how much we pray for divine intervention.
I first heard this idea mooted in a talk by Tom Shutt in the early teens. He is a leader in the search for WIMPs, and has been since the outset. So to suggest that the dark matter is something that simply cannot be detected in the laboratory was anathema. A logical possibility to be noted, but only in passing with a shudder of existential dread: the legions of experimentalists looking for dark matter are wasting their time if there is no conceivable signal to detect.
Flash forward a decade, and what was anathema then seems reasonable now that WIMPs remain AWOL. I hear some theorists saying “why not?” with a straight face. “Why shouldn’t there be a particle that doesn’t interact with anything else?”
On the one hand, it’s true. As long as we’re making up particles outside the boundaries of known physics, I know of nothing that precludes us from inventing one that has zero interactions. On the other hand, how would we ever know? We would just give up on laboratory searches, and accept on faith that “gravitational detection” from astronomical evidence is adequate – and indeed, the only possible evidence for invisible mass.
Experimentalists go home! Your services are not required.
To me, this is not physics. There is no way to falsify this hypothesis, or even test it. I was already concerned that WIMPs are not strictly falsifiable. They can be confirmed if found in the laboratory, but if they are not found, we can always tweak the prediction – all the way to this limit of zero interaction, a situation I’ve previously described as the express elevator to hell.
If there is no way to test a hypothesis to destruction, it is metaphysics, not physics. Entertaining the existence of a particle with zero interaction cross-section is a logical possibility, but it is also a form of magical thinking. It provides a way to avoid confronting the many problems with the current paradigm. Indeed, it provides an excuse to never have to deal with them. This way lies madness, and the end of scientific rationalism. We might just as well imagine that angels are responsible for moving objects about.
Indeed, the only virtue of this hypothesis that springs to mind is to address the age-old question: how many angels can dance on the head of a pin? We know from astronomical data that the local density of angel particles must be about 1/4 GeV cm⁻³. Let’s say the typical pin head is a cylinder with a diameter of 2.5 mm and a thickness of 1 mm, giving it a volume of about 5 mm³. Doing a few unit conversions, this means a dark mass of 1 MeV* per pin head, so exactly one angel can occupy the head of a pin if the mass of the Angel particle is 1 MeV.
Of course, we have no idea what the mass of the Angel particle is, so we’ve really only established a limit: 1 MeV is the upper limit for the mass of an angel that can fit on the head of a pin. If it weighs more than 1 MeV, the answer is zero: an angel is too fat to fit on the head of a pin. If angels weigh less than 1 MeV, then they fit in numbers in inverse proportion to their mass. If it is as small as 1 eV, then a million angels can party on the vast dance floor that is the head of a pin.
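For the curious, the back-of-the-envelope arithmetic can be checked in a few lines. The density and pin dimensions are the made-up round numbers from the text, not measured quantities:

```python
import math

# Assumed inputs from the text: local dark matter density ~1/4 GeV/cm^3,
# pin head modeled as a cylinder 2.5 mm across and 1 mm thick.
rho_GeV_cm3 = 0.25
radius_cm, thickness_cm = 0.125, 0.1

volume_cm3 = math.pi * radius_cm**2 * thickness_cm   # ~4.9e-3 cm^3
dark_mass_MeV = rho_GeV_cm3 * volume_cm3 * 1000.0    # ~1.2 MeV per pin head

def angels_per_pin(angel_mass_MeV):
    """How many angels of a given mass fit on the head of a pin."""
    return dark_mass_MeV / angel_mass_MeV

print(f"{dark_mass_MeV:.1f} MeV")   # 1.2 MeV
print(angels_per_pin(1e-6))         # a 1 eV angel brings ~a million friends
```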
So I guess we still haven’t answered the age old question, and it looks like we never will.
*An electron is about half an MeV, so it is tempting to imagine dark matter composed of positronium. This does not work for many reasons, not least of which is that a mass of 1 MeV is a coincidence of the volume of the head of a pin that I made up for ease of calculation without bothering to measure the size of an actual pin – not to mention that the size of pins has nothing whatever to do with the dark matter problem. Another reason is that, being composed of an electron and its antiparticle the positron, positronium is unstable and self-annihilates into gamma rays in less than a nanosecond – rather less than the Hubble time that we require for dark matter to still be around at this juncture. Consequently, this hypothesis is immediately off by a factor of 10²⁸, which is the sort of thing that tends to happen when you try to construct dark matter from known particles – hence the need to make up entirely new stuff.
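As a sanity check on that factor, one can compare the para-positronium lifetime (roughly 125 picoseconds) to the Hubble time; this is a rough order-of-magnitude estimate, not a precise statement:

```python
hubble_time_s = 13.8e9 * 3.156e7    # ~13.8 Gyr in seconds, ~4.4e17 s
positronium_lifetime_s = 1.25e-10   # para-positronium decays in ~125 ps

# Ratio of how long dark matter must survive to how long positronium does:
shortfall = hubble_time_s / positronium_lifetime_s
print(f"{shortfall:.1e}")           # 3.5e+27, i.e. roughly 10^28
```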
God forbid we contemplate that maybe the force law might be broken. How crazy would that be?
I’ve reached the point in the semester teaching cosmology where I’ve gone through the details of what we call the three empirical pillars of the hot big bang:
Hubble Expansion
Primordial [Big Bang] Nucleosynthesis (BBN)
Relic Radiation (aka the Cosmic Microwave Background; CMB)
These form an interlocking set of evidence and consistency checks that leave little room for doubt that we live in an expanding universe that passed through an early, hot phase that bequeathed us the isotopes of the light elements (mostly hydrogen and helium with a dash of lithium) and left us bathing in the relic radiation that we perceive all across the sky as the CMB, the redshifted light of the epoch of last scattering. While I worry about everything, as any good scientist does, I do not seriously doubt that this basic picture is essentially correct.
This basic picture is rather general. Many people seem to conflate it with one specific realization, namely Lambda Cold Dark Matter (LCDM). That’s understandable, because LCDM is the only model that remains viable within the framework of General Relativity (GR). However, that does not inevitably mean it must be so; one can imagine more general theories than GR that contain all the usual early universe results. Indeed, it is hard to imagine otherwise, since such a theory – should it exist – has to reproduce all the successes of GR just as GR had to reproduce all the successes of Newton.
Writing a theory that generalizes GR is a very tall order, so how would we know if we should even attempt such a daunting enterprise? This is not an easy question to answer. I’ve been posing it to myself and others for a quarter century. Answers received range from Why would you even ask that, you fool? to Obviously GR needs to be supplanted by a quantum theory of gravity.
One red flag that a theory might be in trouble is when one has to invoke tooth fairies to preserve it. These are what the philosophers of science more properly call auxiliary hypotheses: unexpected elements that are not part of the original theory that we have been obliged to add in order to preserve it. Modern cosmology requires two:
Non-baryonic cold dark matter
Lambda (or its generalization, dark energy)
LCDM. The tooth fairies are right there in the name.
Lambda and CDM are in no way required by the original big bang hypothesis, and indeed, both came as a tremendous surprise. They are auxiliary hypotheses forced on us by interpreting the data strictly within the framework of GR. If we restrict ourselves to this framework, they are absolute requirements. That doesn’t guarantee they exist; hence the need to conduct laboratory experiments to detect them. If we permit ourselves to question the framework, then we say, gee, who ordered this?
Let me be clear that the data are absolutely clear that something is wrong. There is no doubt of the need for dark matter in the conventional framework of GR. I teach an entire semester course on the many and various empirical manifestations of mass discrepancies in the universe. There is no doubt that the acceleration discrepancy (as Bekenstein called it) is a real set of observed phenomena. At issue is the interpretation: does this indicate literal invisible mass, or is it an indication of the failings of current theory?
Similarly for Lambda. Here is a nice plot of the expansion history of the universe by Saul Perlmutter. The colors delineate the region of possible models in which the expansion either decelerates or accelerates. There is no doubt that the data fall on the accelerating side.
I’m old enough to remember when the blue (accelerating) region of this diagram was forbidden. Couldn’t happen. Data falling in that portion of the diagram would falsify cosmology. The only reason it didn’t is because we could invoke Einstein’s greatest blunder as an auxiliary hypothesis to patch up our hypothesis. That we had to do so is why the whole dark energy thing is such a big deal. Ironically, one can find many theoretical physicists eagerly pursuing modified theories of gravity to explain the need for Lambda without for a moment considering whether this might also apply to the dark matter problem.
When and where one enters the field matters. At the turn of the century, dark energy was the hot, new, interesting problem, and many people chose to work on it. Dark matter was already well established. So much so that students of that era (who are now faculty and science commentators) understandably confuse the empirical dark matter problem with its widely accepted if still hypothetical solution in the form of some as-yet undiscovered particle. Indeed, overcoming this mindset in myself was the hardest challenge I have faced in an entire career full of enormous challenges.
Another issue with dark matter, as commonly conceptualized, is that it cannot be normal matter that happens not to shine as stars. It is very reasonable to imagine that there are dark baryons, and it is pretty clear that there are. Early on (circa 1980), it seemed like this might suffice. It does not. However, it helped the notion of dark matter transition from an obvious affront to the scientific method to a plausible if somewhat outlandish hypothesis to an inevitable requirement for some entirely new form of particle. That last part is key: we don’t just need ordinary mass that is hard to see, we need some form of non-baryonic entity that is completely invisible and resides entirely outside the well-established boundaries of the standard model of particle physics and that has persistently evaded laboratory signals where predicted.
One becomes concerned about a theory when it becomes too complicated. In the case of cosmology, it isn’t just the Lambda and the cold dark matter. These are just a part of a much larger balancing act. The Hubble tension is a latecomer to a long list of tensions among independent observations that have been mounting for so long that I reproduce here a transparency I made to illustrate the situation. That’s right, a transparency, because this was already an issue before the end of the twentieth century.
The details have changed, but the situation remains the same. The chief thing that has changed is the advent of precision cosmology. Fits to CMB data are now so accurate that we’ve lost our historical perspective on the slop traditionally associated with cosmological observables. CMB fits are of course made under the assumption of GR+Lambda+CDM. Rather than question these assumptions when some independent line of evidence disagrees, we assume that the independent line of evidence is wrong. The opportunities for confirmation bias are rife.
I hope that it is obvious to everyone that Lambda and CDM are auxiliary hypotheses. I took the time to spell it out because most scientists have subsumed them so deeply into their belief systems that they forget that’s what they are. It is easy to find examples of people criticizing MOND as a tooth fairy as if dark matter is not itself the biggest, most flexible, literally invisible tooth fairy you can imagine. We expected none of this!
I wish to highlight here one other tooth fairy: feedback. It is less obvious that this is a tooth fairy, since it is a very real physical effect. Indeed, it is a whole suite of distinct physical effects, each with very different mechanisms and modes of operation. There are, for example, stellar winds, UV radiation from massive stars, supernovae when those stars explode, X-rays from compact sources like neutron stars, and relativistic jets from supermassive black holes at the centers of galactic nuclei. The mechanisms that drive these effects occur on scales that are impossibly tiny from the perspective of cosmology, so they cannot be modeled directly in cosmological simulations. The only computer that has both the size and the resolution to do this calculation is the universe itself.
To account for effects below their resolution limit, simulators have come up with a number of schemes for this “sub-grid physics.” Therein lies the rub. There are many different approaches, and they do not all produce the same results. We do not understand feedback well enough to model it accurately as subgrid physics. Simulators usually invoke supernova feedback as the primary effect in dwarf galaxies, while observers tell us that stellar winds do most of the damage on the scale of star forming regions – a scale that is much smaller than the scale simulators are concerned with, that of entire galaxies. What the two communities mean by the word feedback is not the same.
On the one hand, it is normal in the course of the progress of science to need to keep working on something like how best to model feedback. On the other hand, feedback has become the go-to explanation for any observation that does not conform to the predictions of LCDM. In that application, it becomes an auxiliary hypothesis. Many plausible implementations of feedback have been rejected for doing the wrong thing in simulations. But maybe one of those was the right implementation, and the underlying theory is wrong? How can we tell when we keep iterating the implementation to get the right answer?
Bear in mind that there are many forms of feedback. That one word upon which our entire cosmology has become dependent is not a single auxiliary hypothesis. It is more like a Russian nesting doll of multiple tooth fairies, one inside another. Imagining that these different, complicated effects must necessarily add up to just the right outcome is dangerous: anything we get wrong we can just blame on some unknown imperfection in the feedback prescription. Indeed, most of the papers on this topic that I see aren’t even addressing the right problem. Often they claim to fix the cusp-core problem without addressing the fact that this is merely one symptom of the observed MOND phenomenology in galaxies. This is like putting a bandage on an amputation and pretending like the treatment is complete.
The universe is weirder than we know, and perhaps weirder than we can know. This provides boundless opportunity for self-delusion.
It has been two months since my last post. Sorry for the extended silence, but I do have a real job. It is not coincidental that my last post precedes the start of the semester. It has been the best of semesters, but mostly the worst of semesters.
On the positive side, I’m teaching our upper level cosmology course. The students are great, really interested and interactive. Interest has always run high, going back to the first time I taught it (in 1999) as a graduate course at the University of Maryland. Aficionados of web history may marvel at the old course website, which was one of the first of its kind, as was the class – prior to that, graduate level cosmology was often taught as part of extragalactic astronomy. Being a new member of the faculty, it was an obvious gap to fill. I also remember with bemusement receiving Mike A’Hearn (comet expert and PI of Deep Impact) as an envoy from the serious-minded planetary scientists, who wondered if there was enough legitimate substance to the historically flaky subject of cosmology to teach a full three credit graduate course on the subject. Being both an expert and a skeptic, it was easy to reassure him: yes.
That class was large for a graduate level course, being taken in equal numbers by both astronomy and physics students. The astronomers were shocked and horrified that I went so deeply into the background theory to frame the course from the outset, and frequently asked “what’s a metric?” while the physicists loved that part. When we got to observational constraints, you could see the astronomers’ eyes glaze – not the distance scale again – while the physicists desperately asked “what’s a distance modulus?” This dichotomy persists.
This semester’s course is the largest it has ever been, up 70% from previous already-large enrollments. This is consistent with the explosive growth of the field. Interest in the field has never been higher. The number of astronomy majors has doubled over the past decade, having doubled already in the preceding decade.
That’s the good news. The bad news is that over the past four years, our department has been allowed to wither. In 2018, we were the smallest astronomy department in the country, with five tenured professors and an observatory manager who functioned as research faculty. The inevitable retirements that we had warned our administration were coming arrived, and we were allowed to fall off the demographic cliff (a common problem here and at many institutions). Despite the clear demand and the depth, breadth, and diversity of the available talent pool, the only faculty hire we have made in the past decade was an instructor (a rank that differs from a professor in having no research obligations), so now we are a department of two tenured professors and one instructor. I thought we were already small! It boggles the mind when you realize that the three of us are obliged to cover literally the entire universe in our curriculum.
Though always a small department, we managed. Now we don’t manage so much as cling to the edge of the cliff by our fingernails. We can barely cover the required courses for our majors. During the peak of concern about the Covid pandemic, we Chairs were asked to provide a plan for covering courses should one or some of our faculty become ill for an extended period. What a joke. The only “plan” I could offer was “don’t get sick.”
We did at least get along, which is not the case with faculty in all departments. The only minor tension we sometimes encountered was the distribution of research students. A Capstone (basically a senior thesis) is required here, and some faculty wound up with a higher supervisory load than others. That is baked-in now, as we have fewer faculty but more students to supervise.
We have reached a breaking point. The only way to address the problems we face is to hire new faculty. So the solution proffered by the dean is to merge our department into Physics.
Regardless of any other pros and cons, a merger does nothing to address the fundamental problem: we need astronomers to teach the astronomy curriculum. We need astronomers to conduct astronomy research, and to have a critical mass for a viable research community. In short, we need astronomers to do astronomy.
I have been Chair of the CWRU Department of Astronomy for over seven years now. Prof. Mihos served in this capacity for six years before that. No sane faculty member wants to be Chair; it is a service obligation we take on because there are tasks that need doing to serve our students and enable our research. Though necessary, these tasks are a drain on the person doing them, and detract from our ability to help our students and conduct research. Having sustained the department for this long to be told we needn’t have bothered is a deep and profound betrayal. I did not come here to turn out the lights.
Dark matter remains undetected in the laboratory. This has been true for forever, so I don’t know what drives the timing of the recent spate of articles encouraging us to keep the faith, that dark matter is still a better idea than anything else. This depends on how we define “better.”
There is a long-standing debate in the philosophy of science about the relative merits of accommodation and prediction. A scientific theory should have predictive power. It should also explain all the relevant data. To do the latter almost inevitably requires some flexibility in order to accommodate things that didn’t turn out exactly as predicted. What is the right mix? Do we lean more towards prediction, or accommodation? The answer to that defines “better” in this context.
One of the recent articles is titled “The dark matter hypothesis isn’t perfect, but the alternatives are worse” by Paul Sutter. This perfectly encapsulates the choice one has to make in what is unavoidably a value judgement. Is it better to accommodate, or to predict (see the Spergel Principle)? Dr. Sutter comes down on the side of accommodation. He notes a couple of failed predictions of dark matter, but mentions no specific predictions of MOND (successful or not) while concluding that dark matter is better because it explains more.
One important principle in science is objectivity. We should be even-handed in the evaluation of evidence for and against a theory. In practice, that is very difficult. As I’ve written before, it made me angry when the predictions of MOND came true in my data for low surface brightness galaxies. I wanted dark matter to be right. I felt sure that it had to be. So why did this stupid MOND theory have any of its predictions come true?
One way to check your objectivity is to look at it from both sides. If I put on a dark matter hat, then I largely agree with what Dr. Sutter says. To quote one example:
The dark matter hypothesis isn’t perfect. But then again, no scientific hypothesis is. When evaluating competing hypotheses, scientists can’t just go with their guts, or pick one that sounds cooler or seems simpler. We have to follow the evidence, wherever it leads. In almost 50 years, nobody has come up with a MOND-like theory that can explain the wealth of data we have about the universe. That doesn’t make MOND wrong, but it does make it a far weaker alternative to dark matter.
OK, so now let’s put on a MOND hat. Can I make the same statement?
The MOND hypothesis isn’t perfect. But then again, no scientific hypothesis is. When evaluating competing hypotheses, scientists can’t just go with their guts, or pick one that sounds cooler or seems simpler. We have to follow the evidence, wherever it leads. In almost 50 years, nobody has detected dark matter, nor come up with a dark matter-based theory with the predictive power of MOND. That doesn’t make dark matter wrong, but it does make it a far weaker alternative to MOND.
So, which of these statements is true? Well, both of them. How do we weigh the various lines of evidence? Is it more important to explain a large variety of the data, or to be able to predict some of it? This is one of the great challenges when comparing dark matter and MOND. They are incommensurate: the set of relevant data is not the same for both. MOND makes no pretense to provide a theory of cosmology, so it doesn’t even attempt to explain much of the data so beloved by cosmologists. Dark matter explains everything, but, broadly defined, it is not a theory so much as an inference – assuming gravitational dynamics are inviolate, we need more mass than meets the eye. It’s a classic case of comparing apples and oranges.
While dark matter is a vague concept in general, one can build specific theories of dark matter that are predictive. Simulations with generic cold dark matter particles predict cuspy dark matter halos. Galaxies are thought to reside in these halos, which dominate their dynamics. This overlaps with the predictions of MOND, which follow from the observed distribution of normal matter. So, do galaxies look like tracer particles orbiting in cuspy halos? Or do their dynamics follow from the observed distribution of light via Milgrom’s strange formula? The relevant subset of the data very clearly indicate the latter. When head-to-head comparisons like this can be made, the a priori predictions of MOND win, hands down, over and over again. [If this statement sounds wrong, try reading the relevant scientific literature. Being an expert on dark matter does not automatically make one an expert on MOND. To be qualified to comment, one should know what predictive successes MOND has had. People who say variations of “MOND only fits rotation curves” are proudly proclaiming that they lack this knowledge.]
It boils down to this: if you want to explain extragalactic phenomena, use dark matter. If you want to make a prediction – in advance! – that will come true, use MOND.
A lot of the debate comes down to claims that anything MOND can do, dark matter can do better. Or at least as well. Or, if not as well, good enough. This is why conventionalists are always harping about feedback: it is the deus ex machina they invoke in any situation where they need to explain why their prediction failed. This does nothing to explain why MOND succeeded where they failed.
This post-hoc reasoning is profoundly unsatisfactory. Dark matter, being invisible, allows us lots of freedom to cook up an explanation for pretty much anything. My long-standing concern for the dark matter paradigm is not the failure of any particular prediction, but that, like epicycles, it has too much explanatory power. We could use it to explain pretty much anything. Rotation curves flat when they should be falling? Add some dark matter. No such need? No dark matter. Rising rotation curves? Sure, we could explain that too: add more dark matter. Only we don’t, because that situation doesn’t arise in nature. But we could if we had to. (See, e.g., Fig. 6 of de Blok & McGaugh 1998.)
There is no requirement in dark matter that rotation curves be as flat as they are. If we start from the prior knowledge that they are, then of course that’s what we get. If instead we independently try to build models of galactic disks in dark matter halos, very few of them wind up with realistic looking rotation curves. This shouldn’t be surprising: there are, in principle, an uncountably infinite number of combinations of galaxies and dark matter halos. Even if we impose some sensible restrictions (e.g., scaling the mass of one component with that of the other), we still don’t get it right. That’s one reason that we have to add feedback, which suffices according to some, and not according to others.
In contrast, the predictions of MOND are unique. The kinematics of an object follow from its observed mass distribution. The two are tied together by the hypothesized force law. There is a one-to-one relation between what you see and what you get.
From the perspective of building dark matter models, it’s like the proverbial needle in the haystack: the haystack is the volume of possible baryonic disk plus dark matter halo combinations; the one that “looks like” MOND is the needle. Somehow nature plucks the MOND-like needle out of the dark matter haystack every time it makes a galaxy.
Dr. Sutter says that we shouldn’t go with our gut. That’s exactly what I wanted to do, long ago, to maintain my preference for dark matter. I’d love to do that now so that I could stop having this argument with otherwise reasonable people.
Instead of going with my gut, I’m making a probabilistic statement. In Bayesian terms, the odds of observing MONDian behavior given the prior that we live in a universe made of dark matter are practically zero. In MOND, observing MONDian behavior is the only thing that can happen. That’s what we observe in galaxies, over and over again. Any information criterion shows a strong quantitative preference for MOND when dynamical evidence is considered. That does not happen when cosmological data are considered because MOND makes no prediction there. Concluding that dark matter is better overlooks the practical impossibility that MOND-like phenomenology is observed at all. Of course, once one knows this is what the data show, it seems a lot more likely, and I can see that effect in the literature over the long arc of scientific history. This is why, to me, predictive power is more important than accommodation: what we predict before we know the answer is more important than whatever we make up once the answer is known.
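To illustrate what an information-criterion comparison looks like, here is a minimal sketch using the Bayesian Information Criterion. The likelihoods and parameter counts below are made up for illustration, not numbers from any actual fit; the point is only that a model with fewer free parameters can be preferred even when its raw likelihood is slightly worse:

```python
import math

def bic(log_likelihood, n_params, n_data):
    """Bayesian Information Criterion: lower values are preferred."""
    return n_params * math.log(n_data) - 2.0 * log_likelihood

# Illustrative numbers for fitting one rotation curve with 30 data points.
# A MOND fit has essentially one free parameter per galaxy (the stellar
# mass-to-light ratio) plus the global a0; a dark matter halo fit
# typically needs several halo parameters per galaxy.
bic_mond = bic(log_likelihood=-50.0, n_params=2, n_data=30)
bic_dm   = bic(log_likelihood=-48.0, n_params=4, n_data=30)

# Even with a slightly better raw likelihood, the extra halo parameters
# penalize the dark matter fit.
print(bic_mond < bic_dm)
```

The same logic scales up: when each galaxy demands its own halo parameters, the penalty term grows with the sample while MOND’s parameter count barely does.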
The successes of MOND are sometimes minimized by lumping all galaxies into a single category. That’s not correct. Every galaxy has a unique mass distribution; each one is an independent test. The data for galaxies extend over a large dynamic range, from dwarfs to giants, from low to high surface brightness, from gas to star dominated cases. Dismissing this by saying “MOND only explains rotation curves” is like dismissing Newton for only explaining planets – as if every planet, moon, comet, and asteroid weren’t an independent test of Newton’s inverse square law.
MOND does explain more than rotation curves. That was the first thing I checked. I spent several years looking at all of the data, and have reviewed the situation many times since. What I found surprising is how much MOND explains, if you let it. More disturbing was how often I came across claims in the literature that MOND was falsified by X, only to try the analysis myself and find that, no, if you bother to do it right, that’s pretty much just what it predicts. Not in every case, of course – no hypothesis is perfect – but I stopped bothering after several hundred cases. Literally hundreds. I can’t keep up with every new claim, and it isn’t my job to do so. My experience has been that as the data improve, so too does their agreement with MOND.
Dr. Sutter’s article goes farther, repeating a common misconception that “the tweaking of gravity under MOND is explicitly designed to explain the motions of stars within galaxies.” This is an overstatement so strong as to be factually wrong. MOND was explicitly designed to produce flat rotation curves – as was dark matter. However, there is a lot more to it than that. Once we write down the force law, we’re stuck with it. It has lots of other unavoidable consequences that lead to genuine predictions. Milgrom explicitly laid out what these consequences would be, and basically all of them have subsequently been observed. I include a partial table in my last review; it only ends where it does because I had to stop somewhere. These were genuine, successful, a priori predictions – the gold standard in science. Some of them can be explained with dark matter, but many cannot: they make no sense, and dark matter can only accommodate them thanks to its epic flexibility.
Dr. Sutter makes a number of other interesting points. He says we shouldn’t “pick [a hypothesis] that sounds cooler or seems simpler.” I’m not sure which seems cooler here – a universe pervaded by a mysterious invisible mass that we can’t [yet] detect in the laboratory but nevertheless controls most of what goes on out there seems pretty cool to me. That there might also be some fundamental aspect of the basic theory of gravitational dynamics that we’re missing also seems like a pretty cool possibility. Those are purely value judgments.
Simplicity, however, is a scientific value known as Occam’s razor. The simpler of competing theories is to be preferred. That’s clearly MOND: we make one adjustment to the force law, and that’s it. What we lack is a widely accepted, more general theory that encapsulates both MOND and General Relativity.
In dark matter, we multiply entities unnecessarily – there is extra mass composed of unknown particles that have no place in the Standard Model of particle physics (which is quite full up) so we have to imagine physics beyond the standard model and perhaps an entire dark sector because why just one particle when 85% of the mass is dark? and there could also be dark photons to exchange forces that are only active in the dark sector as well as entire hierarchies of dark particles that maybe have their own ecosystem of dark stars, dark planets, and maybe even dark people. We, being part of the “normal” matter, are just a minority constituent of this dark universe; a negligible bit of flotsam compared to the dark sector. Doesn’t it make sense to imagine that the dark sector has as rich and diverse a set of phenomena as the “normal” sector? Sure – if you don’t mind abandoning Occam’s razor. Note that I didn’t make any of this stuff up; everything I said in that breathless run-on sentence I’ve heard said by earnest scientists enthusiastic about how cool the dark sector could be. Bugger Occam.
There is also the matter of timescales. Dr. Sutter mentions that “In almost 50 years, nobody has come up with a MOND-like theory” that does all that we need it to do. That’s true, but for the typo. Next year (2023) will mark the 40th anniversary of Milgrom’s first publications on MOND, so it hasn’t been half a century yet. But I’ve heard recurring complaints to this effect before, that finding the deeper theory is taking too long. Let’s examine that, shall we?
First, remember some history. When Newton introduced his inverse square law of universal gravity, it was promptly criticized as a form of magical thinking: How, Sir, can you have action at a distance? The conception at the time was that you had to be in physical contact with an object to exert a force on it. For the sun to exert a force on the earth, or the earth on the moon, seemed outright magical. Leibniz famously accused Newton of introducing ‘occult’ forces. As a consequence, Newton was careful to preface his description of universal gravity as everything happening as if the force was his famous inverse square law. The “as if” is doing a lot of work here, basically saying, in modern parlance, “OK, I don’t get how this is possible, I know it seems really weird, but that’s what it looks like.” I say the same about MOND: galaxies behave as if MOND is the effective force law. The question is why.
As near as I can tell from reading the history around this (and the record is not entirely clear), it took about twenty years for Newton to realize that there was a good geometric reason for the inverse square law. We expect our freshman physics students to see that immediately. Obviously Newton was smarter than the average freshman, so why did it take so long? Was he, perhaps, preoccupied with the legitimate-seeming criticisms of action at a distance? It is hard to see past a fundamental stumbling block like that, and I wonder if the situation now is analogous. Perhaps we are missing something that will seem obvious in retrospect, distracted by criticisms that will seem absurd in the future.
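The geometric reason, stated in modern terms, is simply that gravity’s influence spreads over concentric spheres whose surface area grows as r². A sketch of the flux-conservation argument freshmen now see:

```latex
% The same total "flux" of gravitational influence passes through every
% sphere of radius r centered on the mass M (Gauss's law for gravity):
g(r)\cdot 4\pi r^2 = 4\pi G M
\quad\Longrightarrow\quad
g(r) = \frac{GM}{r^2}
```

Dilution over an area that grows as r² gives the inverse square automatically; nothing about the law is arbitrary once this picture is in hand.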
Many famous scientists built on the dynamics introduced by Newton. The Poisson equation isn’t named the Newton equation because Newton didn’t come up with it even though it is fundamental to Newtonian dynamics. Same for the Lagrangian. And the classical Hamiltonian. These developments came many decades after Newton himself, and required the efforts of many brilliant scientists integrated over a lot of time. By that standard, forty years seems pretty short: one doesn’t arrive at a theory of everything overnight.
What is the right measure? The integrated effort of the scientific community is more relevant than absolute time. Over the past forty years, I’ve seen a lot of push back against even considering MOND as a legitimate theory. Don’t talk about that! This isn’t exactly encouraging, so not many people have worked on it. I can count on my fingers the number of people who have made important contributions to the theoretical development of MOND. (I am not one of them. I am an observer following the evidence, wherever it leads, even against my gut feeling and to the manifest detriment of my career.) It is hard to make progress without a critical mass of people working on a problem.
Of course, people have been looking for dark matter for those same 40 years. More, really – if you want to go back to Oort and Zwicky, it has been 90 years. But for the first half century of dark matter, no one was looking hard for it – it took that long to gel as a serious problem. These things take time.
Nevertheless, for several decades now there has been an enormous amount of effort put into all aspects of the search for dark matter: experimental, observational, and theoretical. There is and has been a critical mass of people working on it for a long time. There have been thousands of talented scientists who have contributed to direct detection experiments in dozens of vast underground laboratories, who have combed through data from X-ray and gamma-ray observatories looking for the telltale signs of dark matter decay or annihilation, who have checked for the direct production of dark matter particles in the LHC; even theorists who continue to hypothesize what the heck the dark matter could be and how we might go about detecting it. This research has been well funded, with billions of dollars having been spent in the quest for dark matter. And what do we have to show for it?
Zero. Nada. Zilch. Squat. A whole lot of nothing.
This is equal to the amount of funding that goes to support research on MOND. There is no faster way to get a grant proposal rejected than to say nice things about MOND. So on the one hand, we have a small number of people working on the proverbial shoestring, while on the other, we have a huge community that has poured vast resources into the attempt to detect dark matter. If we really believe it is taking too long, perhaps we should try funding MOND as generously as we do dark matter.
I noted last time that, in the rush to analyze the first of the JWST data, “some of these candidate high redshift galaxies will fall by the wayside.” As Maurice Aabe notes in the comments there, this has already happened.
I was concerned because of previous work with Jay Franck in which we found that photometric redshifts were simply not adequately precise to identify the clusters and protoclusters we were looking for. Consequently, we made it a selection criterion when constructing the CCPC to require spectroscopic redshifts. The issue then was that it wasn’t good enough to have a rough idea of the redshift, as the photometric method often provides (what exactly it provides depends in a complicated way on the redshift range, the stellar population modeling, and the wavelength range covered by the observational data that is available). To identify a candidate protocluster, you want to know that all the potential member galaxies are really at the same redshift.
This requirement is somewhat relaxed for the field population, in which a common approach is to ask broader questions of the data like “how many galaxies are at z ~ 6? z ~ 7?” etc. Photometric redshifts, when done properly, ought to suffice for this. However, I had noticed in Jay’s work that there were times when apparently reasonable photometric redshift estimates went badly wrong. So it made the ganglia twitch when I noticed that in early JWST work – specifically Table 2 of the first version of a paper by Adams et al. – there were seven objects with candidate photometric redshifts, and three already had a preexisting spectroscopic redshift. The photometric redshifts were mostly around z ~ 9.7, but the three spectroscopic redshifts were all smaller: two z ~ 7.6, one 8.5.
Three objects are not enough to infer a systematic bias, so I made a mental note and moved on. But given our previous experience, it did not inspire confidence that all the available cases disagreed, and that all the spectroscopic redshifts were lower than the photometric estimates. These things combined to give this observer a serious case of “the heebie-jeebies.”
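A standard way to quantify such offsets in the photometric redshift literature is the normalized residual (z_phot − z_spec)/(1 + z_spec). A sketch using values approximating the three cases quoted above (the exact v1 photometric redshifts varied around z ~ 9.7, so these numbers are illustrative):

```python
# Three objects with both photometric and spectroscopic redshifts,
# approximating the values quoted in the text: z_phot ~ 9.7 versus
# spectroscopic redshifts of ~7.6, ~7.6, and ~8.5.
pairs = [(9.7, 7.6), (9.7, 7.6), (9.7, 8.5)]  # (z_phot, z_spec)

# Conventional photo-z residual, normalized by (1 + z_spec).
offsets = [(zp - zs) / (1.0 + zs) for zp, zs in pairs]
mean_offset = sum(offsets) / len(offsets)

# All three residuals are positive: every photometric estimate is high.
print(all(dz > 0 for dz in offsets), round(mean_offset, 3))
```

A mean normalized offset of order 0.2 is enormous by photo-z standards, where a few percent is the usual goal, but with only three objects it remains anecdote rather than statistics, which is the point of the paragraph above.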
Adams et al. have now posted a revised analysis in which many (not all) redshifts change, and change by a lot. Here is their new Table 4:
There are some cases here that appear to confirm and improve the initial estimate of a high redshift. For example, SMACS-z11e had a very uncertain initial redshift estimate. In the revised analysis, it is still at z~11, but with much higher confidence.
That said, it is hard to put a positive spin on these numbers. 23 of 31 redshifts change, and many change drastically. Those that change all become smaller. The highest surviving redshift estimate is z ~ 15 for SMACS-z16b. Among the objects with very high candidate redshifts, some are practically local (e.g., SMACS-z12a, F150DB-075, F150DA-058).
So… I had expected that this could go wrong, but I didn’t think it would go this wrong. I was concerned about the photometric redshift method: how well we can model stellar populations, especially at young ages dominated by short-lived stars that in the early universe are presumably lower in metallicity than well-studied nearby examples; the degeneracies between galaxies at very different redshifts that present similar colors over a finite range of observed passbands; dust (the eternal scourge of observational astronomy, expected to be an especially severe affliction in the ultraviolet that gets redshifted into the near-IR for high-z objects, both because dust is very efficient at scattering UV photons and because this efficiency varies a lot with metallicity and the exact grain size distribution of the dust); and when a dropout is really a dropout indicating the location of the Lyman break, rather than just a lousy upper limit on a shabby detection. I could go on, but I think I already have. It will take time to sort these things out, even in the best of worlds.
We do not live in the best of worlds.
It appears that a big part of the current uncertainty is a calibration error. There is a pipeline for handling JWST data that has an in-built calibration for how many counts in a JWST image correspond to what astronomical magnitude. The JWST instrument team warned us that the initial estimate of this calibration would “improve as we go deeper into Cycle 1” – see slide 13 of Jane Rigby’s AAS presentation.
I was not previously aware of this caveat, though I’m certainly not surprised by it. This is how these things work – one makes an initial estimate based on the available data, and one improves it as more data become available. Apparently, JWST is outperforming its specs, so it is seeing as much as 0.3 magnitudes deeper than anticipated. This means that people were inferring objects to be that much too bright, hence the appearance of lots of galaxies that seem to be brighter than expected, and an apparent systematic bias to high z for photometric redshift estimators.
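To see why 0.3 magnitudes matters so much: the magnitude scale is logarithmic, so a 0.3 mag zero-point error corresponds to a flux error of roughly 30%. A quick worked example (arithmetic only; not numbers from the Adams et al. analysis):

```python
# A magnitude difference delta_m corresponds to a flux ratio of
# 10**(delta_m / 2.5), by the definition of the magnitude scale.
delta_m = 0.3
flux_ratio = 10 ** (delta_m / 2.5)

# With the old calibration, sources were inferred to be ~32% brighter
# than they really are.
print(round(flux_ratio, 3))
```

An error of that size propagates directly into luminosities and stellar masses, and systematically skews photometric redshift fits, which is consistent with the apparent excess of bright high-z candidates.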
I was not at the AAS meeting, let alone Dr. Rigby’s presentation there. Even if I had been, I’m not sure I would have appreciated the potential impact of that last bullet point on nearly the last slide. So I’m not the least bit surprised that this error has propagated into the literature. This is unfortunate, but at least this time it didn’t lead to something as bad as the Challenger space shuttle disaster in which the relevant warning from the engineers was reputed to have been buried in an obscure bullet point list.
So now we need to take a deep breath and do things right. I understand the urgency to get the first exciting results out, and they are still exciting. There are still some interesting high z candidate galaxies, and lots of empirical evidence predating JWST indicating that galaxies may have become too big too soon. However, we can only begin to argue about the interpretation of this once we agree on what the facts are. At this juncture, it is more important to get the numbers right than to post early, potentially ill-advised takes on arXiv.
That said, I’d like to go back to writing my own ill-advised take to post on arXiv now.