Paradigm Shifts in Modern Astrophysics

I see that I’ve been posting once a month so far in 2026. I’ve lots to say but no time to say it. Some of it good, some of it bad, maybe sometime I’ll get around to it. No guarantees. On the good side, I’ve been working on a big project or two; may have something to say about those soon. I’ve also been meaning to write about the Planet 9 anomaly for months stretching into years now. Fascinating stuff related to MOND but not something I’ve worked on myself. On the bad side, I’ve been obliged to waste yet more time on my university administration’s insistence on merging our department into physics based on a snap decision made by a disinterested leader who employed all the forethought typically reserved for bombing a random country in the Middle East.

So I have had no time for novel posts lately, and today is no different. However, I thought readers of this blog would appreciate the post Paradigm Shifts in Modern Astrophysics: Applying Thomas Kuhn’s The Structure of Scientific Revolutions to Dark Matter at Heritage Diner that was pointed out to me by Moti Milgrom. Since I wouldn’t have seen it had he not mentioned it, perhaps that’s the case for you as well. I’m not gonna re-post it verbatim – you can read it there yourself – but I am going to offer a running commentary with a few observations, both personal and historical. So bring it up in a separate browser window and let’s read along…

This post riffs off of Kuhn’s The Structure of Scientific Revolutions as it pertains to dark matter and MOND. If you’re not familiar with it, Kuhn’s work on the philosophy of science is foundational to the way in which a lot of physical scientists approach their field (whether they realize it or not). Philosophers of science have done a lot more since then, but I’m not going to attempt to go there. I will look back to Popper* to note that I’ve heard Kuhn depicted as being some sort of antithesis to Popper. I don’t see it that way. To be pithy, Popper tells us how science should be done while Kuhn tells us how it is done. Who could have imagined that a human endeavor would be messy in practice and not always live up to its ideal?

I’m not sure how to do this; I guess I’ll excerpt relevant quotes and riff off those. The basic thesis is that dark matter is on the brink of a Kuhnian paradigm shift.

We are living through exactly that moment in modern astrophysics.

I certainly hope so! This moment in the history of science is taking a long damn time. A century ago, we went from “classical physics explains everything” to “quantum mechanics, WTF?” in the space of about a decade. I’ve been working on matters related to MOND for over thirty years now, dark matter longer than that, and of course Milgrom started more than a decade before I did.

The essay discusses the “cartography of collapse,” which includes crisis and revolution:

The third stage is crisis — triggered when anomalies accumulate beyond the paradigm’s absorptive capacity. And the fourth is revolution, in which a new framework displaces the old not through incremental persuasion but through a gestalt shift, what Kuhn famously described as seeing the same duck-rabbit drawing and suddenly recognizing a rabbit where you had always seen a duck.

This resonated with me because I had exactly this experience. I started my career as much a believer in dark matter as anyone. I was barely aware that MOND existed (this seems to remain a common condition). But it reared its ugly head in my own data for low surface brightness galaxies. Try as I might – and I tried mighty hard, for a long time – I could not reconcile how the shapes of rotation curves depended on surface brightness as they should according to Newton while simultaneously lying exactly on the Tully-Fisher relation without any hint of dependence on surface brightness+. I could explain one or another, but not both simultaneously – at least, not without engaging in some form of tautology that made it so. I came up with a lot of those, and that has been a full-time occupation for many theorists ever since.

For me, this gradually became a genuine crisis. I pounded my head against the wall for months. Then, as I was wrestling with this problem, I happened to attend a talk by Milgrom. I almost didn’t go. I remember thinking “modified gravity? Who wants to hear about that?” But I did, and in a few short lines on the board, Milgrom derived from MOND exactly the result I found so confusing in terms of dark matter. This chance meeting in Middle Earth (Cambridge, UK) changed how I saw the universe. The change wasn’t immediate – it had to ferment a while – but ultimately I found myself asking myself over and over how this stupid theory could have its predictions come true when there was so much evidence for dark matter. Finally I realized that the evidence for dark matter assumes that gravity is normal; really it was just evidence of a discrepancy, and it could be that the assumption was at fault. That realization was sudden: where I’d always seen a duck, suddenly I could also see a rabbit.

Most scientists have not had this experience. What constitutes a crisis serious enough to contemplate a paradigm change is a highly personal matter of judgement. It happened in my data, so I took it seriously, but others didn’t care. So I made predictions for their data. Some of those came true, but they rejected the evidence of their own data. It just could not be so! At what point does a mere problem amount to a true anomaly?

Part of the sociological issue is that the dark matter paradigm has been in a constant state of crisis since its inception. The reasons vary over time. Sometimes valid solutions have been found to the crisis du jour, other times we’ve chosen to just live with it. It is much easier to live with a bad solution than to rethink one’s entire world view.

The problem with being in a constant state of crisis is that it seems like nothing can ever be a genuine crisis. Every foundational change is just another new normal. We complain, say it can’t be so, argue, offer bad ideas, reject them, get used to them, then eventually accept that one of them maybe isn’t so bad, so that must be what is going on. After a few years, It is Known, and people convince themselves that we expected just that all along.

It takes a lot of evidentiary weight for a paradigm to change, and it takes a lot of time for that to accumulate. But, as Kuhn recognized, mere facts are not enough. Humans and their attitudes matter. As Feyerabend noted,

The normal component [i.e. the accepted paradigm and its adherents] is large and well entrenched. Hence, a change of the normal component is very noticeable. So is the resistance of the normal component to change. This resistance becomes especially strong and noticeable in periods where a change seems to be imminent.

P. Feyerabend in Criticism and the Growth of Knowledge

The post correctly points out that dark matter itself was an anomaly going back to Zwicky in 1933. This is often depicted^ as the first detection of dark matter, but it was also noted by Oort in 1932. Zwicky was aware of Oort’s work and cited him, but they’re very different results. Oort was worried about a factor of ~2 discrepancy in stellar dynamics in our local chunk of the Milky Way; Zwicky discovered a discrepancy of a factor of ~1000 in the Coma cluster of galaxies. These both imply the need for unseen mass, but the results are not at all the same. In retrospect, Oort’s discrepancy is a subtle detection of a flat rotation curve while Zwicky’s discrepancy was (at least) two distinct discrepancies: what we now consider the usual cosmic dark matter, but also missing baryons: most of the normal matter in clusters is in the hot, diffuse intracluster medium, not in the stars in the galaxies that Zwicky could see and account for. The modern discrepancy is only a factor of ~6, which is rather less than 1,000. (The distance scale also played a role in exaggerating Zwicky’s result.)

This all seemed crazy in the 1930s, even in the immediate aftermath of the quantum revolution. Consequently, Zwicky’s work was mostly ignored$. The subject of dark matter didn’t really take off until the 1970s. Considerable credit goes (rightly) to Vera Rubin, though many others made essential contributions – just on the subject of rotation curves, Albert Bosma, Mort Roberts, and Seth Shostak all did important work, the relative importance of which depends on who you ask.

An important aspect of scientific revolutions is persistence. Vera was persistent. She was fond of relating the story of showing her first (1970) flat rotation curve of Andromeda to Allan Sandage, only to have him dismiss it as “the effect of looking at a bright galaxy.” What the heck did that mean? Nothing, of course – it is the sort of stupid thing that smart people say when confronted with the inconceivable. So Vera persisted, and by the end of the decade had shown that flat rotation curves were the rule, not some strange exception. They became accepted as a de facto law of nature, and the dark matter interpretation was solidly in place by 1980.

The scientific community absorbed this anomaly not by questioning Newtonian gravity or Einstein’s general relativity, but by proposing an invisible scaffolding — a halo of non-luminous, non-interacting matter surrounding every galaxy. Dark matter became not a crisis but a patch.

Indeed, this seemed the most appropriate (scientifically conservative) course of action at the time, as summarized in this exchange (also from the early 1980s):

To emphasize the essence of what is said here:

Tohline: I might be so bold as to suggest that the validity of Newton’s law should now be seriously questioned.

Rubin: The point you raise is worth keeping in mind although I believe most of us would rather alter Newtonian gravitational theory only as a last resort.

This was a very reasonable attitude, at the time. But I’ve heard the phrase “only as a last resort” many times now over the course of many years from many different scientists. At what point have we reached the last resort? In the case of dark matter, once we’ve convinced ourselves that invisible mass has to exist, how can we possibly disabuse ourselves of that notion, should it happen to be wrong?

In Kuhnian terms the last resort is reached when the weight of anomalies in the standard paradigm becomes too great to sustain. But that point is never reached for many die-hard adherents. Whatever the right answer about dark matter turns out to be, I’m sure many brilliant people will go to their graves in denial. Hence the more cynical phrase

Science progresses one funeral at a time.%

But does it? What if the adherents of an ingrained but incorrect paradigm breed faster than they go away? I’ve seen True Believers train graduate students who’ve gone on to train students of their own. Each generation seems to accept without serious examination the inadequate explanations for the anomalies made by their antecedents, so the weight of the anomalies doesn’t accumulate; instead, each one gets swept separately under the proverbial rug and forgotten. Forgetting is important: when new anomalies come to light, hands are waved and new explanations are promulgated; no one checks if the new explanations contradict the previous generation of explanations. What passed before is a solved problem, and we need never speak of it again.

This is not a recipe for a scientific revolution, but for a thousand years of dark epicycles.

Returning to the post,

By the late 1980s and early 1990s, dark matter had been formally incorporated into the reigning cosmological framework. Lambda-CDM — where Lambda refers to the cosmological constant (a proxy for dark energy) and CDM stands for Cold Dark Matter — became the standard model of cosmology.

The essence of this statement is correct but some of the details are not. Dark matter was widely accepted by 1980. That’s still a little before my time, but my impression is that the magnitude of the discrepancy was at first a factor of two, so it could simply have been normal baryons that were hard to see. However, the discrepancy rapidly snowballed to an order of magnitude, so we needed something non-baryonic. This was happening simultaneously with talk of supersymmetry and grand unified theories in particle physics that could readily provide new particles to be candidates for the dark matter, leading to the shotgun marriage of particle physics and cosmology, two communities that had had little to do with each other before then, and which still make an odd couple. Cosmology as traditionally practiced by astronomers needed dark matter but didn’t much care what it was; particle physics was all about the possibility of new particles but didn’t care about the details of the astronomical evidence.

To rephrase the above quote, I think it is fair to say that “by the late 1980s and early 1990s, cold dark matter had been formally incorporated into the reigning cosmological framework.” But that framework was not yet LCDM, it was Ωm = 1 SCDM. The Lambda only came to prominence by the end of the 1990s, as I’ve related elsewhere. This process is depicted by many scientists as a revolution in itself, and in many regards it was. The cosmological constant had been very far out of favor; rehabilitating it was a grueling experience and no trivial matter. But it wasn’t really a scientific revolution in the sense that Kuhn meant: our picture didn’t fundamentally change, we just learned to accept a parameter& that was already there but that we didn’t like.

The post goes on to note the absence of dark matter detections:

This silence is itself an anomaly… as the silence deepens, the null result itself becomes harder to dismiss.

This is correct, and yet… Physicists have built many experiments that have achieved extraordinary sensitivities. If cold dark matter was composed of WIMPs as originally hypothesized, we would have detected them long ago. Initially, the reaction was to modify WIMPs. Did we say the cross-section would be 10⁻³⁹ cm²? We meant 10⁻⁴⁴ cm². When that was excluded, we slid the cross section still lower, but people also started giving themselves permission to think the unthinkable. By unthinkable I mean a particle that can’t be detected, not modified gravity. That’s more unthinkable. So the anomaly isn’t dismissed, but it is treated with less gravity than it should be, and certainly with less import than a positive detection would have been granted. Did we say WIMPs? We didn’t mean just WIMPs. It could be anything. (They damn well meant WIMPs and only WIMPs#. Anyone who tells you otherwise is gaslighting*% you, and probably themselves.)

The post goes on to talk about MOND. It gives me too much credit for the gravitational lensing work. This was done by Tobias Mistele, and our work is based on that of Brouwer et al. But it is correct to note that these data are a problem for the dark matter paradigm. Rotation curves remain flat beyond where dark matter halos should end. If correct, this is a genuine anomaly. Perhaps in some distant future it will be recognized** as such in retrospect; at present it seems mostly to be ignored.

It goes on to talk about the JWST observations. Yeah, that part is correct. The community seems to be in the usual process of gaslighting itself into denial of the anomaly. For the first two years after JWST started returning images of the deep universe, people were aghast. How can this be so? It was all anyone could talk about. But then the unexpected became the new normal. Hands were waved, star formation was accepted to be absurdly efficient, and people accepted the impossible. I no longer hear the talk of how problematic the JWST observations are; this chatter simply stopped.

Anomalies don’t weigh a paradigm down if we don’t accept that they’re anomalies. But even having lived through the revolution myself, it’s hard to see a positive outcome while it is still ongoing. For it is certainly true that

What waits on the other side of the dark matter revolution — if that is what is coming — we cannot yet know.

The future is the unknown territory. We don’t know, and can never know, if dark matter doesn’t exist – it is impossible to prove the negative. But we do know MOND works much better than it should in a universe made of dark matter. That demands a scientific explanation that is still wanting. But MOND by itself is not a complete answer, so we are like the blind men in the parable of the blind men and the elephant, each sensing a different part of reality but as yet unable to see the whole.

Still, there is reason for optimism. The article closes by noting that

Kuhn’s deepest insight was not that science changes. It is that the change, when it comes, is never merely technical. It is a reorganization of the world itself — the universe seen suddenly whole in a configuration it has always had, but that we had simply lacked the paradigm to perceive.

Not knowing how things ultimately work out is good, actually. One way or the other, there is still fundamental science to be done. We have not reached the stage of looking for our discoveries in the sixth place of decimals.


*Trivia I just learned looking at Popper’s wikipedia page: he was spending his last days in London around the same time I was a postdoc in Cambridge just starting to struggle with the scientific and philosophical implications of the dark matter-MOND miasma.

Unrelated trivia: I was at a workshop in Jerusalem early in the century but missed the opportunity to meet Jacob Bekenstein because I was too shy to bother the great man.

+If you do not find this confusing, you are not thinking clearly.

^A nice, brief summary of this early history is related by Einasto. This is the first place I’ve seen the citation to Opik (1915) written out. I’ve only heard it mentioned verbally before, so I’ll have to try looking that up later.

The full story is way more complicated than this sounds, and still gets debated off and on. The amplitude of the Oort discrepancy is much smaller today. Locally, the 3D density of mass seems to be accounted for by known stars, gas, and stellar remnants (which were still a new thing in the 1930s). So this Oort limit shows no discrepancy. There remains a modest discrepancy in the 2D dynamical surface density. It appears to me to boil down to the vertical restoring force having a (sometimes ignored) term that depends on the gradient of the rotation curve. Were that falling in a normal Newtonian way, there would be no discrepancy. But it isn’t; this deviation from Newton in the radial direction leads to the Oort discrepancy in the vertical direction. Instead of being as negative as Newton predicts, dV/dR is close to zero, hence my description of this as an indirect detection of a[n almost] flat rotation curve. (dV/dR = -1.7 km/s/kpc, so not exactly zero, but a lot closer to zero than Newton without dark matter would have it be.) The vertical discrepancy is nevertheless much reduced, now being well below a factor of two.

$To his apparently great embitterment. He had some choice things to say about astronomers of his time. I am inclined to suspect that those who praise Zwicky the loudest today would have been among those he had reason to complain about had they been contemporaries.

%This is attributed to Planck, but he had a lot more nuanced things to say about it in his Nobel Prize lecture.

&Einstein disavowed the cosmological constant as his “greatest blunder,” so one argument against it was (for a long time) that it should never have been a part of the theory of General Relativity in the first place. I wonder how things might have gone had that been the case – that he had never introduced Lambda. Perhaps then the data that led to us accepting Lambda would have required a genuine revolution, but it isn’t obvious that we would have accepted it (we might still be debating it), nor is it apparent that LCDM is what comes out of such a revolution. But we don’t get to do that experiment: the Great Man had suggested Lambda, so it was OK to bring it back: we weren’t wrecking his theory by introducing a crazy new entity, we were just admitting an unlikely (antigravity-like) component thereof.

#Or axions! Or warm or self-interacting dark matter. Or macros nee strange nuggets! Or or or… Sure, there have been lots of ideas for what the dark matter could be. But when we say that “by the late 1980s and early 1990s, cold dark matter had been formally incorporated into the reigning cosmological framework” what the vast majority of scientists working on the topic (including myself) meant was that CDM == WIMPs. We were aggressively derisive of other ideas, and these are only dredged up again now because of the experimental non-detection of WIMPs. WIMPs are still a better dark matter candidate than the others for the same reasons that we were derisive of the others back in the day. We haven’t been looking as hard for the others, so comparable experimental limits do not yet exist. To quote myself,

The concept of dark matter is not falsifiable. If we exclude one candidate, we are free to make up another one. After WIMPs, the next obvious candidate is axions. Should those be falsified, we invent something else. (Particle physicists love to do this. The literature is littered with half-baked dark matter candidates invented for dubious reasons, often to explain phenomena with obvious astrophysical causes. The ludicrous uproar over the ATIC and PAMELA cosmic ray experiments is a good example.)

McGaugh (2008)

*%An easy way to deflate such gaslighting is to ask why so many experiments have been built to search for WIMPs but not all these other allegedly great dark matter candidates. After a pause and dismayed stare, you’ll probably get an answer about “looking under the lamp post” because that’s where it is possible to make detections. That’s sorta true, but it isn’t the real reason. The real reason is that we all drank the Kool-Aid of the WIMP miracle, so genuinely believed that the dark matter had to be WIMPs, not merely that they were a convenient experimental target. (I did not chug the Kool-Aid as hard as the people who based entire careers on building WIMP detection experiments, but I did buy into the idea to the exclusion of other possibilities for dark matter – as did most everyone else.)

**In retrospect, Galileo’s observations of the angular size and phases of Venus were utterly fatal to the geocentric paradigm. That’s easy to say now; at the time it was just another piece of evidence.

Has dark matter been detected in the Milky Way?

If a title is posed as a question, the answer is usually

No.

There has been a little bit of noise that dark matter might have been detected near the center of the Milky Way. The chatter seems to have died down quickly, for, as usual, this claim is greatly exaggerated. Indeed, the claim isn’t even made in the actual paper so much as in the scuttlebutt# related to it. The scientific claim that is made is that

The halo excess spectrum can be fitted by annihilation with a particle mass mχ ≈ 0.5–0.8 TeV and cross section σv ≈ (5–8)×10⁻²⁵ cm³ s⁻¹ for the bb̄ channel.

Totani (2025)

What the heck does that mean?

First, the “excess spectrum” refers to a portion of the gamma ray emission detected by the Fermi telescope that exceeds that from known astrophysical sources. This signal might be from a WIMP with a mass in the range of 500 – 800 GeV. That’s a bit heavier than originally anticipated (~100 GeV), but not ridiculous. The cross-section quantifies the probability for annihilation into bottom quarks and anti-quarks. (The Higgs boson can decay into b quarks.)

Astrophysical sources at the Galactic center

There is a long-running issue with the interpretation of excess signals as dark matter. Most of the detected emission is from known astrophysical sources, hence the term “excess.” There being an excess implies that we understand all the sources. There are a lot of astrophysical sources at the Galactic center:

The center of the Milky Way as seen by the South African MeerKAT radio telescope with a close up from JWST. Image credit: NASA, ESA, CSA, STScI, SARAO, S. Crowe (UVA), J. Bally (CU), R. Fedriani (IAA-CSIC), I. Heywood (Oxford).

As you can see, the center of the Galaxy is a busy place. It is literally the busiest place in the Galaxy. Attributing any “excess” to non-baryonic dark matter is contingent on understanding all of the astrophysical sources so that they can be correctly subtracted off. Looking at the complexity of the image above, that’s a big if, which we’ll come back to later. But first, how does dark matter even come into a discussion of emission from the Galactic center?

Indirect WIMP detection

Dark matter does not emit light – not directly, anyway. But WIMP dark matter is hypothesized to interact with Standard Model particles through the weak nuclear force, which is what provides a window to detect it in the laboratory. So how does that work? Here is the notional Feynman diagram:

Conceivable interactions between WIMPs (X) and Standard Model particles (q). The diagram can be read left to right to represent WIMPs scattering off of atomic nuclei, top to bottom to represent WIMPs annihilating into Standard Model particles, or bottom to top to represent the production of dark matter particles in high energy collisions.

The devious brilliance of this Feynman diagram is that we don’t need to know how the interaction works. There are many possibilities, but that’s a detail – that central circle is where the magic happens; what exactly that magic is can remain TBD. All that matters is that it can happen (with some probability quantified by the interaction cross-section), so all the pathways illustrated above should be possible.

Direct detection experiments look for scattering of WIMPs off of nuclei in underground detectors. They have not seen anything. In principle, WIMPs could be created in sufficiently high-energy collisions of Standard Model particles. The LHC has more than adequate energy to produce dark matter particles in this way, but no such signal has been seen$. The potential signal we’re discussing here is an example of indirect detection. There are a number of possibilities for this, but the most obvious^ one follows from WIMPs being their own anti-particles, so they occasionally meet in space and annihilate into Standard Model particles.

The most obvious product of WIMP annihilations is a pair of gamma rays, hence the potential for the Fermi gamma ray telescope to detect them. Here is a simulated image of the gamma ray sky resulting from dark matter annihilations:

Simulated image from the via Lactea II simulation (Fig. 1 of Kuhlen et al. 2008).

The dark regions in this rendering are the brightest in gamma rays, corresponding to where the dark matter density is highest. That includes the center of the Milky Way (white circle) and also sub-halos that might contain dwarf satellite galaxies.

Since we don’t really know how the magic interaction happens, but have plenty of theoretical variations, many other things are also possible, some of which might be cosmic rays:

Fig. 3 of Topchiev et al. (2017) illustrating possible decay channels for WIMP annihilations. Gamma rays are one inevitable product, but other particles might also be produced. These would be born with energies much higher than their rest masses (~100 GeV, while electrons and positrons have masses of 0.5 MeV) so would be moving near the speed of light. In effect, dark matter could be a source of cosmic rays.

The upshot of all this is that the detection of an “excess” of unexpected but normal particles might be a sign of dark matter.

Sociology: different perspectives from different communities

A lot hinges on the confidence with which we can disentangle expected from unexpected. Once we’ve accounted for the sources we already knew about, there are always new sources to be discovered. That’s astronomy. So initially, the communal attitude was that we shouldn’t claim a signal was due to dark matter until all astrophysical signals had been thoroughly excluded. That never happened: we just kept discovering new astrophysical sources. But at some point, the communal attitude transformed into one of eager credulity. It was no longer embarrassing to make a wrong claim; instead, marginal and dubious claims were made eagerly in the hopes of claiming a Nobel prize. If it didn’t work out, oh well, just try again. And again and again and again. There is apparently no shame in claiming to see the invisible when you’re completely convinced it is there to be seen.

This switch in sociology happened in the mid to late ’00s as people calling themselves astroparticle& physicists became numerous. These people were remarkably uninterested in astrophysics or astrophysical sources in their own right but very interested in dark matter. They were quick to claim that any and every quirk in data was a sign of dark matter. I can’t help but wonder if this behavior is inherited from the long drought in interesting particle collider results, which gradually evolved into a propensity for high energy particle phenomenologists to leap on every two-sigma blip as a sign of new physics, dumping hundreds of preprints on arXiv after each signal of marginal significance was announced. It is always a sprint to exercise the mental model-building muscles and make up some shit in the brief weeks before the signal inevitably goes away again.

Let’s review a few examples of previous indirect dark matter detection claims.

Cosmic rays from Kaluza-Klein dark matter – or not

This topic has a long and sordid history. In the late ’00s, there were numerous claims of an excess in cosmic rays: ATIC saw too many electrons for the astrophysical background, and PAMELA saw an apparent rise in the positron fraction, perhaps indicating a source with a peak energy around 620 GeV. (If the signal is from dark matter, the rest mass of the WIMP is imprinted in the energy spectrum of its annihilation products.) The combination of excess electrons and extra positrons seemed fishy enough* to some to point to new physics: dark matter. There were of course more sober analyses, for example:

Fig. 3 from Aharonian et al. (2009): The energy spectrum E³ dN/dE of cosmic-ray electrons measured by H.E.S.S. and balloon experiments. Also shown are calculations for a Kaluza-Klein signature in the H.E.S.S. data with a mass of 620 GeV and a flux as determined from the ATIC data (dashed-dotted line), the background model fitted to low-energy ATIC and high-energy H.E.S.S. data (dashed line) and the sum of the two contributions (solid line). The shaded regions represent the approximate systematic error as in Fig. 2.

A few things to note about this plot: first, the data are noisy – science is hard. The ATIC and H.E.S.S. data are not really consistent – one shows an excess, the other does not. The excess is over a background model that is overly simplistic – the high energy astrophysicists I knew were shouting that the apparent signal could easily be caused by a nearby pulsar##. The advocates for a detection in the astroparticle community simply ignored this point, or if pressed, asserted that it seemed unlikely.

One problem that arose with the dark matter interpretation was that there wasn’t enough of it. Space is big and the dark matter density is low, so it is hard to get WIMPs together to annihilate. Indeed, the expected signal scales as the square of the WIMP density, so is very sensitive to just how much dark matter is lurking about. The average density in the solar neighborhood needed to explain astronomical data is around 0.3 to 0.4 GeV cm⁻³; this falls short of producing the observed signal (if real) by a factor of ~500.
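For concreteness, here is a minimal back-of-the-envelope sketch of that density-squared scaling. It is my own illustration, not a calculation from any particular paper; the inputs are just example values quoted elsewhere in this post (a local density of ~0.4 GeV cm⁻³, a ~620 GeV particle, the canonical thermal cross section).

```python
# The annihilation rate per unit volume goes as n^2 <sigma v>, where
# n = rho / m_chi is the WIMP number density; the factor 1/2 avoids double
# counting pairs of identical particles.  All numbers are example values
# quoted in the surrounding text, used purely for illustration.

rho_local = 0.4e9       # eV/cm^3 (0.4 GeV/cm^3, nominal local dark matter density)
m_chi     = 620e9       # eV      (the ~620 GeV mass once suggested by the cosmic ray data)
sigma_v   = 2.2e-26     # cm^3/s  (canonical thermal relic cross section)

n = rho_local / m_chi                  # WIMPs per cm^3
rate = 0.5 * n**2 * sigma_v            # annihilations per cm^3 per second, smooth halo
print(f"n ~ {n:.1e} per cm^3; smooth-halo rate ~ {rate:.1e} per cm^3 per s")

# Because the signal scales as rho^2, recovering a factor of ~500 from a
# smooth halo would require sqrt(500) ~ 22 times the nominal local density.
print(f"density needed for a 500x larger signal: {500**0.5:.0f}x local")
```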

An ordinary scientist might have taken this setback as a sign that he$$ was barking up the wrong tree. Not to be discouraged, the extraordinary astroparticle physicists started talking about the “boost factor.” If there is a region of enhanced dark matter density, then the gamma ray/cosmic ray signal would be boosted, potentially by a lot given the density-squared dependence. This is not quite as crazy as it sounds, as cold dark matter halos are predicted to be lumpy: there should be lots of sub-halos within each halo (and many sub-sub halos within those, right the way down). So, what are the odds that we happen to live near enough to a subhalo that could result in the required boost factor?

The odds are small but nonzero. I saw someone at a conference in 2009 make a completely theoretical attempt to derive those odds. He took a merger tree from some simulation and calculated the chance that we’d be near one of these lumps. Then he expanded that to include a spectrum of plausible merger trees for Milky Way-mass dark matter halos. The noisier merger histories gave higher probabilities, as halos with more recent mergers tend to be lumpier, having had a fresh injection of subhalos that haven’t had time to erode away through dynamical friction into the larger central halo.

This was all very sensible sounding, in theory – and only in theory. We don’t live in any random galaxy. We live in the Milky Way and we know quite a bit about it. One of those things is that it has had a rather quiet merger history by the standards of simulated merger trees. To be sure, there have been some mergers, like the Gaia-Enceladus Sausage. But these are few and far between compared to the expectations of the simulations our theorist was considering. Moreover, we’d know if the merger history weren’t quiet, because mergers tend to heat the stellar disk and puff up its thickness. The spiral disk of the Milky Way is pretty cold dynamically, which places limits on how much mass has merged and when. Indeed, there is a whole subfield dedicated to the study of the thick disk, which seems to have been puffed up in an ancient event ~8 Gyr ago. Since then it has been pretty quiet, though more subtle things can and do happen.

The speaker did not mention any of that. He had a completely theoretical depiction of the probabilities unsullied by observational evidence, and was succeeding in persuading those who wanted to believe that the small probability he came up with was nevertheless reasonable. It was a mixed audience: along with the astroparticle physicists were astronomers like myself, including one of the world’s experts on the thick disk, Rosy Wyse. However, she was too polite to call this out, so after watching the discussion devolve towards accepting the unlikely as probable, I raised my hand to comment: “We know the Milky Way’s merger history isn’t as busy as the models that give a high probability.” This was met with utter incredulity. How could astronomy teach us anything about dark matter? It’s not like the evidence is 100% astronomical in nature, or… wait, it is. But no, no waiting or self-reflection was involved. It rapidly became clear that the majority of people calling themselves astroparticle physicists were ignorant of some relevant astrophysics that any astronomy grad student would be expected to know. It just wasn’t in their training or knowledge base. Consequently, it was strange and shocking&& for them to learn about it this way. So the discussion trended towards denial, at which point Rosy spoke up to say yes, we know this. Duh. (I paraphrase.)

The interpretation of the excess cosmic ray signal as dark matter persisted a few years, but gradually cooler heads prevailed and the pulsar interpretation became widely accepted to be more plausible – as it always had been. Indeed, claiming cosmic rays were from dark matter became almost disreputable, as it richly deserved to be. So much so that when the AMS cosmic ray experiment joined the party late, it had essentially zero impact. I didn’t hear anyone advocating for it, even in whispers at workshops. It seemed more like its Nobel laureate PI just wanted a second Nobel prize, please and thank you, and even the astroparticle community felt embarrassed for him.

This didn’t preclude the same story from playing out repeatedly.

Gamma rays from WIMPs – or not

In the lead-up to a conference on dark matter hosted at Harvard in 2014, there were claims that the Fermi telescope – the same one that is again in the news – had seen a gamma ray line around 126 GeV that was attributed to dark matter. This claim had many red flags. The mass was close to the Higgs particle mass, which was kinda weird. The signal was primarily seen on the limb of the Earth, which is exactly where you’d expect garbage noise to creep in. Most telling, the Fermi team itself was not making this claim. It came from others who were analyzing their data. I am no fan of science by big teams – they tend to become bureaucratic behemoths that create red tape for their participants and often suppress internal dissent** – but one thing they do not do is leave Nobel prizes unanalyzed in their data. The Fermi team’s silence in this matter was deafening.

In short, this first claim of gamma rays from dark matter looked to be very much on the same trajectory as that from cosmic rays. So I was somewhat surprised when I saw the draft program for the Harvard conference, as it had an entire afternoon session devoted to this topic. I wrote the organizers to politely ask if they really thought this would still be a thing by the time the conference happened. One of them was an enthusiastic proponent, so yes.

Narrator: it was not.

By the time the conference happened, the related claims had all collapsed, and all the scientists invited to speak about it talked instead about something completely different, as if it had never been a thing at all.

X-rays from sterile neutrinos – or not

Later, there was the 3.5 keV line. If one squinted really hard at X-ray data, it looked like there might sorta kinda be an unidentified line. This didn’t look particularly convincing, and there are instances when new lines have been discovered in astronomical data rather than laboratory data (e.g., helium was first recognized in the spectrum of the sun, hence the name; also nebulium, which was later recognized to be ionized oxygen), so again, one needed to consider the astrophysical possibilities.

Of course, it was much more exciting to claim it was dark matter. Never mind that it was a silly energy scale, being far too low mass to be cold dark matter (people seem to have forgotten*# the Lee-Weinberg limit, which requires mX > 2 GeV); a few keV is rather less than a few GeV. No matter, we can always come up with an appropriate particle – in this case, sterile neutrinos*$.

If you’ve read this far, you can see how this was going to pan out.

Gamma rays from WIMPs again, maybe

So now we have a renewed claim that the Fermi excess is dark matter. Given the history related above, the reader may appreciate that my first reaction was Really? Are we doing this again?

“Many people have speculated that if we knew exactly why the bowl of petunias had thought that we would know a lot more about the nature of the Universe than we do now.”

― Douglas Adams, The Hitchhiker’s Guide to the Galaxy

This is different from the claim a decade ago. The claimed mass is different, and the signal is real, being part of the mess of emission from the Galactic center. The trick, as so often the case, is disentangling the dark matter signal from the plausible astrophysical sources.

Indeed, the signal is not new, only this particular fit with WIMP dark matter is. There had, of course, been discussion of all this before, but it faded out when it became clear that the Fermi signal was well explained by a population of millisecond pulsars. Astrophysics was again the more obvious interpretation*%. Or perhaps not: I suppose that if you’re part of a community that is convinced dark matter exists, that spends an enormous amount of time and resources looking for a signal from it, and whose basic knowledge of astrophysics extends little beyond “astronomical data show dark matter exists but are messy so there’s always room to play,” then maybe invoking an invisible agent from an unknown dark sector seems just as plausible as an obvious astrophysical source. Hmmm… that would have sounded crazy to me even back when, like them, I was sure that dark matter had to exist and be made of WIMPs, but here we are.

Looking around in the literature, I see there is still a somewhat active series of papers on this subject. They split between no way and maybe.

For example, Manconi et al. (2025) show that the excess signal has the same distribution on the sky as the light from old stars in the Galaxy. The distribution of stars is asymmetrical thanks to the Galactic bar, which we see at an angle of ~30 degrees, so one end is nearer to us than the other, creating a classic “X/peanut” shape seen in other edge-on barred spiral galaxies. So not only is the spectrum of the signal consistent with millisecond pulsars, it has the same distribution on the sky as the stars from which millisecond pulsars are born. So no way is this dark matter: it is clearly an astrophysical signal.

Not to be dissuaded by such a completely devastating combination of observations, Muru et al. (2025) argue that sure, the signal looks like the stars, but the dark matter could have exactly the same distribution as the stars. They cite the Hestia simulations of the Local Group as an example where this happens. Looking at those, they’re not as unrealistic as many simulations, but they appear to suffer the common affliction of too much dark mass near the center. That leaves the dark matter more room to be non-spherical, so it might be lumpy in the same way as the stars, and also provides a higher annihilation signal from the high density of dark matter. So they say maybe, calling the pulsar and dark matter interpretations “equally compelling.”

Returning to Totani’s sort-of claimed detection, he also says

This cross section is larger than the upper limits from dwarf galaxies and the canonical thermal relic value, but considering various uncertainties, especially the density profile of the MW halo, the dark matter interpretation of the 20 GeV “Fermi halo” remains feasible.

Totani (2025)

OK, so there’s a lot to break down in this one sentence.

The canonical thermal relic value is kinda central to the whole WIMP paradigm, so needing a value higher than that is a red flag reminiscent of the need for a boost factor for the cosmic ray signal. There aren’t really enough WIMPs there to do the job unless we juice their effectiveness at making gamma rays. The juice factor is over an order of magnitude here: Steigman et al. (2012) give 2.2 × 10⁻²⁶ cm³ s⁻¹ for what the thermal cross-section should be vs. the (5–8) × 10⁻²⁵ cm³ s⁻¹ suggested by Totani (2025).
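The arithmetic behind that juice factor is simple enough to check directly, using the numbers quoted above:

```python
# Ratio of the cross section needed to fit the Galactic center excess
# (Totani 2025) to the canonical thermal relic value (Steigman et al. 2012),
# using the numbers quoted in the text.
sigma_v_thermal = 2.2e-26              # cm^3 s^-1
for sigma_v_fit in (5e-25, 8e-25):     # cm^3 s^-1
    print(f"{sigma_v_fit:.0e} / {sigma_v_thermal:.1e} = "
          f"{sigma_v_fit / sigma_v_thermal:.0f}x thermal")
```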

It is also worth noting that one point of Steigman’s paper is that as a well-posed hypothesis, the WIMP cross section can be calculated; it isn’t a free parameter to play with, so needing the cross-section to be larger than the upper limits from dwarf galaxies is another red flag. If this is indeed a dark matter signal from the Galactic center, then the subhalos in which dwarf satellites reside should also be visible, as in the simulated image from via Lactea above. They are not, despite having fewer messy astrophysical signals to compete with.

So “remains feasible” is doing a lot of work here. That’s the scientific way of saying “almost certainly wrong, but maybe? Because I’d really like for it to work out that way.”

The dark matter distribution in the Milky Way

One of the critical things here is the density of dark matter near the Galactic center, as the signal scales as the square of the density. Totani (2025) simply adopts the via Lactea simulation to represent the dark matter halo of the Galaxy in his calculations. This is a reasonable choice from a purely theoretical perspective, but it is not a conservative choice for the problem at hand.

What do we know empirically? The via Lactea simulation was dark matter only. There is no stellar disk, just a dark matter halo appropriate to the Milky Way. So let’s add that halo to a baryonic mass model of the Galaxy:

The rotation curve of the via Lactea dark matter halo (red curve) combined with the Milky Way baryon distribution (light blue line). The total rotation (dark blue line) overshoots the data.

The important part for the Galactic center signal is the region at small radius – the first kpc or two. Like most simulations, via Lactea has a cuspy central region of high dark matter density that is inconsistent with data. This overshoots the equivalent circular velocity curve from observed stellar motions. I could fix the fit above by reducing the stellar mass, but that’s not really an option in the Milky Way – we need a maximal stellar disk to explain the microlensing rate towards the center of the Galaxy. The “various uncertainties, especially the density profile of the MW halo” statement elides this inconvenient fact. Astronomical uncertainties are ever-present, but do not favor a dark matter signal here.

We can subtract the baryonic mass model from the rotation curve data (in quadrature: Vdm² = Vobs² − Vbar²) to infer what the dark matter distribution needs to be. This is done in the plot below, where it is compared to the via Lactea halo:

The empirical dark matter halo density profile of the Milky Way (blue line) compared to the via Lactea simulation (red line).

The empirical dark matter density profile of the Milky Way does not continue to rise inwards as steeply as the simulation predicts. It shows the same proclivity for a shallower core as pretty much every other galaxy in the sky. This reduced density of dark matter in the central couple of kpc means the signal from WIMP annihilation should be much lower than calculated from the simulated distribution. Remember – the WIMP annihilation signal scales as the square of the dark matter density, so the turn-down seen at small radii in the log-log plot above is brutal. There isn’t enough dark matter there to do what it is claimed to be doing.
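To put a rough number on how punishing that squared dependence is, here is a trivial sketch: if the true inner density is a factor f below the simulated cusp, the annihilation signal from that region drops by f². The factors below are arbitrary examples for illustration, not measurements from the plots above.

```python
# Toy illustration of the rho^2 penalty: a modest deficit in the inner dark
# matter density translates into a much larger deficit in annihilation signal.
for f in (2, 3, 5, 10):
    print(f"inner density lower by {f}x  ->  annihilation signal lower by {f**2}x")
```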

Cry wolf

There have now been so many claims to detect dark matter that have come and gone that it is getting to be like the fable of the boy who cried wolf. A long series of unpersuasive claims does not inspire confidence that the next will be correct. Indeed, it has the opposite effect: it is going to be really hard to take future claims seriously.

It’s almost as if this invisible dark matter stuff doesn’t exist.


Note added: Jeff Grube points out in the comments that Wang & Duan (2025) have a recent paper showing that the dark matter signal discussed here also predicts an antiproton signal that is already excluded by AMS data. While I find this unsurprising, it is an excellent check. Indeed, it would have caused me to think again had the antiproton signal been there: independent corroboration from a separate experiment is how science is supposed to work.


#It has become a pattern for advocates of dark matter to write a speculative paper for the journals that is fairly restrained in its claims, then hype it as an actual detection to the press. It’s like “Even I think this is probably wrong, but let’s make the claim on the off chance it pans out.”

$Ironically, a detection from a particle collider would be a non-detection. The signature of dark matter produced in a collision would be an imbalance between the mass-energy that goes into the collision and that measured in detected particles coming out of it. The mass-energy converted into WIMPs would escape the detector undetected. This is analogous to how neutrinos were first identified, though Fermi was reluctant to make up an invisible, potentially undetectable particle – a conservative value system that modern particle physicists have abandoned. The 13,000 GeV collision energy of the LHC is more than adequate to make ~100 GeV WIMPs, so the failure of this detection mode is telling.

^A less obvious possibility is spontaneous decay. This would happen if WIMPs are unstable and decay with a finite half-life. The shorter the half-life, the more decays, and the stronger the resulting signal. This implies some fine-tuning in the half-life – if it is much longer than a Hubble time, then it happens so seldom it is irrelevant; if it is shorter than a Hubble time, then dark matter halos evaporate and stable galaxies don’t exist.

&Astroparticle physics, also known as particle astrophysics, is a relatively new field. It is also an oxymoron, being a branch of particle physics with only aspirational delusions of relevance to astrophysics. I say that to be rude to people who are rude to astronomers, but it is also true. Astrophysics is the physics of objects in the sky, and as such, requires all of physics. Physics is a broad field, so some aspects are more relevant than others. When I teach a survey course, it touches on gravity, electromagnetism, atomic and molecular quantum mechanics, nuclear physics, and with the discovery of exoplanets, increasingly on geophysics. Particle physics doesn’t come up. It’s just not relevant, except where it overlaps with nuclear physics. (As poorly as particle physicists think of astronomers, they seem to think even less of nuclear physicists, whom they consider to be failed particle physicists (if only they were smart enough!) and nuclear physicists hate them in return.) This new field of astroparticle physics seems to be all about dark matter as driven by early universe cosmology, with contempt for everything that happens in the 13 billion years following the production of the relic radiation seen as the microwave background. Anything later is dismissed as mere “gastrophysics” that is too complicated to understand so cannot possibly inform fundamental physics. I guess that’s true if one chooses to remain ignorant of it.

*Fishy results can also indicate something fishy with the data. I had a conversation with an instrument builder at the time who pointed out that PAMELA had chosen to fly without a particular discriminator in order to save weight; he suggested that its absence could explain the apparent upturn in positrons.

##There is a relatively nearby pulsar that fits the bill. It has a name: Geminga. This illustrates the human tendency to see what we’re looking for. The astroparticle community was looking for dark matter, so that’s what many of them saw in the excess cosmic ray signal. High energy astrophysicists work on neutron stars, so the obvious interpretation to them was a pulsar. I recall one being particularly scornful of the dark matter interpretation when there was an obvious astrophysical source. I also remember the astroparticle people being quick to dismiss the pulsar interpretation because it seemed unlikely to them for one to be so close; really, they hadn’t thought about it before: that pulsars could do this was news to them, and many preferred to believe the dark matter interpretation.

$$All the people barking were men.

&&This experience opened my eyes to the existence of an entire community of scientists who were working on dark matter in somewhat gratuitous ignorance of the astronomical evidence for dark matter. To them, the existence of the stuff had already been demonstrated; the interesting thing now was to find the responsible particle. But they were clearly missing many important ingredients – another example is disk stability, a foundational reason to invoke dark matter that seems to routinely come as a surprise to particle physicists. This disconnect is part of what motivated me to develop an entire semester course on dark matter, which I’ve taught every other year since 2013 and will teach again this coming semester. The first time I taught it, I worried that there wasn’t enough material for a whole semester. Now a semester isn’t enough time.

**I had a college friend (sadly now deceased) who was part of the team that discovered the Higgs. That was big business, to the extent that there were two experiments – one to claim the detection, and another on the same beam to do the confirmation. The first experiment exceeded the arbitrary 5σ threshold to claim a 5.2σ detection, but the second only reached 4.9σ. So, in all appropriateness, he asked in a meeting if they could/should really announce a detection. A Nobel prize was on the line, so the answer was straightforward: Do you want a detection or not? (His words.)

*#Rather than forget, some choose to fiddle ways around the Lee-Weinberg limit. This has led to the sub-genre of “light dark matter” which means lightweight, not luminous. I’d say this was the worst name ever, but the same people talk about dark photons with a straight face, so irony continues to bleed out.

*$Ironically, a sterile neutrino has also been invoked to address problems in MOND.

*%I was amused once to see one of the more rabid advocates of dark matter signals of this type give an entire talk hyping the various possibilities only to mention pulsars at the end with a sigh, admitting that the Fermi signal looked exactly like that.

Non-equilibrium dynamics in galaxies that appear to lack dark matter: ultradiffuse galaxies

Previously, we discussed non-equilibrium dynamics in tidal dwarf galaxies. These are the result of interactions between giant galaxies that are manifestly a departure from equilibrium, a circumstance that makes TDGs potentially a decisive test to distinguish between dark matter and MOND, and simultaneously precludes confident application of that test. There are other galaxies for which I suspect non-equilibrium dynamics may play a role, among them some (not all) of the so-called ultradiffuse galaxies (UDGs).

UDGs

The term UDG has been adopted for galaxies below a certain surface brightness threshold with a size (half-light radius) in excess of 1.5 kpc (van Dokkum et al. 2015). I find the stipulation about the size to be redundant, as surface brightness* is already a measure of diffuseness. But OK, whatever, these things are really spread out. That means they should be good tests of MOND like low surface brightness galaxies before them: their low stellar surface densities mean** that they should be in the regime of low acceleration and evince large mass discrepancies when isolated. It also makes them susceptible to the external field effect (EFE) in MOND when they are not isolated, and perhaps also to tidal disruption.
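To illustrate why low surface density puts these systems in the low acceleration regime, here is a rough sketch. It uses the order-of-magnitude estimate that the characteristic acceleration of a self-gravitating disk is g ~ πGΣ; the surface densities are the values quoted in the figure caption below plus an assumed UDG-like value for illustration.

```python
import math

# Rough estimate of the characteristic internal acceleration g ~ pi*G*Sigma
# for a given stellar surface density, compared to Milgrom's a0.  The UDG
# value is an assumption for illustration; the other two are quoted below.

G    = 6.674e-11                  # m^3 kg^-1 s^-2
A0   = 1.2e-10                    # m s^-2
MSUN = 1.989e30                   # kg
PC   = 3.086e16                   # m

def g_over_a0(sigma_msun_pc2):
    sigma = sigma_msun_pc2 * MSUN / PC**2      # convert to kg/m^2
    return math.pi * G * sigma / A0

for name, sig in [("UDG-like", 3), ("solar neighborhood", 40), ("HSB galaxy center", 1000)]:
    print(f"{name:>20s}: Sigma = {sig:5.0f} Msun/pc^2 -> g/a0 ~ {g_over_a0(sig):.2f}")
```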

To give some context, here is a plot of the size-mass relation for Local Group dwarf spheroidals. Typically they have masses comparable to globular clusters, but much larger sizes – a few hundred parsecs instead of just a few. As with more massive galaxies, these pressure supported dwarfs are all over the place – at a given mass, some are large while others are relatively compact. All but the one most massive galaxy in this plot are in the MOND regime. For convenience, I’ll refer to the black points labelled with names as UDGs+.

The size (radius encompassing half of the total light) and stellar mass of Local Group dwarf spheroidals (green points selected by McGaugh et al. 2021 to be relatively safe from external perturbation) along with two more Local Group dwarfs that are subject to the EFE (Crater 2 and Antlia 2) and the two UDGs NGC 1052-DF2 and DF4. Dotted lines show loci of constant surface density. For reference, the solar neighborhood has ~40 M☉ pc⁻²; the centers of high surface brightness galaxies frequently exceed 1,000 M☉ pc⁻².

The UDGs are big and diffuse. This makes them susceptible to the EFE and tidal effects. The lower the density of a system, the easier it is for external systems to mess with it. The ultimate example is when something gets so close to a dominant central mass that it gets tidally disrupted. That can happen conventionally; the stronger effective force of MOND increases tidal effects. Indeed, there is only a fairly narrow regime between the isolated case and tidally-induced disequilibrium where the EFE modifies the internal dynamics in a quasi-static way.

The trouble is the s-word: static. In order to test theories, we assume that the dynamical systems we observe are in equilibrium. Though often a good assumption, it doesn’t always hold. If we forget we made the assumption, we might think we’ve falsified a theory when all we’ve done is discover a system that is out of equilibrium. The universe is a very dynamic place – the whole thing is expanding, after all – so we need to be wary of static thinking.

Equilibrium MOND formulae

That said, let’s indulge in some static thinking. An isolated, pressure supported galaxy in the MOND regime will have an equilibrium velocity dispersion
σiso⁴ = (4/81) G M a0
where M is the mass (the stellar mass in the case of a gas-free dwarf spheroidal), G is Newton’s constant, and a0 is Milgrom’s acceleration constant. The number 4/81 is a geometrical factor that assumes we’re observing a spherical system with isotropic orbits, neither of which is guaranteed even in the equilibrium case, and deviations from this idealized situation are noticeable. Still, this is as simple as it gets: if you know the mass, you can predict the characteristic speed at which stars move. Mass is all that matters: we don’t care about the radius as we must with Newton (v² = GM/r); the only other quantities are constants of nature.
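
To make the scaling concrete, here is a minimal numerical sketch in Python. The constants are standard values; the example mass is an illustrative placeholder, not a fit to any particular galaxy.

# Minimal sketch of the isolated MOND dispersion: sigma^4 = (4/81) G M a0.
# Constants in SI units; the example mass is illustrative only.
G    = 6.674e-11    # Newton's constant [m^3 kg^-1 s^-2]
a0   = 1.2e-10      # Milgrom's acceleration constant [m s^-2]
MSUN = 1.989e30     # solar mass [kg]

def sigma_iso(mass_msun):
    """Equilibrium velocity dispersion (km/s) of an isolated, spherical,
    isotropic system deep in the MOND regime."""
    return ((4.0 / 81.0) * G * mass_msun * MSUN * a0) ** 0.25 / 1e3

print(f"{sigma_iso(1e6):.1f} km/s")   # a ~1e6 Msun dwarf spheroidal: ~5 km/s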

But what do we mean by isolated? In MOND, it is that the internal acceleration of the system, gin, exceeds that from external sources, gex: gin > gex. For a pressure supported dwarf, gin ≈ 3σ²/r (so here the size of the dwarf does matter, as does the location of a star within it), while the external field from a giant host galaxy would be gex = Vf²/D where Vf is the flat rotation speed stipulated by the baryonic mass of the host and D is the distance from the host to the dwarf satellite. The distance is not a static quantity. As a dwarf orbits its host, D will vary by an amount that depends on the eccentricity of the orbit, and the external field will vary with it, so it is possible to have an orbit in which a dwarf satellite dips in and out of the EFE regime. Many Local Group dwarfs straddle the line gin ≈ gex, and it takes time to equilibrate, so static thinking can go awry.
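
To see how that bookkeeping works in practice, here is a small sketch comparing gin ≈ 3σ²/r to gex = Vf²/D. All the input numbers are invented placeholders chosen only to illustrate the comparison.

# Sketch of the MOND isolation criterion: compare the internal acceleration
# g_in ~ 3 sigma^2 / r to the external field g_ex = Vf^2 / D from the host.
# All numbers below are illustrative placeholders, not measurements.
KPC = 3.086e19        # kiloparsec in meters

sigma = 5e3           # internal velocity dispersion [m/s]
r     = 0.5 * KPC     # characteristic radius of the dwarf
Vf    = 200e3         # flat rotation speed of the host [m/s]
D     = 100 * KPC     # distance from host to dwarf

g_in = 3 * sigma**2 / r
g_ex = Vf**2 / D
print(f"g_in = {g_in:.1e} m/s^2, g_ex = {g_ex:.1e} m/s^2")
print("isolated" if g_in > g_ex else "EFE regime")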

It is possible to define a sample of Local Group dwarfs that have sufficiently high internal accelerations (but also in the MOND regime with gex < gin ≪ a0) that we can pretend they are isolated, and the above equation applies. Such dwarfs should& fall on the BTFR, which they do:

The baryonic Tully-Fisher relation (BTFR) including pressure supported dwarfs (green points) with their measured velocity dispersions matched to the flat rotation speeds of rotationally supported galaxies (blue points) via the prescription of McGaugh et al. (2021). The large blue points are rotators in the Local Group (with Andromeda and the Milky Way up near the top); smaller points are spirals with direct distance measurements (Schombert et al. 2020). The Local Group dwarfs assessed to be safe from external perturbation are on the BTFR (for Vf = 2σ); Crater 2 and the UDGs near NGC 1052 are not.

In contrast, three of the four UDGs considered here do not fall on the BTFR. Should they?

Conventionally, in terms of dark matter, probably they should. There is no reason for them to deviate from whatever story we make up to explain the BTFR for everything else. That they do means we have to make up a separate story for them. I don’t want to go deeply into this here since the cold dark matter model doesn’t really explain the observed BTFR in the first place. But even accepting that it does so after invoking feedback (or whatever), does it tolerate deviants? In a broad sense, yes: since it doesn’t require the particular form of the BTFR that’s observed, it is no problem to deviate from it. In a more serious sense, no: if one comes up with a model that explains the small scatter of the BTFR, it is hard to make that same model defy said small scatter. I know, I’ve tried. Lots. One winds up with some form of special pleading in pretty much any flavor of dark matter theory on top of whatever special pleading we invoked to explain the BTFR in the first place. This is bad, but perhaps not as bad as it seems once one realizes that not everything has to be in equilibrium all the time.

In MOND, the BTFR is absolute – for isolated systems in equilibrium. In the EFE regime, galaxies can and should deviate from it even if they are in equilibrium. This always goes in the sense of having a lower characteristic velocity for a given mass, so below the line in the plot. To get above the line would require being out of equilibrium through some process that inflates velocities (if systematic errors are not to blame, which also sometimes happens).

The velocity dispersion in the EFE regime (gin < gex ≪ a0) is slightly more complicated than this isolated case:
σefe² ≈ Geff M/r
This is just like Newton except the effective value of the gravitational constant is modified. It gets a boost^ by how far the system is in the MOND regime: Geff ≈ G(a0/gex). An easy way to tell which regime an object is in is to calculate both velocity dispersions σiso and σefe: the smaller one is the one that applies#. An upshot of this is that systems in the EFE regime should deviate from the BTFR to the low velocity side. The amplitude of the deviation depends on the system and the EFE: both the size and mass matter, as does gex. Indeed, if an object is on an eccentric orbit, then the velocity dispersion can vary with the EFE as the distance of the satellite from its host varies, so over time the object would trace out some variable path in the BTFR plane.
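
A short sketch of that rule, with the order-unity geometric factors of the full calculation dropped for simplicity and purely illustrative inputs:

# "Take the smaller": compute the isolated and EFE-regime estimates and keep
# whichever is lower. Geometric factors of order unity are omitted here; the
# inputs are toy values, not data.
G, a0, MSUN, KPC = 6.674e-11, 1.2e-10, 1.989e30, 3.086e19

def sigma_iso(M):                     # isolated: sigma^4 = (4/81) G M a0
    return ((4 / 81) * G * M * a0) ** 0.25

def sigma_efe(M, r, g_ex):            # EFE: quasi-Newtonian with Geff ~ G a0/g_ex
    return (G * (a0 / g_ex) * M / r) ** 0.5

M, r, g_ex = 1e6 * MSUN, 1.0 * KPC, 5e-11     # toy dwarf in a toy external field
s_iso, s_efe = sigma_iso(M), sigma_efe(M, r, g_ex)
print(f"sigma_iso = {s_iso/1e3:.1f} km/s, sigma_efe = {s_efe/1e3:.1f} km/s")
print(f"the applicable estimate is the smaller: {min(s_iso, s_efe)/1e3:.1f} km/s")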

Three of the four UDGs fall off the BTFR, so that sounds mostly right, qualitatively. Is it? Yes, for Crater 2, but not really for the others. Even for Crater 2 it is only a partial answer, as non-equilibrium effects may play a role. This gets involved for Crater 2, then more so for the others, so let’s start with Crater 2.

Crater 2 – the velocity dispersion

The velocity dispersion of Crater 2 was correctly predicted a priori by the formula for σefe above. It is a tiny number, 2 km/s, and that’s what was subsequently observed. Crater 2 is very low mass, ~3 × 10⁵ M☉, which is barely a globular cluster, but it is even more spread out than the typical dwarf spheroidal, having an effective surface density of only ~0.05 M☉ pc⁻². If it were isolated, MOND predicts that it would have a higher velocity dispersion – all of 4 km/s. That’s what it would take to put it on the BTFR above. The seemingly modest difference between 2 and 4 km/s makes for a clear offset. But despite its substantial current distance from the Milky Way (~120 kpc), Crater 2 is so low surface density that it is still subject to the external field effect, which lowers its equilibrium velocity dispersion. Unlike isolated galaxies, it should be offset from the BTFR according to MOND.
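
For anyone who wants to check the arithmetic: plugging a Crater 2-like mass into the isolated formula does indeed return about 4 km/s. (The published a priori prediction used the measured luminosity and an adopted stellar mass-to-light ratio; this is only a back-of-the-envelope check of the scale.)

# Back-of-the-envelope check of the isolated prediction for a ~3e5 Msun system.
G, a0, MSUN = 6.674e-11, 1.2e-10, 1.989e30
sigma = ((4 / 81) * G * 3e5 * MSUN * a0) ** 0.25
print(f"{sigma/1e3:.1f} km/s")   # ~3.9 km/s, i.e. about 4 km/s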

LCDM struggles to explain the low mass end of the BTFR because it predicts a halo mass-circular speed relation Mhalo ∝ Vhalo³ that differs from the observed Mb ∝ Vf⁴. A couple of decades ago, it looked like massive galaxies might be consistent with the shallower power law, but that anticipates higher velocities for small systems. The low velocity dispersion of Crater 2 is thus doubly weird in LCDM. Its internal velocities are too small not just once – the BTFR is already lower than was expected – but twice, being below even that.

An object with a large radial extent like Crater 2 probes far out into its notional dark matter halo, making the nominal prediction$ of LCDM around ~17 km/s, albeit with a huge expected scatter. Even if we can explain the low mass end of the BTFR and its unnaturally low scatter in LCDM, we now have to explain this exception to it – an exception that is natural in MOND, but is on the wrong side of the probability distribution for LCDM. That’s one of the troubles with tuning LCDM to mimic MOND: if you succeed in explaining the first thing, you still fail to anticipate the other. There is no EFE% in LCDM, no reason to anticipate that σefe applies rather than σiso, and no reason to expect via feedback that this distinction has anything to do with the dynamical accelerations gin and gex.

But wait – this is a post about non-equilibrium dynamics. That can happen in LCDM too. Indeed, one expects that satellite galaxies suffer tidal effects in the field of their giant host. The primary effect is that the dark matter subhalos in which dwarf satellites reside are stripped from the outside in. Their dark matter becomes part of the large halo of the host. But the stars are well-cocooned in the inner cusp of the NFW halo which is more robust than the outskirts of the subhalo, so the observable velocity dispersion barely evolves until most of the dark mass has been stripped away. Eventually, the stars too get stripped, forming tidal streams. Most of the damage occurs during pericenter passage when satellites are closest to their host. What’s left is no longer in equilibrium, with the details depending on the initial conditions of the dwarf on infall, the orbit, the number of pericenter passages, etc., etc.

What does not come out of this process is Crater 2 – at least not naturally. It has stars very far out – these should get stripped outright if the subhalo has been eviscerated to the point where its velocity dispersion is only 2 km/s. This tidal limitation has been noted by Errani et al.: “the large size of kinematically cold ‘feeble giant’ satellites like Crater 2 or Antlia 2 cannot be explained as due to tidal effects alone in the Lambda Cold Dark Matter scenario.” To save LCDM, we need something extra, some additional special pleading on top of non-equilibrium tidal effects, which is why I previously referred to Crater 2 as the Bullet Cluster of LCDM: an observation so problematic that it amounts to a falsification.

Crater 2 – the orbit

We held a workshop on dwarf galaxies on CWRU’s campus in 2017 where issues pertaining to both dark matter and MOND were discussed. The case of Crater 2 came up, and it was included in the list of further tests for both theories (see above links). Basically the expectation in LCDM is that most subhalo orbits are radial (highly eccentric), so that is likely to be the case for Crater 2. In contrast, the ultradiffuse blob that is Crater 2 would not survive a close passage by the Milky Way given the strong tidal force exerted by MOND, so the expectation was for a more tangential (quasi-circular) orbit that keeps it at a safe distance.

Subsequently, it became possible to constrain orbits with Gaia data. The exact orbit depends on the gravitational potential of the Milky Way, which isn’t perfectly known. However, several plausible choices of the global potential give an eccentricity around 0.6. That’s not exactly radial, but it’s pretty far from circular, placing the pericenter around 30 kpc. That’s much closer than its current distance, and well into the regime where it should be tidally disrupted in MOND. No way it survives such a close passage!

So which is it? MOND predicted the correct velocity dispersion, which LCDM struggles to explain. Yet the orbit is reasonable in LCDM, but incompatible with MOND.

Simulations of dwarf satellites

It occurs to me that we might be falling victim to static thinking somewhere. We talked about the impact of tides on dark matter halos a bit above. What should we expect in MOND?

The first numerical simulations of dwarf galaxies orbiting a giant host were conducted by Brada & Milgrom (2000). Their work is specific to the Aquadratic Lagrangian (AQUAL) theory proposed by Bekenstein & Milgrom (1984). This was the first demonstration that it was possible to write a version of MOND that conserved momentum and energy. Since then, a number of different approaches have been demonstrated. These can be subtly different, so it is challenging to know which (if any) is correct. Sorting that out is well beyond the scope of this post, so let’s stick to what we can learn from Brada & Milgrom.

Brada & Milgrom followed the evolution of low surface density dwarfs of a range of masses as they orbited a giant host galaxy. One thing they found was that the behavior of the numerical model could deviate from the analytic expectation of quasi-equilibrium enshrined in the equations above. For an eccentric orbit, the external field varies with distance from the host. If there is enough time to respond to this, the change can be adiabatic (reversible), and the static approximation may be close enough. However, as the external field varies more rapidly and/or the dwarf is more fragile, the numerical solution departs from the simple analytic approximation. For example:

Fig. 2 of Brada & Milgrom (2000): showing the numerically calculated (dotted line) variation of radius (left) and characteristic velocity (right) for a dwarf on a mildly eccentric orbit (peri- and apocenter of roughly 60 and 90 kpc, respectively, for a Milky Way-like host). Also shown is the variation in the EFE as the dwarf’s distance from the host varies (solid line). Dwarfs go through a breathing mode of increasing/decreasing size and decreasing/increasing velocity dispersion in phase with the orbit. If this process is adiabatic, it tracks the solid line and the static EFE approximation holds. This is not always the case in the simulation, so applying our usual assumption of dynamical equilibrium will result in an error stipulated by the difference between the dotted and solid lines. The amplitude of this error depends on the size, mass, and orbital history of each and every dwarf satellite.

As long as the behavior is adiabatic, the dwarf can be stable indefinitely even as it goes through periodic expansion and contraction in phase with the orbit. Departure from adiabaticity means that every passage will be different. Some damage will be done on the first passage, more on the second, and so on. As a consequence, reality will depart from our simple analytic expectations.

I was aware of this when I made the prediction for the velocity dispersion of Crater 2, and hedged appropriately. Indeed, I worried that Crater 2 should already be out of equilibrium. Nevertheless, I took solace in two things: first, the orbital timescale is long, over a Gyr, so departures from the equilibrium prediction might not have had time to make a dramatic difference. Second, this expectation is consistent with the slow evolution of the characteristic velocity for the most Crater 2-like, m=1 model of Brada & Milgrom (bottom track in the right panel below):

Fig. 4 of Brada & Milgrom (2000): The variation of the size and characteristic velocity of dwarf models of different mass. The more massive models approximate the adiabatic limit, which gradually breaks down for the lowest mass models. In this example, the m = 1 and 2 models explode, with the scale size growing gradually without recovering.

What about the size? That is not constant except for the most massive (m=16) model. The m=3 and 4 models recover, albeit not adiabatically. The m=4 model almost returns to its original size, but the m=3 model has puffed up after one orbit. The m=1 and 2 models explode.

One can see this by eye. The continuous growth in radii of the lower mass models is obvious. If one looks closely, one can also see the expansion then contraction of the heavier models.

Fig. 5 of Brada & Milgrom (2000): AQUAL numerical simulations of dwarf satellites orbiting a more massive host galaxy. The parameter m describes the mass and effective surface density of the satellite; all the satellites are in the MOND regime and subject to the external field of the host galaxy, which exceeds their internal accelerations. In dimensionless simulation units, m = 1 corresponds to 5 × 10⁻⁵ of the host mass, which for a satellite of the Milky Way is roughly a stellar mass of 3 × 10⁶ M☉. For real dwarf satellite galaxies, the scale size is also relevant, but the sequence of m above suffices to illustrate the increasingly severe effects of the external field as m decreases.

The current size of Crater 2 is unusual. It is very extended for its mass. If the current version of Crater 2 has a close passage with the Milky Way, it won’t survive. But we know it already had a close passage, so it should be expanding now as a result. (I did discuss the potential for non-equilibrium effects.) Knowing now that there was a pericenter passage in the (not exactly recent) past, we need to imagine running back the clock on the simulations. It would have been smaller in the past, so maybe it started with a normal size, and now appears so large because of its pericenter passage. The dynamics predict something like that; it is static thinking to assume it was always thus.

The dotted line shows a possible evolutionary track for Crater 2 as it expands after pericenter passage. Its initial condition would have been amongst the other dwarf spheroidals. It could also have lost some mass in the process, so any of the green low-mass dwarfs might be similar to the progenitor.

This is a good example of a phenomenon I’ve encountered repeatedly with MOND. It predicts something right, but seems to get something else wrong. If we’re already sure it is wrong, we stop there and never think further. But when one bothers to follow through on what the theory really predicts, more often than not the apparently problematic observation is in fact what we should have expected in the first place.

DF2 and DF4

DF2 and DF4 are two UDGs in the vicinity of the giant galaxy NGC 1052. They have very similar properties, and are practically identical in terms of having the same size and mass within the errors. They are similar to Crater 2 in that they are larger than other galaxies of the same mass.

When it was first discovered, NGC 1052-DF2 was portrayed as a falsification of MOND. On closer examination, it turned out that, had I known about it in advance, I could have used MOND to correctly predict its velocity dispersion, just like the dwarfs of Andromeda. This seemed like yet another case where the initial interpretation contrary to MOND melted away to actually be a confirmation. At this point, I’ve seen literally hundreds^^ of cases like that. Indeed, this particular incident made me realize that there would always be new cases like that, so I decided to stop spending my time addressing every single case.

Since then, DF2 has been the target of many intensive observing campaigns. Apparently it is easier to get lots of telescope time to observe a single object that might have the capacity to falsify MOND than it is to get a more modest amount to study everything else in the universe. That speaks volumes about community priorities and the biases that inform them. At any rate, there is now lots more data on this one object. In some sense there is too much – there has been an active debate in the literature over the best distance determination (which affects the mass) and the most accurate velocity dispersion. Some of these combinations are fine with MOND, but others are not. Let’s consider the worst case scenario.

In the worst case scenario, both DF2 and DF4 are too far from NGC 1052 for its current EFE to have much impact, and they have relatively low velocity dispersions for their luminosity, around 8 km/s, so they fall below the BTFR. Worse for MOND is that this is about what one expects from Newton for the stars alone. Consequently, these galaxies are sometimes referred to as being “dark matter free.” That’s a problem for MOND, which predicts a larger velocity dispersion for systems in equilibrium.

Perhaps we are falling prey to static thinking, and these objects are not in equilibrium. While their proximity to neighboring galaxies and the EFE to which they are presently exposed depends on the distance, which is disputed, it is clear that they live in a rough neighborhood with lots of more massive galaxies that could have bullied them in a close passage at some point in the past. Looking at Fig. 4 of Brada & Milgrom above, I see that galaxies whacked out of equilibrium not only expand in radius, potentially explaining the unusually large sizes of these UDGs, but they also experience a period during which their velocity dispersion is below the equilibrium value. The amplitude of the dip in these simulations is about right to explain the appearance of being dark-matter-free.

It is thus conceivable that DF2 and DF4 (the two are nearly identical in the relevant respects) suffered some sort of interaction that perturbed them into their current state. Their apparent absence of a mass discrepancy and the apparent falsification of MOND that follows therefrom might simply be a chimera of static thinking.

Make no mistake: this is a form of special pleading. The period of depressed velocity dispersion does not last indefinitely, so we have to catch them at a somewhat special time. How special depends on the nature of the interaction and its timescale. This can be long in intergalactic space (Gyrs), so it may not be crazy special, but we don’t really know how special. To say more, we would have to do detailed simulations to map out the large parameter space of possibilities for these objects.

I’d be embarrassed for MOND to have to make this kind of special pleading if we didn’t also have to do it for LCDM. A dwarf galaxy being dark matter free in LCDM shouldn’t happen. Galaxies form in dark matter halos; it is very hard to get rid of the dark matter while keeping the galaxy. The most obvious way to do it, in rare cases, is through tidal disruption, though one can come up with other possibilities. These amount to the same sort of special pleading we’re contemplating on behalf of MOND.

Recently, Tang et al. (2024) argue that DF2 and DF4 are “part of a large linear substructure of dwarf galaxies that could have been formed from a high-velocity head-on encounter of two gas-rich galaxies” which might have stripped the dark matter while leaving the galactic material. That sounds… unlikely. Whether it is more or less unlikely than what it would take to preserve MOND is hard to judge. It appears that we have to indulge in some sort of special pleading no matter what: it simply isn’t natural for galaxies to lack dark matter in a universe made of dark matter, just as it is unnatural for low acceleration systems to not manifest a mass discrepancy in MOND. There is no world model in which these objects make sense.

Tang et al. (2024) also consider a number of other possibilities, which they conveniently tabulate:

Table 3 from Tang et al. (2024).

There are many variations on awkward hypotheses for how these particular UDGs came to be in LCDM. They’re all forms of special pleading. Even putting on my dark matter hat, most sound like crazy talk to me. (Stellar feedback? Really? Is there anything it cannot do?) It feels like special pleading on top of special pleading; it’s special pleading all the way down. All we have left to debate is which form of special pleading seems less unlikely than the others.

I don’t find this debate particularly engaging. Something weird happened here. What that might be is certainly of interest, but I don’t see how we can hope to extract from it a definitive test of world models.

Antlia 2

The last of the UDGs in the first plot above is Antlia 2, which I now regret including – not because it isn’t interesting, but because this post is getting exhausting. Certainly to write, perhaps to read.

Antlia 2 is on the BTFR, which would ordinarily be unremarkable. In this case it is weird in MOND, as the EFE should put it off the BTFR. The observed velocity dispersion is 6 km/s, but the static EFE formula predicts it should only be 3 km/s. This case should be like Crater 2.

First, I’d like to point out that, as an observer, it is amazing to me that we can seriously discuss the difference between 3 and 6 km/s. These are tiny numbers by the standard of the field. The more strident advocates of cold dark matter used to routinely assume that our rotation curve observations suffered much larger systematic errors than that in order to (often blithely) assert that everything was OK with cuspy halos. So who are you going to believe: our big, beautiful simulations, or those lying data?

I’m not like that, so I do take the difference seriously. My next question, whenever MOND is a bit off like this, is what does LCDM predict?

I’ll wait.

Well, no, I won’t, because I’ve been waiting for thirty years, and the answer, when there is one, keeps changing. The nominal answer, as best I can tell, is ~20 km/s. As with Crater 2, the large scale size of this dwarf means it should sample a large portion of its dark matter halo, so the expected characteristic speed is much higher than 6 km/s. So while the static MOND prediction may be somewhat off here, the static LCDM expectation fares even worse.

This happens a lot. Whenever I come across a case that doesn’t make sense in MOND, it usually doesn’t make sense in dark matter either.

In this case, the failure of the static-case prediction is apparently caused by tidal perturbation. Like Crater 2, Antlia 2 may have a large half-light radius because it is expanding in the way seen in the simulations of Brada & Milgrom. But it appears to be a bit further down that path, with member stars stretched out along the orbital path. They start to trace a small portion of a much deeper gravitational potential, so the apparent velocity dispersion goes up in excess of the static prediction.

Fig. 9 from Ji et al. (2021) showing tidal features in Antlia 2 considering the effects of the Milky Way alone (left panel) and of the Milky Way and the Large Magellanic Cloud together (central panel) along with the position-velocity diagram from individual stars (right panel). The object is clearly not the isotropic, spherical cow presumed by the static equation for the velocity dispersion. Indeed, it is elongated as would be expected from tidal effects, with individual member stars apparently leaking out.

This is essentially what I inferred must be happening in the ultrafaint dwarfs of the Milky Way. There is no way that these tiny objects deep in the potential well of the Milky Way escape tidal perturbation%% in MOND. They may be stripped of their stars and their velocity dispersions may get tidally stirred up. Indeed, Antlia 2 looks very much like the MOND prediction for the formation of tidal streams from such dwarfs made by McGaugh & Wolf (2010). Unlike dark matter models in which stars are first protected, then lost in pulses during pericenter passages, the stronger tides of MOND combined with the absence of a protective dark matter cocoon mean that stars leak out gradually all along the orbit of the dwarf. The rate is faster when the external field is stronger at pericenter passage, but the mass loss is more continuous. This is a good way to make long stellar streams, which are ubiquitous in the stellar halo of the Milky Way.

So… so what?

It appears that aspects of the observations of the UDGs discussed here that seem problematic for MOND may not be as bad for the theory as they at first seem. Indeed, it appears that the noted problems may instead be a consequence of the static assumptions we usually adopt to do the analysis. The universe is a dynamic place, so we know this assumption does not always hold. One has to judge each case individually to assess whether this is reasonable or not.

In the cases of Crater 2 and Antlia 2, yes, the stranger aspects of the observations fit well with non-equilibrium effects. Indeed, the unusually large half-light radii of these low mass dwarfs may well be a result of expansion after tidal perturbation. That this might happen was specifically anticipated for Crater 2, and Antlia 2 fits the bill described by McGaugh & Wolf (2010) as anticipated by the simulations of Brada & Milgrom (2000) even though it was unknown at the time.

In the cases of DF2 and DF4, it is less clear what is going on. I’m not sure which data to believe, and I want to refrain from cherry-picking, so I’ve discussed the worst-case scenario above. But the data don’t make a heck of a lot of sense in any world view; the many hypotheses made in the dark matter context seem just as contrived and unlikely as a tidally-induced, temporary dip in the velocity dispersion that might happen in MOND. I don’t find any of these scenarios to be satisfactory.

This is a long post, and we have only discussed four galaxies. We should bear in mind that the vast majority of galaxies do as predicted by MOND; a few discrepant cases are always to be expected in astronomy. That MOND works at all is a problem for the dark matter paradigm: that it would do so was not anticipated by any flavor of dark matter theory, and there remains no satisfactory explanation of why MOND appears to happen in a universe made of dark matter. These four galaxies are interesting cases, but they may be an example of missing the forest for the trees.


*As it happens, the surface brightness threshold adopted in the definition of UDGs is exactly the same as I suggested for VLSBGs (very low surface brightness galaxies: McGaugh 1996), once the filter conversions have been made. At the time, this was the threshold of our knowledge, and I and other early pioneers of LSB galaxies were struggling to convince the community that such things might exist. Up until that time, the balance of opinion was that they did not, so it is gratifying to see that they do.

**This expectation is specific to MOND; it doesn’t necessarily hold in dark matter where the acceleration in the central regions of diffuse galaxies can be dominated by the cusp of the dark matter halo. These were predicted to exceed what is observed, hence the cusp-core problem.

+Measuring by surface brightness, Crater 2 and Antlia 2 are two orders of magnitude more diffuse than the prototypical ultradiffuse galaxies DF2 and DF4. Crater 2 is not quite large enough to count as a UDG by the adopted size definition, but Antlia 2 is. So does that make it super-ultra diffuse? Would it even be astronomy without terrible nomenclature?

&I didn’t want to use a MOND-specific criterion in McGaugh et al. (2021) because I was making a more general point, so the green points are overly conservative from the perspective of the MOND isolation criterion: there are more dwarfs for which this works. Indeed, we had great success in predicting velocity dispersions in exactly this fashion in McGaugh & Milgrom (2013a, 2013b). And XXVIII was a case not included above that we highlighted as a great test of MOND, being low mass (~4 × 10⁵ M☉) but still qualifying as isolated, and its dispersion came in (6.6 +2.9/-2.1 km/s in one measurement, 4.9 ± 1.6 km/s in another) as predicted a priori (4.3 +0.8/-0.7 km/s). Hopefully the Rubin Observatory will discover many more similar objects that are truly isolated; these will be great additional tests, though one wonders how much more piling-on needs to be done.

^This is an approximation that is reasonable for the small accelerations involved. More generally we have Geff = G/μ(|gex+gin|/a0) where μ is the MOND interpolation function and one takes the vector sum of all relevant accelerations.

#This follows because the boost from MOND is limited by how far into the low acceleration regime an object is in. If the EFE is important, the boost will be less than in the isolated case. As we said in 2013, “the case that reports the lower velocity dispersion is always the formally correct one.” I mention it again here because apparently people are good at scraping equations from papers without reading the associated instructions, so one gets statements like “the theory does not specify precisely when the EFE formula should replace the isolated MOND prediction.” Yes it does. We told you precisely when the EFE formula should replace the isolated formula. It is when it reports the lower velocity dispersion. We also noted this as the reason for not giving σefe in the tables in cases where it didn’t apply, so there were multiple flags. It took half a dozen coauthors to not read that. I’d hate to see how their Ikea furniture turned out.

$As often happens with LCDM, there are many nominal predictions. One common theme is that “Despite spanning four decades in luminosity, dSphs appear to inhabit halos of comparable peak circular velocity.” So nominally, one would expect a faint galaxy like Crater 2 to have a similar velocity dispersion to a much brighter one like Fornax, and the luminosity would have practically no power to predict the velocity dispersion, contrary to what we observe in the BTFR.

%There is the 2-halo term – once you get far enough from the center of a dark matter halo (the 1-halo term), there are other halos out there. These provide additional unseen mass, so can boost the velocity. The EFE in MOND has the opposite effect, and occurs for completely different physical reasons, so they’re not at all the same.

^^For arbitrary reasons of human psychology, the threshold many physicists set for “always happens” is around 100 times. That is, if a phenomenon is repeated 100 times, it is widely presumed to be a general rule. That was the threshold Vera Rubin hit when convincing the community that flat rotation curves were the general rule, not just some peculiar cases. That threshold has also been hit and exceeded by detailed MOND fits to rotation curves, and it seems to be widely accepted that this is the general rule even if many people deny the obvious implications. By now, it is also the case for apparent exceptions to MOND ceasing to be exceptions as the data improve. Unfortunately, people tend to stop listening at what they want to hear (in this case, “falsifies MOND”) and fail to pay attention to further developments.

%%It is conceivable that the ultrafaint dwarfs might elude tidal disruption in dark matter models if they reside in sufficiently dense dark matter halos. This seems unlikely given the obvious tidal effects on much more massive systems like the Sagittarius dwarf and the Magellanic Clouds, but it could in principle happen. Indeed, if one calculates the mass density from the observed velocity dispersion, one infers that they do reside in dense dark matter halos. In order to do this calculation, we are obliged to assume that the objects are in equilibrium. This is, of course, a form of static thinking: the possibility of tidal stirring that enhances the velocity dispersion above the equilibrium value is excluded by assumption. The assumption of equilibrium is so basic that it is easy to unwittingly engage in circular reasoning. I know, as I did exactly that myself to begin with.

The fault in our stars: blame them, not the dark matter!

The fault in our stars: blame them, not the dark matter!

As discussed in recent posts, the appearance of massive galaxies in the early universe was predicted a priori by MOND (Sanders 1998, Sanders 2008, Eappen et al. 2022). This is problematic for LCDM. How problematic? That’s always the rub.

The data follow the evolutionary track of a monolithic model (purple line) rather than the track of the largest progenitor predicted by hierarchical LCDM (dotted lines leading to different final masses).

The problem that JWST observations pose for LCDM is that there is a population of galaxies in the high redshift universe that appear to evolve as giant monoliths rather than assembling hierarchically. Put that way, it is a fatal flaw: hierarchical assembly of mass is fundamental to the paradigm. But we don’t observe mass, we observe light. So the obvious “fix” is to adjust the mapping of observed light to predicted dark halo mass in order to match the observations. How plausible is this?

Merger trees from the Illustris-TNG50 simulation showing the hierarchical assembly of L* galaxies. The dotted lines in the preceding plot show the stellar mass growth of the largest progenitor, which is on the left of each merger tree. All progenitors were predicted to be tiny at z > 3, well short of what we observe.

Before trying to wriggle out of the basic result, note that doing so is not plausible from the outset. We need to make the curve of growth of the largest progenitors “look like” the monolithic model. They shouldn’t, by construction, so everything that follows is a fudge to try to avoid the obvious conclusion. But this sort of fudging has been done so many times before in so many ways (the “Frenk Principle” was coined nearly thirty years ago) that many scientists in the field have known nothing else. They seem to think that this is how science is supposed to work. This in turn feeds a convenient attitude that evades the duty to acknowledge that a theory is in trouble when it persistently has to be adjusted to make itself look like a competitor.

That noted, let’s wriggle!

Observational dodges

The first dodge is denial: somehow the JWST data are wrong or misleading. Early on, there were plausible concerns about the validity of some (some) photometric redshifts. There are enough spectroscopic redshifts now that this point is moot.

A related concern is that we “got lucky” with where we pointed JWST to start with, and the results so far are not typical of the universe at large. This is not quite as crazy as it sounds: the field of view of JWST is tiny, so there is no guarantee that the first snapshot will be representative. Moreover, a number of the first pointings intentionally targeted rich fields containing massive clusters, i.e., regions known to be atypical. However, as observations have accumulated, I have seen no indications of a reversal of our first impression, but rather lots of corroboration. So this hedge also now borders on reality denial.

A third observational concern that we worried a lot about in Franck & McGaugh (2017) is contamination by active galactic nuclei (AGN). Luminosity produced by accretion onto supermassive black holes (e.g., quasars) was more common in the early universe. Perhaps some of the light we are attributing to stars is actually produced by AGN. That’s a real concern, but long story short, AGN contamination isn’t enough to explain everything else away. Indeed, the AGN themselves are a problem in their own right: how do we make the supermassive black holes that power AGN so rapidly that they appear already in the early universe? Like the galaxies they inhabit, the black holes that power AGN should take a long time to assemble in the absence of the heavy seeds naturally provided by MOND but not dark matter.

An evergreen concern in astronomy is extinction by dust. Dust could play a role (Ferrara et al. 2023), but this would be a weird effect for it to have. Dust is made by stars, so we naively expect it to build up along with them. In order to explain high redshift JWST data with dust we have to do the opposite: make a lot of dust very early without a lot of stars, then eject it systematically from galaxies so that the net extinction declines with time – a galactic reveal sort of like a cosmic version of the dance of the seven veils. The rate of ejection for all galaxies must necessarily be fine-tuned to balance the barely evolving UV luminosity function with the rapidly evolving dark matter halo mass function. This evolution of the extinction has to coordinate with the dark matter evolution over a rather small window of cosmic time, there being only ∼10⁸ yr between z = 14 and 11. This seems like an implausible way to explain an unchanging luminosity density, which is more naturally explained by simply having stars form and be there for their natural lifetimes.
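
If you want to verify that window of cosmic time, it is quick to compute with astropy (assuming it is installed) for a standard flat ΛCDM cosmology; the exact number shifts a little with the adopted parameters.

# Cosmic time elapsed between z = 14 and z = 11 for Planck 2018 parameters.
from astropy.cosmology import Planck18

dt = Planck18.age(11) - Planck18.age(14)
print(dt.to("Myr"))   # a bit over 100 Myr, i.e. ~1e8 yr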

Figure 5 from McGaugh et al. (2024): The UV luminosity function (left) observed by Donnan et al. (2024; points) compared to that predicted for ΛCDM by Yung et al. (2023; lines) as a function of redshift. Lines and points are color coded by redshift, with dark blue, light blue, green, orange, and red corresponding to z = 9, 10, 11, 12, and 14, respectively. There is a clear excess in the number density of galaxies that becomes more pronounced with redshift, ranging from a factor of ∼2 at z = 9 to an order of magnitude at z ≥ 11 (right). This excess occurs because the predicted number of sources declines with redshift while the observed numbers remain nearly constant, with the data at z = 9, 10, and 11 being right on top of each other.

The basic observation is that there is too much UV light produced by galaxies at all redshifts z > 9. What we’d rather have is the stellar mass function. JWST was designed to see optical light at the redshift of galaxy formation, but the universe surprised us and formed so many stars so early that we are stuck making inferences with the UV anyway. The relation of UV light to mass is dodgy, providing a knob to twist. So up next is the physics of light production.

In our discussion to this point, we have assumed that we know how to compute the luminosity evolution of a stellar population given a prescription for its star formation history. This is no small feat. This subject has a rich history with plenty of ups and downs, like most of astronomy. I’m not going to attempt to review all that here. I think we have this figured out well enough to do what we need to do for the purposes of our discussion here, but there are some obvious knobs to turn, so let’s turn ’em.

Blame the stars!

As noted above, we predict mass but observe light. So the program now is to squeeze more light out of less mass. Early dark matter halos too small? No problem; just make them brighter. More specifically, we need to make models in which the small dark matter halos that form first are better at producing photons from the small amount of baryons that they possess than are their low-redshift descendants. We have observational constraints on the latter; local star formation is inefficient, but maybe that wasn’t always the case. So the first obvious thing to try is to make star formation more efficient.

Super Efficient Star Formation

First, note that stellar populations evolve pretty much as we expect for stars, so this is a bit tricky. We have to retain the evolution we understand well for most of cosmic time while giving a big boost at early times. One way to do that is to have two distinct modes of star formation: the one we think of as normal that persists to this day, and an additional mode of super-efficient star formation (SESF) at play in the early universe. This way we retain the usual results while potentially giving us the extra boost that we need to explain the JWST data. We argue that this is the least implausible path to preserving LCDM. We’re trying to make it work, and anticipate the arguments Dr. Z would make.

This SESF mode of star formation needs to be very efficient indeed, as there are galaxies that appear to have converted essentially all of their available baryons into stars. Let’s pause to observe that this is pretty silly. Space is very empty; it is hard to get enough mass together to form stars at all: there’s good reason that it is inefficient locally! The early universe is a bit denser by virtue of being smaller; at z = 9 the expansion factor is only 1/(1+z) = 0.1 of what it is now, so the density is (1+z)³ = 1,000 times greater. ON AVERAGE. That’s not really a big boost when it comes to forming structures like stars since the initial condition was extraordinarily uniform. The lack of early structure by far outweighs the difference in density; that is precisely why we’re having a problem. Still, I can at least imagine that there are regions that experience a cascade of violent relaxation and SESF once some threshold in gas density is exceeded that differentiates the normal mode of star formation from SESF. Why a threshold in the gas? Because there’s not anything obvious in the dark matter picture to distinguish the galaxies that result from one or the other mode. CDM itself is scale free, after all, so we have to imagine a scale set by baryons that funnels protogalaxies into one mode or the other. Why, physically, is there a particular gas density that makes that happen? That’s a great question.

There have been observational indications that local star formation is related to a gas surface density threshold, so maybe there’s another threshold that kicks it up another notch. That’s just a plausibility argument, but that’s the straw I’m clutching at to justify SESF as the least implausible option. We know there’s at least one way in which a surface density scale might matter to star formation.

Writing out the (1+z)³ argument for the density above tickled the memory that I’d seen something similar claimed elsewhere. Looking it up, indeed Boylan-Kolchin (2024) does this, getting an extra (1+z)³ [for a total of (1+z)⁶] by invoking a surface density Σ that follows from an acceleration scale g: Σ = g/(πG). Very MONDish, that. At any rate, the extra boost is claimed to lift a corner of dark matter halo parameter space into the realm of viability. So, sure. Why not make that step two.
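
Out of curiosity, one can evaluate that surface density at g ≈ a0 (my choice of scale here, purely to see where it lands): the result is within a factor of two of MOND’s characteristic surface density a0/(2πG) ≈ 140 M☉ pc⁻², which is why it reads as MONDish.

# Evaluate Sigma = g/(pi G) at g = a0 (an assumed choice, for illustration only).
import math

G, a0 = 6.674e-11, 1.2e-10          # SI units
MSUN, PC = 1.989e30, 3.086e16       # kg, m

sigma = a0 / (math.pi * G)          # kg/m^2
print(f"{sigma * PC**2 / MSUN:.0f} Msun/pc^2")   # ~270, vs a0/(2 pi G) ~ 140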

However we do it, making stars super-efficiently is what the data appear to require – if we confine our consideration to the mass predicted by LCDM. It’s a way of covering the lack of mass with a surplus of stars. Any mechanism that makes stars more efficiently will boost the dotted lines in the M*-z diagram above in the right direction. Do they map into the data (and the monolithic model) as needed? Unclear! All we’ve done so far is offer plausibility arguments that maybe it could be so, not demonstrate a model that works without fine-tuning that woulda coulda shoulda made the right prediction in the first place.

The ideas become less plausible from here.

Blame the IMF!

The next obvious idea after making more stars in total is to just make more of the high mass stars that produce UV photons. The IMF is a classic boogeyman to accomplish this. I discussed this briefly before, and it came up in a related discussion in which it was suggested that “in the end what will probably happen is that the IMF will be found to be highly redshift dependent.”

OK, so, first, what is the IMF? The Initial Mass Function is the spectrum of masses with which stars form: how many stars of each mass, ranging from the brown dwarf limit (0.08 M☉) to the most massive stars formed (around 100 M☉). The number of stars formed in any star forming event is a strong function of mass: low mass stars are common, high mass stars are rare. Here, though, is the rub: integrating over the whole population, low mass stars contain most of the mass, but high mass stars produce most of the light. This makes the conversion of mass to light quite sensitive to the IMF.
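
To make that lopsidedness concrete, here is a toy integration with a single power-law (Salpeter-like) IMF and a crude L ∝ m^3.5 luminosity scaling. Both are simplifying assumptions (the real IMF turns over at low masses and the luminosity-mass relation is not a single power law), but the qualitative conclusion is robust.

# Toy illustration: with a Salpeter-like IMF, dN/dm ~ m^-2.35, from 0.08 to
# 100 Msun and a crude main-sequence scaling L ~ m^3.5, most of the MASS is in
# low-mass stars while most of the LIGHT comes from rare high-mass stars.
# Both the slope and the L(m) scaling are simplifying assumptions.
import numpy as np

m  = np.logspace(np.log10(0.08), np.log10(100.0), 200_000)  # stellar mass [Msun]
dm = np.gradient(m)
dn = m ** -2.35 * dm               # number of stars per mass bin (unnormalized)

mass  = m * dn                     # mass contributed by each bin
light = m ** 3.5 * dn              # toy luminosity contributed by each bin

hi = m > 8.0                       # "high-mass" stars
print(f"stars above 8 Msun: {mass[hi].sum()/mass.sum():.1%} of the mass, "
      f"{light[hi].sum()/light.sum():.1%} of the light")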

The number of UV photons produced by a stellar population is especially sensitive to the IMF as only the most massive and short-lived O and B stars produce them. This is low-hanging fruit for the desperate theorist: just a few more of those UV-bright, short-lived stars, please! If we adjust the IMF to produce more of these high mass stars, then they crank out lots more UV photons (which goes in the direction we need) but they don’t contribute much to the total mass. Better yet, they don’t live long. They’re like icicles as murder weapons in mystery stories: they do their damage then melt away, leaving no further evidence. (Strictly speaking that’s not true: they leave corpses in the form of neutron stars or stellar mass black holes, but those are practically invisible. They also explode as supernovae, boosting the production of metals, but the amount is uncertain enough to get away with murder.)

There is a good plausibility argument for a variable IMF. To form a star, gravity has to overcome gas pressure to induce collapse. Gas pressure depends on temperature, and interstellar gas can cool more efficiently when it contains some metals (here I mean metals in the astronomy sense, which is everything in the periodic table that’s not hydrogen or helium). It doesn’t take much; a little oxygen (one of the first products of supernova explosions) goes a long way to make cooling more efficient than a primordial gas composed of only hydrogen and helium. Consequently, low metallicity regions have higher gas temperatures, so it makes sense that gas clouds would need more gravity to collapse, leading to higher mass stars. The early universe started with zero metals, and it takes time for stars to make them and to return them to the interstellar medium, so voila: metallicity varies with time so the IMF varies with redshift.

This sound physical argument is simple enough to make that it can be done in a small part of a blog post. This has helped it persist in our collective astronomical awareness for many decades. Unfortunately, it appears to have bugger-all to do with reality.

If metallicity plays a strong role in determining the IMF, we would expect to see it in stellar populations of different metallicity. We measure the IMF for solar metallicity stars in the solar neighborhood. Globular clusters are composed of stars formed shortly after the Big Bang and have low metallicities. So following this line of argument, we anticipate that they would have a different IMF. There is no evidence that this is the case. Still, we only really need to tweak the high-mass end of the IMF, and those stars died a long time ago, so maybe this argument applies for them if not for the long-lived, low-mass stars that we observe today.

In addition to counting individual stars, we can get a constraint on the galaxy-wide average IMF from the scatter in the Tully-Fisher relation. The physical relation depends on mass, but we rely on light to trace that. So if the IMF varies wildly from galaxy to galaxy, it will induce scatter in Tully-Fisher. This is not observed; the amount of intrinsic scatter that we see is consistent with that expected for stochastic variations in the star formation history for a fixed IMF. That’s a pretty strong constraint, as it doesn’t take much variation in the IMF to cause a lot of scatter that we don’t see. This constraint applies to entire galaxies, so it tolerates variations in the IMF in individual star forming events, but whatever is setting the IMF apparently tends to the same result when averaged over the many star forming events it takes to build a galaxy.

Variation in the IMF has come up repeatedly over the years because it provides so much convenient flexibility. Early in my career, it was commonly invoked to explain the variation in spectral hardness with metallicity. If one looks at the spectra of HII regions (interstellar gas ionized by hot young stars), there is a trend for lower metallicity HII regions to be ionized by hotter stars. The argument above was invoked: clearly the IMF tended to have more high mass stars in low metallicity environments. However, the light emitted by stars also depends on metallicity; low metallicity stars are bluer than their high metallicity equivalents because there are fewer UV absorption lines from iron in their atmospheres. Taking care to treat the stars and interstellar gas self-consistently and integrating over a fixed IMF, I showed that the observed variation in spectral hardness was entirely explained by the variation in metallicity. There didn’t need to be more high mass stars in low metallicity regions; the stars were just hotter because that’s what happens in low metallicity stars. (I didn’t set out to do this; I was just trying to calibrate an abundance indicator that I would need for my thesis.)

Another example where excess high mass stars were invoked was to explain the apparently high optical depth to the surface of last scattering reported by WMAP. If those words don’t mean anything to you, don’t worry – all it means is that a couple of decades ago, we thought we needed lots more UV photons at high redshift (z ~ 17) than CDM naturally provided. The solution was, you guessed it, an IMF rich in high mass stars. Indeed, this result launched a thousand papers on supermassive Population III stars that didn’t pan out for reasons that were easily anticipated at the time. Nowadays, analyses of the Planck data suggest a much lower optical depth than initially inferred by WMAP, but JWST is observing too many UV photons at high redshift to remain consistent with Planck. This apparent tension for LCDM is a natural consequence of early structure formation in MOND; indeed, it is another thing that was specifically predicted (see section 3.1 of McGaugh 2004).

I relate all these stories of encounters with variations in the high mass end of the IMF because they’ve never once panned out. Maybe this time will be different.

Stochastic Star Formation

What else can we think up? There’s always another possibility. It’s a big universe, after all.

One suggestion I haven’t discussed yet is that high redshift galaxies appear overly bright from stochastic fluctuations in their early star formation. This again invokes the dubious relation between stellar mass and UV light, but in a more subtle way than simply stocking the IMF with a bunch more high mass stars. Instead, it notes that the instantaneous star formation rate is stochastic. The massive stars that produce all the UV light are short-lived, so the number present will fluctuate up and down. Over time, this averages out, but there hasn’t been much time yet in the early universe. So maybe the high redshift galaxies that seem to be over-luminous are just those that happen to be near a peak in the ups and downs of star formation. Galaxies will be brightest and most noticeable in this peak phase, so the real mass is less than it appears – albeit there must be a lot of galaxies in the off phase for every one that we see in the on phase.

One expects a lot of scatter in the inferred stellar mass in the early universe due to stochastic variations in the star formation rate. As time goes on, these average out and the inferred stellar mass becomes steady. That’s pretty much what is observed (data). The data track the monolithic model (purple line) and sometimes exceed it in the early, stochastic phase. The data bear no resemblance to hierarchical LCDM (orange line).

This makes a lot of sense to me. Indeed, it should happen at some level, especially in the chaotic early universe. It is also what I infer to be going on to explain why some measurements scatter above the monolithic line. That is the baseline star formation history for this population, with some scatter up and down at early times. Simply scattering from the orange LCDM line isn’t going to look like the purple monolithic line. The shape is wrong and the amplitude difference is too great to overcome in this fashion.

What else?

I’m sure we’ll come up with something, but I think I’ve covered everything I’ve heard so far. Indeed, most of these possibilities are obvious enough that I thought them up myself and wrote about them in McGaugh et al. (2024). I don’t see anything in the wide-ranging discussion at KITP that wasn’t already in my paper.

I note this because I want to point out that we are following a well-worn script. This is the part where I tick off all the possibilities for more complicated LCDM models and point out their shortcomings. I expect the same response:

That’s too long to read. Dr. Z says it works, so he must be right since we already know that LCDM is correct.

Triton Station, 8 February 2022

People will argue about which of these auxiliary hypotheses is preferable. MOND is not an auxiliary hypothesis, but an entirely different paradigm, so it won’t be part of the discussion. After some debate, one of the auxiliaries (SESF not IMF!) will be adopted as the “standard” picture. This will be repeated until it becomes familiar, and once it is familiar it will seem that it was always so, and then people will assert that there was never a problem, indeed, that we expected it all along. This self-gaslighting reminds me of Feynman’s warning:

The first principle is that you must not fool yourself and you are the easiest person to fool.

Richard Feynman

What is persistently lacking in the community is any willingness to acknowledge, let alone engage with, the deeper question of why we have to keep invoking ad hoc patches to somehow match what MOND correctly predicted a priori. The sociology of invoking arbitrary auxiliary hypotheses to make these sorts of excuses for LCDM has been so consistently on display for so long that I wrote this parody a year ago:


It always seems to come down to special pleading:

Please don’t falsify LCDM! I ran out of computer time. I had a disk crash. I didn’t have a grant for supercomputer time. My simulation data didn’t come back from the processing center. A senior colleague insisted on a rewrite. Someone stole my laptop. There was an earthquake, a terrible flood, locusts! It wasn’t my fault! I swear to God!

And the community loves LCDM, so we fall for it every time.

Oh, LCDM. LCDM, honey.

PS – to appreciate the paraphrased quotes here, you need to hear it as it would be spoken by the pictured actors. So if you do not instantly recognize this scene from the Blues Brothers, you need to correct this shortcoming in your cultural education to get the full effect of the reference.

On the timescale for galaxy formation

On the timescale for galaxy formation

I’ve been wanting to expand on the previous post ever since I wrote it, which is over a month ago now. It has been a busy end to the semester. Plus, there’s a lot to say – nothing that hasn’t been said before, somewhere, somehow, yet still a lot to cobble together into a coherent story – if that’s even possible. This will be a long post, and there will be more after to narrate the story of our big paper in the ApJ. My sole ambition here is to express the predictions of galaxy formation theory in LCDM and MOND in the broadest strokes.

A theory is only as good as its prior. We can always fudge things after the fact, so what matters most is what we predict in advance. What do we expect for the timescale of galaxy formation? To tell you what I’m going to tell you, it takes a long time to build a massive galaxy in LCDM, but it happens much faster in MOND.

Basic Considerations

What does it take to make a galaxy? A typical giant elliptical galaxy has a stellar mass of 9 × 10¹⁰ M⊙. That’s a bit more than our own Milky Way, which has a stellar mass of 5 or 6 × 10¹⁰ M⊙ (depending who you ask) with another 10¹⁰ M⊙ or so in gas. So, in classic astronomy/cosmology style, let’s round off and say a big galaxy is about 10¹¹ M⊙. That’s a hundred billion stars, give or take.

An elliptical galaxy (NGC 3379, left) and two spiral galaxies (NGC 628 and NGC 891, right).

How much of the universe does it take to make one big galaxy? The critical density of the universe is the over/under point for whether an expanding universe expands forever, or has enough self-gravity to halt the expansion and ultimately recollapse. Numerically, this quantity is ρ_crit = 3H₀²/(8πG), which for H₀ = 73 km/s/Mpc works out to 10⁻²⁹ g/cm³ or 1.5 × 10⁻⁷ M⊙/pc³. This is a very small number, but provides the benchmark against which we measure densities in cosmology. The density of any substance X is Ω_X = ρ_X/ρ_crit. The stars and gas in galaxies are made of baryons, and we know the baryon density pretty well from Big Bang Nucleosynthesis: Ω_b = 0.04. That means the average density of normal matter is very low, only about 4 × 10⁻³¹ g/cm³. That’s less than one hydrogen atom per cubic meter – most of space is an excellent vacuum!

This being the case, we need to scoop up a large volume to make a big galaxy. Going through the math, to gather up enough mass to make a 10¹¹ M⊙ galaxy, we need a sphere with a radius of 1.6 Mpc. That’s in today’s universe; in the past the universe was denser by (1+z)³, so at z = 10 that’s “only” 140 kpc. Still, modern galaxies are much smaller than that; the effective edge of the disk of the Milky Way is at a radius of about 20 kpc, and most of the baryonic mass is concentrated well inside that: the typical half-light radius of a 10¹¹ M⊙ galaxy is around 6 kpc. That’s a long way to collapse.
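
If you want to check this arithmetic yourself, here is a quick back-of-the-envelope script. The inputs are just the numbers quoted above (H₀ = 73 km/s/Mpc, Ω_b = 0.04, a 10¹¹ M⊙ galaxy); everything else is unit conversion.

```python
# Back-of-the-envelope check of the numbers above.  Assumed inputs are the
# ones already quoted: H0 = 73 km/s/Mpc, Omega_b = 0.04, a 1e11 Msun galaxy.
import numpy as np

G    = 6.674e-8                      # cm^3 g^-1 s^-2
Msun = 1.989e33                      # g
pc   = 3.086e18                      # cm

H0 = 73 * 1e5 / (1e6 * pc)           # 73 km/s/Mpc in 1/s
rho_crit = 3 * H0**2 / (8 * np.pi * G)
print(rho_crit)                      # ~1e-29 g/cm^3

rho_crit_astro = rho_crit * pc**3 / Msun      # ~1.5e-7 Msun/pc^3
rho_b = 0.04 * rho_crit_astro                 # mean baryon density

M = 1e11                                      # Msun, our fiducial big galaxy
R = (3 * M / (4 * np.pi * rho_b)) ** (1 / 3)  # pc
print(R / 1e6)                                # ~1.6 Mpc today
print(R / 1e3 / (1 + 10))                     # ~145 kpc at z = 10
```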

Monolithic Galaxy Formation

Given this much information, an early concept was monolithic galaxy formation. We have a big ball of gas in the early universe that collapses to form a galaxy. Why and how this got started was fuzzy. But we knew how much mass we needed and the volume it had to come from, so we can consider what happens as the gas collapses to create a galaxy.

Here we hit a big astrophysical reality check. Just how does the gas collapse? It has to dissipate energy to do so, and cool to form stars. Once stars form, they may feed energy back into the surrounding gas, reheating it and potentially preventing the formation of more stars. These processes are nontrivial to compute ab initio, and attempting to do so obsesses much of the community. We don’t agree on how these things work, so they are the knobs theorists can turn to change an answer they don’t like.

Even if we don’t understand star formation in detail, we do observe that stars have formed, and can estimate how many. Moreover, we do understand pretty well how stars evolve once formed. Hence a common approach is to build stellar population models with some prescribed star formation history and see what works. Spiral galaxies like the Milky Way formed a lot of stars in the past, and continue to do so today. To make 5 × 10¹⁰ M⊙ of stars in 13 Gyr requires an average star formation rate of 4 M⊙/yr. The current measured star formation rate of the Milky Way is estimated to be 2 ± 0.7 M⊙/yr, so the star formation rate has been nearly constant (averaging over stochastic variations) over time, perhaps with a gradual decline. Giant elliptical galaxies, in contrast, are “red and dead”: they have no current star formation and appear to have made most of their stars long ago. Rather than a roughly constant rate of star formation, they peaked early and declined rapidly. The cessation of star formation is also called quenching.

A common way to formulate the star formation rate in galaxies as a whole is the exponential star formation rate, SFR(t) = SFR₀ e^(−t/τ). A spiral galaxy has a low baseline star formation rate SFR₀ and a long burn time τ ~ 10 Gyr while an elliptical galaxy has a high initial star formation rate and a short e-folding time like τ ~ 1 Gyr. Many variations on this theme are possible, and are of great interest astronomically, but this basic distinction suffices for our discussion here. From the perspective of the observed mass and stellar populations of local galaxies, the standard picture for a giant elliptical was a large, monolithic island universe that formed the vast majority of its stars early on then quenched with a short e-folding timescale.
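
To make this concrete, here is a small sketch of the two toy star formation histories. Integrating the exponential gives M⋆(t) = SFR₀ τ (1 − e^(−t/τ)). The particular values of SFR₀ below are illustrative assumptions, not fits to any real galaxy, but they reproduce the rough numbers above: a few solar masses per year sustained for a Hubble time for a spiral, and most of the stellar mass in place within the first couple of Gyr for an elliptical.

```python
# A small sketch of the two toy star formation histories above,
# SFR(t) = SFR0 * exp(-t/tau).  The SFR0 values are illustrative
# assumptions, not fits to any particular galaxy.
import numpy as np

def mass_formed(SFR0, tau, t):
    """Stellar mass (Msun) formed by time t for SFR(t) = SFR0 e^(-t/tau).

    SFR0 in Msun/yr; tau and t in Gyr.  Integrating the exponential gives
    M*(t) = SFR0 * tau * (1 - exp(-t/tau))."""
    return SFR0 * tau * 1e9 * (1.0 - np.exp(-t / tau))

t_now = 13.0   # Gyr, roughly the age of the universe

# sanity check of the average rate quoted above for the Milky Way
print(5e10 / (t_now * 1e9))                          # ~4 Msun/yr to build 5e10 Msun of stars

# spiral: low baseline rate, long burn time
print(mass_formed(SFR0=5.0, tau=10.0, t=t_now))      # a few x 10^10 Msun, still forming stars today

# giant elliptical: high initial rate, quenched quickly
m_final = mass_formed(SFR0=100.0, tau=1.0, t=t_now)  # ~1e11 Msun
m_early = mass_formed(SFR0=100.0, tau=1.0, t=2.0)    # formed in the first 2 Gyr
print(m_final, m_early / m_final)                    # ~86% of the stars are in place by t = 2 Gyr
```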

Galaxies as Island Universes

The density parameter Ω provides another useful way to think about galaxy formation. As cosmologists, we obsess about the global value of Ω because it determines the expansion history and ultimate fate of the universe. Here it has a more modest application. We can think of the region in the early universe that will ultimately become a galaxy as its own little closed universe. With a density parameter Ω > 1, it is destined to recollapse.

A fun and funny fact of the Friedmann equation is that the matter density parameter Ω_m → 1 at early times, so the early universe when galaxies form is matter dominated. It is also very uniform (more on that below). So any subset that is a bit more dense than average will have Ω > 1 just because the average is very close to Ω = 1. We can then treat this region as its own little universe (a “top-hat overdensity”) and use the Friedmann equation to solve for its evolution, as in this sketch:

The expansion of the early universe a(t) (blue line). A locally overdense region may behave as a closed universe, recollapsing in a finite time (red line) to potentially form a galaxy.

That’s great, right? We have a simple, analytic solution derived from first principles that explains how a galaxy forms. We can plug in the numbers to find how long it takes to form our basic, big 10¹¹ M⊙ galaxy and… immediately encounter a problem. We need to know how overdense our protogalaxy starts out. Is its effective initial Ω_m = 2? 10? What value, at what time? The higher it is, the faster the evolution from initially expanding along with the rest of the universe to decoupling from the Hubble flow to collapsing. We know the math but we still need to know the initial condition.

Annoying Initial Conditions

The initial condition for galaxy formation is observed in the cosmic microwave background (CMB) at z = 1090. Where today’s universe is remarkably lumpy, the early universe is incredibly uniform. It is so smooth that it is homogeneous and isotropic to one part in a hundred thousand. This is annoyingly smooth, in fact. It would help to have some lumps – primordial seeds with Ω > 1 – from which structure can grow. The observed seeds are too tiny; the typical initial amplitude is 10⁻⁵ so Ω_m = 1.00001. That takes forever to decouple and recollapse; it hasn’t yet had time to happen.
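
To put a number on “forever”: for a matter-only top-hat, the cycloid solution of the Friedmann equation gives a recollapse time of t_coll = πΩ_i/[H_i(Ω_i − 1)^(3/2)], where Ω_i and H_i are evaluated at the starting epoch. A short sketch:

```python
# How long does a slightly overdense "little closed universe" take to recollapse?
# For a matter-only top-hat, the cycloid solution of the Friedmann equation gives
# t_collapse = pi * Omega_i / (H_i * (Omega_i - 1)**1.5), with Omega_i and H_i
# evaluated at the starting epoch.  Expressing time in units of the Hubble time
# at that epoch (1/H_i) makes the comparison epoch-independent.
import numpy as np

def t_collapse(Omega_i):
    """Recollapse time of a top-hat overdensity in units of 1/H_i."""
    return np.pi * Omega_i / (Omega_i - 1.0) ** 1.5

for Om in (2.0, 10.0, 1.00001):
    print(f"Omega_i = {Om}: recollapses after {t_collapse(Om):.3g} Hubble times")

# Omega_i = 2 or 10 recollapses within a few Hubble times of the starting epoch;
# Omega_i = 1.00001 (the observed CMB amplitude) takes ~1e8 Hubble times.
```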

The cosmic microwave background as observed by ESA’s Planck satellite. This is an all-sky picture of the relic radiation field – essentially a snapshot of the universe when it was just a few hundred thousand years old. The variations in color are variations in temperature which correspond to variations in density. These variations are tiny, only about one part in 100,000. The early universe was very uniform; the real picture is a boring blank grayscale. We have to crank the contrast way up to see these minute variations.

We would like to know how the big galaxies of today – enormous agglomerations of stars and gas and dust separated by inconceivably vast distances – came to be. How can this happen starting from such homogeneous initial conditions, where all the mass is equally distributed? Gravity is an attractive force that makes the rich get richer, so it will grow the slight initial differences in density, but it is also weak and slow to act. A basic result in gravitational perturbation theory is that overdensities grow at the same rate the universe expands, which is inversely related to redshift. So if we see tiny fluctuations in density with amplitude 10⁻⁵ at z = 1000, they should have only grown by a factor of 1000 and still be small today (10⁻² at z = 0). But we see structures of much higher contrast than that. You can’t get here from there.
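
The mismatch is easy to tally. In the linear regime, overdensities grow roughly in proportion to the scale factor, δ ∝ a ∝ 1/(1+z):

```python
# Back-of-the-envelope: linear growth available vs. growth needed.
z_rec   = 1090      # redshift of recombination (CMB)
delta_i = 1e-5      # observed fluctuation amplitude in the CMB
growth_available = 1 + z_rec       # delta grows roughly as the scale factor, 1/(1+z)
growth_needed    = 1.0 / delta_i   # to reach delta ~ 1 (a collapsed structure) by today
print(growth_available, growth_needed, growth_needed / growth_available)
# ~1.1e3 available vs. 1e5 needed: a factor of ~100 short without help
```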

The rich large scale structure we see today is impossible starting from the smooth observed initial conditions. Yet here we are, so we have to do something to goose the process. This is one of the original motivations for invoking cold dark matter (CDM). If there is a substance that does not interact with photons, it can start to clump up early without leaving too large a mark on the relic radiation field. In effect, the initial fluctuations in mass are larger, just in the invisible substance. (That’s not to say the CDM doesn’t leave a mark on the CMB; it does, but it is subtle and entirely another story.) So the idea is that dark matter forms gravitational structures first, and the baryons fall in later to make galaxies.

An illustration of the linear growth of overdensities. Structure can grow in the dark matter (long dashed lines) with the baryons catching up only after decoupling (short dashed line). In effect, the dark matter gives structure formation a head start, nicely explaining the apparently impossible growth factor. This has been the standard picture for what seems like forever (illustration from Schramm 1992).

With the right amount of CDM – and it has to be just the right amount of a dynamically cold form of non-baryonic dark matter (stuff we still don’t know actually exists) – we can explain how the growth factor is 10⁵ since recombination instead of a mere 10³. The dark matter got a head start over the stuff we can see; it looks like 10⁵ because the normal matter lagged behind, being entangled with the radiation field in a way the dark matter was not.

This has been the imperative need in structure formation theory for so long that it has become undisputed lore; an element of the belief system so deeply embedded that it is practically impossible to question. I risk getting ahead of the story, but it is important to point out that, like the interpretation of so much of the relevant astrophysical data, this belief assumes that gravity is normal. This assumption dictates the growth rate of structure, which in turn dictates the need to invoke CDM to allow structure to form in the available time. If we drop this assumption, then we have to work out what happens in each and every alternative that we might consider. That definitely gets ahead of the story, so first let’s understand what we should expect in LCDM.

Hierarchical Galaxy formation in LCDM

LCDM predicts some things remarkably well but others not so much. The dark matter is well-behaved, responding only to gravity. Baryons, on the other hand, are messy – one has to worry about hydrodynamics in the gas, star formation, feedback, dust, and probably even magnetic fields. In a nutshell, LCDM simulations are very good at predicting the assembly of dark mass, but converting that into observational predictions relies on our incomplete knowledge of messy astrophysics. We know what the mass should be doing, but we don’t know so well how that translates to what we see. Mass good, light bad.

Starting with the assembly of mass, the first thing we learn is that the story of monolithic galaxy formation outlined above has to be wrong. Early density fluctuations start out tiny, even in dark matter. God didn’t plunk down island universes of galaxy mass then say “let there be galaxies!” The annoying initial conditions mean that little dark matter halos form first. These subsequently merge hierarchically to make ever bigger halos. Rather than top-down monolithic galaxy formation, we have the bottom-up hierarchical formation of dark matter halos.

The hierarchical agglomeration of dark matter halos into ever larger objects is often depicted as a merger tree. Here are four examples from the high resolution Illustris TNG50 simulation (Pillepich et al. 2019; Nelson et al. 2019).

Examples of merger trees from the TNG50-1 simulation (Pillepich et al. 2019; Nelson et al. 2019). Objects have been selected to have very nearly the same stellar mass at z=0. Mass is built up through a series of mergers. One large dark matter halo today (at top) has many antecedents (small halos at bottom). These merge hierarchically as illustrated by the connecting lines. The size of the symbol is proportional to the halo mass. I have added redshift and the corresponding age of the universe for vanilla LCDM in a more legible font. The color bar illustrates the specific star formation rate: the top row has objects that are still actively star forming like spirals; those in the bottom row are “red and dead” – things that have stopped forming stars, like giant elliptical galaxies. In all cases, there is a lot of merging and a modest rate of growth, with the typical object taking about half a Hubble time (~7 Gyr) to assemble half of its final stellar mass.

The hierarchical assembly of mass is generic in CDM. Indeed, it is one of its most robust predictions. Dark matter halos start small, and grow larger by a succession of many mergers. This gradual agglomeration is slow: note how tiny the dark matter halos at z = 10 are.

Strictly speaking, it isn’t even meaningful to talk about a single galaxy over the span of a Hubble time. It is hard to avoid this mental trap: surely the Milky Way has always been the Milky Way? So one imagines its evolution over time. This is monolithic thinking. Hierarchically, “the galaxy” refers at best to the largest progenitor, the object that traces the left edge of the merger trees above. But the other protogalactic chunks that eventually merge together are as much part of the final galaxy as the progenitor that happens to be largest.

This complicated picture is complicated further by what we can see being stars, not mass. The luminosity we observe forms through a combination of in situ growth (star formation in the largest progenitor) and ex situ growth through merging. There is no reason for some preferred set of protogalaxies to form stars faster than the others (though of course there is some scatter about the mean), so presumably the light traces the mass of stars formed traces the underlying dark mass. Presumably.

That we should see lots of little protogalaxies at high redshift is nicely illustrated by this lookback cone from Yung et al (2022). Here the color and size of each point corresponds to the stellar mass. Massive objects are common at low redshift but become progressively rare at high redshift, petering out at z > 4 and basically absent at z = 10. This realization of the observable stellar mass tracks the assembly of dark mass seen in merger trees.

Fig. 2 from Yung et al. (2022) illustrating what an observer would see looking back through their simulation to high redshift.

This is what we expect to see in LCDM: lots of small protogalaxies at high redshift; the building blocks of later galaxies that had not yet merged. The observation of galaxies much brighter than this at high redshift by JWST poses a fundamental challenge to the paradigm: mass appears not to be subdivided as expected. So it is entirely justifiable that people have been freaking out that what we see are bright galaxies that are apparently already massive. That shouldn’t happen; it wasn’t predicted to happen; how can this be happening?

That’s all background that is assumed knowledge for our ApJ paper, so we’re only now getting to its Figure 1. This combines one of the merger trees above with its stellar mass evolution. The left panel shows the assembly of dark mass; the right panel shows the growth of stellar mass in the largest progenitor. This is what we expect to see in observations.


Fig. 1 from McGaugh et al (2024): A merger tree for a model galaxy from the TNG50-1 simulation (Pillepich et al. 2019; Nelson et al. 2019, left panel) selected to have M* ≈ 9 × 10¹⁰ M⊙ at z = 0; i.e., the stellar mass of a local L* giant elliptical galaxy (Driver et al. 2022). Mass assembles hierarchically, starting from small halos at high redshift (bottom edge) with the largest progenitor traced along the left edge of the merger tree. The growth of stellar mass of the largest progenitor is shown in the right panel. This example (jagged line) is close to the median (dashed line) of comparable mass objects (Rodriguez-Gomez et al. 2016), and within the range of the scatter (the shaded band shows the 16th – 84th percentiles). A monolithic model that forms at z_f = 10 and evolves with an exponentially declining star formation rate with τ = 1 Gyr (purple line) is shown for comparison. The latter model forms most of its stars earlier than occurs in the simulation.

For comparison, we also show the stellar mass growth of a monolithic model for a giant elliptical galaxy. This is the classic picture we had for such galaxies before we realized that galaxy formation had to be hierarchical. This particular monolithic model forms at z_f = 10 and follows an exponential star formation rate with τ = 1 Gyr. It is one of the models published by Franck & McGaugh (2017). It is, in fact, the first model I asked Jay to construct when he started the project. Not because we expected it to best describe the data, as it turns out to do, but because the simple exponential model is a touchstone of stellar population modeling. It was a starter model: do this basic thing first to make sure you’re doing it right. We chose τ = 1 Gyr because that was the typical number bandied about for elliptical galaxies, and z_f = 10 because that seemed ridiculously early for a massive galaxy to form. At the time we built the model, it was ludicrously early to imagine a massive galaxy would form, from an LCDM perspective. A formation redshift z_f = 10 was, less than a decade ago, practically indistinguishable from the beginning of time, so we expected it to provide a limit that the data would not possibly approach.

In a remarkably short period, JWST has transformed z = 10 from inconceivable to run of the mill. I’m not going to go into the data yet – this all-theory post is already a lot – but to offer one spoiler: the data are consistent with this monolithic model. If we want to “fix” LCDM, we have to make the red line into the purple line for enough objects to explain the data. That proves to be challenging. But that’s moving the goalposts; the prediction was that we should see little protogalaxies at high redshift, not massive, monolith-style objects. Just look at the merger trees at z = 10!

Accelerated Structure Formation in MOND

In order to address these issues in MOND, we have to go back to the beginning. What is the evolution of a spherical region (a top-hat overdensity) that might collapse to form a galaxy? How does a spherical region under the influence of MOND evolve within an expanding universe?

The solution to this problem was first found by Felten (1984), who was trying to play the Newtonian cosmology trick in MOND. In conventional dynamics, one can solve the equation of motion for a point on the surface of a uniform sphere that is initially expanding and recover the essence of the Friedmann equation. It was reasonable to check if cosmology might be that simple in MOND. It was not. The appearance of a₀ as a physical scale makes the solution scale-dependent: there is no general solution that one can imagine applies to the universe as a whole.

Felten reasonably saw this as a failure. There were, however, some appealing aspects of his solution. For one, there was no such thing as a critical density. All MOND universes would eventually recollapse irrespective of their density (in the absence of the repulsion provided by a cosmological constant). It could take a very long time, which depended on the density, but the ultimate fate was always the same. There was no special value of Ω, and hence no flatness problem. The latter obsessed people at the time, so I’m somewhat surprised that no one seems to have made this connection. Too soon*, I guess.

There it sat for many years, an obscure solution for an obscure theory to which no one gave credence. When I became interested in the problem a decade later, I started methodically checking all the classic results. I was surprised to find how many things we needed dark matter to explain were just as well (or better) explained by MOND. My exact quote was “surprised the bejeepers out of us.” So, what about galaxy formation?

I started with the top-hat overdensity, and had the epiphany that Felten had already obtained the solution. He had been trying to solve all of cosmology, which didn’t work. But he had solved the evolution of a spherical region that starts out expanding with the rest of the universe but subsequently collapses under the influence of MOND. The overdensity didn’t need to be large, it just needed to be in the low acceleration regime. Something like the red cycloidal line in the second plot above could happen in a finite time. But how much?

The solution depends on scale and needs to be solved numerically. I am not the greatest programmer, and I had a lot else on my plate at the time. I was in no rush, as I figured I was the only one working on it. This is usually a good assumption with MOND, but not in this case. Bob Sanders had had the same epiphany around the same time, which I discovered when I received his manuscript to referee. So all credit is due to Bob: he said these things first.

First, he noted that galaxy formation in MOND is still hierarchical. Small things form first. Crudely speaking, structure formation is very similar to the conventional case, but now the goose comes from the change in the force law rather than extra dark mass. MOND is nonlinear, so the whole process gets accelerated. To compare with the linear growth of CDM:

A sketch of how structures grow over time under the influence of cold dark matter (left, from Schramm 1992, same as above) and MOND (right, from Sanders & McGaugh 2002; see also this further discussion and previous post). The slow linear growth of CDM (long-dashed line, left panel) is replaced by a rapid, nonlinear growth in MOND (solid lines at right; numbers correspond to different scales). Nonlinear growth moderates after cosmic expansion begins to accelerate (dashed vertical line in right panel).

The net effect is the same. A cosmic web of large scale structure emerges. They look qualitatively similar, but everything happens faster in MOND. This is why observations have persistently revealed structures that are more massive and were in place earlier than expected in contemporaneous LCDM models.

Simulated structure formation in ΛCDM (top) and MOND (bottom) showing the more rapid emergence of similar structures in MOND (note the redshift of each panel). From McGaugh (2015).

In MOND, small objects like globular clusters form first, but galaxies of a range of masses all collapse on a relatively short cosmic timescale. How short? Let’s consider our typical 10¹¹ M⊙ galaxy. Solving Felten’s equation for the evolution of a sphere numerically, peak expansion is reached after 300 Myr and collapse happens in a similar time. The whole galaxy is in place speedy quick, and the initial conditions don’t really matter: a uniform, initially expanding sphere in the low acceleration regime will behave this way. From our distant vantage point thirteen billion years later, the whole process looks almost monolithic (the purple line above) even though it is a chaotic hierarchical mess for the first few hundred million years (z > 14). In particular, it is easy to form half of the stellar mass early on: the mass is already assembled.

The evolution of a 10¹¹ M⊙ sphere that starts out expanding with the universe but decouples and collapses under the influence of MOND (dotted line). It reaches maximum expansion after 300 Myr and recollapses in a similar time, so the entire object is in place after 600 Myr. (A version of this plot with a logarithmic time axis appears as Fig. 2 in our paper.) The inset shows the evolution of smaller shells within such an object (Fig. 2 from Sanders 2008). The inner regions collapse first followed by outer shells. These oscillate and cross, mixing and ultimately forming a reasonable size galaxy – see Sanders’s Table 1 and also his Fig. 4 for the collapse times for objects of other masses. These early results are corroborated by Eappen et al. (2022), who further demonstrate that the details of feedback are not important in MOND, unlike LCDM.
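
For anyone who wants to play with this, here is a minimal sketch of the kind of calculation involved – not the code behind the figure. It integrates the motion of the edge of a uniform 10¹¹ M⊙ sphere of baryons that starts out expanding with the Hubble flow and is decelerated by MOND gravity, with a crude interpolation between the Newtonian and deep-MOND regimes. The starting redshift, the baryon-only background expansion rate, and the interpolation are assumptions made for illustration; Felten (1984) and Sanders (2008) do this properly. The qualitative behavior should come out the same: turnaround after a couple hundred Myr and recollapse on a similar timescale.

```python
# Minimal sketch of a MONDian top-hat (not the published calculation):
# the edge of a uniform sphere of baryons, mass M, starts out expanding
# with the Hubble flow and is decelerated by MOND gravity.
# Assumed for illustration: M = 1e11 Msun, a0 ~ 1.2e-10 m/s^2, a starting
# redshift of 200, and a baryon-only background expansion rate.
import numpy as np
from scipy.integrate import solve_ivp

G  = 4.301e-6        # kpc (km/s)^2 / Msun
a0 = 3.7e3           # MOND scale, ~1.2e-10 m/s^2, in (km/s)^2 / kpc
M  = 1e11            # Msun of baryons
H0 = 73.0 / 1e3      # km/s per kpc
Ob = 0.04

zi = 200.0                       # start after the baryons decouple thermally
ri = 1.6e3 / (1 + zi)            # kpc: the 1.6 Mpc sphere, scaled to z = zi
Hi = H0 * np.sqrt(Ob * (1 + zi)**3 + (1 - Ob) * (1 + zi)**2)   # baryon-only background
vi = Hi * ri                     # initially expanding with the Hubble flow

def accel(r):
    gN = G * M / r**2            # Newtonian gravity of the enclosed mass
    return gN if gN > a0 else np.sqrt(gN * a0)   # crude MOND interpolation

def rhs(t, y):
    r, v = y
    return [v, -accel(r)]

def collapsed(t, y):             # stop once it has recollapsed to galaxy size
    return y[0] - 5.0
collapsed.terminal, collapsed.direction = True, -1

# time unit: 1 kpc/(km/s) = 0.978 Gyr
sol = solve_ivp(rhs, [0, 2.0], [ri, vi], events=collapsed, max_step=1e-3, rtol=1e-8)
t, r = sol.t * 0.978, sol.y[0]
i = np.argmax(r)
print(f"turnaround: r ~ {r[i]:.0f} kpc at t ~ {1e3 * t[i]:.0f} Myr")
if sol.t_events[0].size:
    print(f"recollapsed to < 5 kpc by t ~ {1e3 * 0.978 * sol.t_events[0][0]:.0f} Myr")
```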

This is what JWST sees: galaxies that are already massive when the universe is just half a billion years old. I’m sure I should say more but I’m exhausted now and you may be too, so I’m gonna stop here by noting that in 1998, when Bob Sanders predicted that “Objects of galaxy mass are the first virialized objects to form (by z=10),” the contemporaneous prediction of LCDM was that “present-day disc [galaxies] were assembled recently (at z<=1)” and “there is nothing above redshift 7.” One of these predictions has been realized. It is rare in science that such a clear a priori prediction comes true, let alone one that seemed so unreasonable at the time, and which took a quarter century to corroborate.


*I am not quite this old: I was still an undergraduate in 1984. I hadn’t even decided to be an astronomer at that point; I certainly hadn’t started following the literature. The first time I heard of MOND was in a graduate course taught by Doug Richstone in 1988. He only mentioned it in passing while talking about dark matter, writing the equation on the board and saying maybe it could be this. I recall staring at it for a long few seconds, then shaking my head and muttering “no way.” I then completely forgot about it, not thinking about it again until it came up in our data for low surface brightness galaxies. I expect most other professionals have the same initial reaction, which is fair. The test of character comes when it crops up in their data, as it is doing now for the high redshift galaxy community.

Decision Trees & Philosophical Blunders


Given recent developments in the long-running hunt for dark matter and the difficulty interpreting what this means, it seems like a good juncture to re-up* this:


The history of science is a decision tree. Vertices appear where we must take one or another branching. Sometimes, we take the wrong road for the right reasons.

A good example is the geocentric vs. heliocentric cosmology. The ancient Greeks knew that in many ways it made more sense for the earth to revolve around the sun than vice-versa. Yet they were very clever. Ptolemy and others tested for the signature of the earth’s orbit in the seasonal wobbling in the positions of stars, or parallax. If the earth is moving around the sun, nearby stars should appear to move on the sky as the earth moves from one side of the sun to the other. Try blinking back and forth between your left and right eyes to see this effect, noting how nearby objects appear to move relative to distant ones.

Problem is, Ptolemy did not find the parallax. Quite reasonably, he inferred that the earth stayed put. We know now that this was the wrong branch to choose, but it persisted as the standard world view for many centuries. It turns out that even the nearest stars are so distant that their angular parallax is tiny (the angle of parallax is inversely proportional to distance). Precision sufficient for measuring the parallax was not achieved until the 19th century, by which time astronomers were already convinced it must happen.

Ptolemy was probably aware of this possibility, though it must have seemed quite unreasonable to conjecture at that time that the stars could be so very remote. The fact was that parallax was not observed. Either the earth did not move, or the stars were ridiculously distant. Which sounds more reasonable to you?

So, science took the wrong branch. Once this happened, sociology kicked in. Generation after generation of intelligent scholars confirmed the lack of parallax until the opposing branch seemed so unlikely that it became heretical to even discuss. It is very hard to reverse back up the decision tree and re-assess what seems to be such a firm conclusion. It took the Copernican revolution to return to that ancient decision branch and try the other one.

Cosmology today faces a similar need to take a few steps back on the decision tree. The problem now is the issue of the mass discrepancy, typically attributed to dark matter. When it first became apparent that things didn’t add up when one applied the usual Law of Gravity to the observed dynamics of galaxies, there was a choice. Either lots of matter is present which happens to be dark, or the Law of Gravity has to be amended. Which sounds more reasonable to you?

Having traveled down the road dictated by the Dark Matter decision branch, cosmologists find themselves trapped in a web of circular logic entirely analogous to the famous Ptolemaic epicycles. Not many of them realize it yet, much less admit that this is what is going on. But if you take a few steps back up the decision branch, you find a few attempts to alter the equations of gravity. Most of these failed almost immediately, encouraging cosmologists down the dark matter path just as Ptolemy wisely chose a geocentric cosmology. However, one of these theories is not only consistent with the data, it actually predicts many important new results. This theory is known as MOND (MOdified Newtonian Dynamics). It was introduced in 1983 by Moti Milgrom of the Weizmann Institute in Israel.

MOND accurately describes the effective force law in galaxies based only on the observed stars and gas. What this means is unclear, but it clearly means something! It is conceivable that dark and luminous matter somehow interact to mimic the behavior stipulated by MOND. This is not expected, and requires a lot of epicyclic thinking to arrange. The more straightforward interpretation is that MOND is correct, and we took the wrong branch of the decision tree back in the ’70s.

MOND has dire implications for much modern cosmological thought which has developed symbiotically with dark matter. As yet, no one has succeeded in writing down a theory which encompasses both MOND and General Relativity. This leaves open many questions in cosmology that were thought to be solved, such as the expansion history of the universe. There is nothing a scientist hates to do more than unlearn what was thought to be well established. It is this sociological phenomenon that makes it so difficult to climb back up the decision tree to the faulty branching.

Once one returns and takes the correct branch, the way forward is not necessarily obvious. The host of questions which had been assigned seemingly reasonable explanations along the faulty branch must be addressed anew. And there will always be those incapable of surrendering the old world view irrespective of the evidence.

In my opinion, the new successes of MOND cannot occur by accident. They are a strong sign that we are barking up the wrong tree with dark matter. A grander theory encompassing both MOND and General Relativity must exist, even if no one has as yet been clever enough to figure it out (few have tried).

These all combine to make life as a cosmologist interesting. Sometimes it is exciting. Often it is frustrating. Most of the time, “interesting” takes on the meaning implied by the old Chinese curse:

MAY YOU LIVE IN INTERESTING TIMES

Like it or not, we do.


*I wrote this in 2000. I leave it to the reader to decide how much progress has been made since then.

Why’d it have to be MOND?


I want to take another step back in perspective from the last post to say a few words about what the radial acceleration relation (RAR) means and what it doesn’t mean. Here it is again:

The Radial Acceleration Relation over many decades. The grey region is forbidden – there cannot be less acceleration than caused by the observed baryons. The entire region above the diagonal line (yellow) is accessible to dark matter models as the sum of baryons and however much dark matter the model prescribes. MOND is the blue line.

This information was not available when the dark matter paradigm was developed. We observed excess motion, like flat rotation curves, and inferred the existence of extra mass. That was perfectly reasonable given the information available at the time. It is not now: we need to reassess as we learn more.

There is a clear organization to the data at both high and low acceleration. No objective observer with a well-developed physical intuition would look at this and think “dark matter.” The observed behavior does not follow from one force law plus some arbitrary amount of invisible mass. That could do literally anything in the yellow region above, and beyond the bounds of the plot, both upwards and to the left. Indeed, there is no obvious reason why the data don’t fall all over the place. One of the lingering, niggling concerns is the 5:1 ratio of dark matter:baryons – why is it in the same ballpark, when it could be pretty much anything? Why should the data organize in terms of acceleration? There is no reason for dark matter to do this.

Plausible dark matter models have been predicted to do a variety of things – things other than what we observe. The problem for dark matter is that real objects only occupy a tiny line through the vast region available to them in the plot above. This is a fine-tuning problem: why do the data reside only where they do when they could be all over the place? I recognized this as a problem for dark matter before I became aware$ of MOND. That it turns out that the data follow the line uniquely predicted* by MOND is just chef’s kiss: there is a fine-tuning problem for dark matter because MOND is the effective force law.
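
To be explicit about what the blue line is: in the low acceleration regime the MOND prediction is g_obs → √(g_bar a₀), independent of the choice of interpolation function; one common choice (the “simple” function, assumed here just for illustration) gives the sketch below. A dark matter model, by contrast, can place g_obs anywhere at or above g_obs = g_bar, which is the point of the fine-tuning argument.

```python
# Sketch of the blue line: the effective acceleration in MOND as a function of
# the Newtonian acceleration of the baryons.  The "simple" interpolation
# function is assumed here; the deep-MOND limit g_obs -> sqrt(g_bar * a0) does
# not depend on that choice.  Any dark matter model can put g_obs anywhere at
# or above the line g_obs = g_bar (the yellow region in the plot above).
import numpy as np

a0 = 1.2e-10                       # m/s^2
g_bar = np.logspace(-13, -8, 6)    # Newtonian acceleration of the baryons, m/s^2

# simple interpolation: g_obs = g_bar/2 + sqrt(g_bar^2/4 + g_bar*a0)
g_mond = 0.5 * g_bar + np.sqrt(0.25 * g_bar**2 + g_bar * a0)

for gb, gm in zip(g_bar, g_mond):
    print(f"g_bar = {gb:.1e}  ->  g_obs = {gm:.1e}  (ratio {gm / gb:.1f})")
# at high acceleration the ratio -> 1 (Newtonian); at low acceleration it grows
# as sqrt(a0/g_bar), which is where the appearance of missing mass comes from
```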

The argument against dark matter is that the data could reside anywhere in the yellow region above, but don’t. The argument against MOND is that a small portion of the data fall a little off the blue line. Arguing that such objects, be they clusters of galaxies or particular individual galaxies, falsify MOND while ignoring the fine-tuning problem faced by dark matter is a case of refusing to see the forest for a few outlying trees.%

So to return to the question posed in the title of this post, I don’t know why it had to be MOND. That’s just what we observe. Pretending dark matter does the same thing is a false presumption.


$I’d heard of MOND only vaguely, and, like most other scientists in the field, had paid it no mind until it reared its ugly head in my own data.

*I talk about MOND here because I believe in giving credit where credit is due. MOND predicted this; no other theory did so. Dark matter theories did not predict this. My dark matter-based galaxy formation theory did not predict this. Other dark matter-based galaxy formation theories (including simulations) continue to fail to explain this. Other hypotheses of modified gravity also did not predict what is observed. Who+ ordered this?

Modified Dynamics. Very dangerous. You go first.

Many people in the field hate MOND, often with an irrational intensity that has the texture of religion. It’s not as if I woke up one morning and decided to like MOND – sometimes I wish I had never heard of it – but disliking a theory doesn’t make it wrong, and ignoring it doesn’t make it go away. MOND and only MOND predicted the observed RAR a priori. So far, MOND and only MOND provides a satisfactory explanation thereof. We might not like it, but there it is in the data. We’re not going to progress until we get over our fear of MOND and cope with it. Imagining that it will somehow fall out of simulations with just the right baryonic feedback prescription is a form of magical thinking, not science.

MOND. Why’d it have to be MOND?

+Milgrom. Milgrom ordered this.


%I expect many cosmologists would argue the same in reverse for the cosmic microwave background (CMB) and other cosmological constraints. I have some sympathy for this. The fit to the power spectrum of the CMB seems too good to be an accident, and it points to the same parameters as other constraints. Well, mostly – the Hubble tension might be a clue that things could unravel, as if they haven’t already. The situation is not symmetric – where MOND predicted what we observe a priori with a minimum of assumptions, LCDM is an amalgam of one free parameter after another after another: dark matter and dark energy are, after all, auxiliary hypotheses we invented to save FLRW cosmology. When they don’t suffice, we invent more. Feedback is a single word that represents a whole Pandora’s box of extra degrees of freedom, and we can invent crazier things as needed. The result is a Frankenstein’s monster of a cosmology that we all agree is the same entity, but when we examine it closely the pieces don’t fit, and one cosmologist’s LCDM is not really the same as that of the next. They just seem to agree because they use the same words to mean somewhat different things. Simply agreeing that there has to be non-baryonic dark matter has not helped us conjure up detections of the dark matter particles in the laboratory, or given us the clairvoyance to explain# what MOND predicted a priori. So rather than agree that dark matter must exist because cosmology works so well, I think the appearance of working well is a chimera of many moving parts. Rather, cosmology, as we currently understand it, works if and only if non-baryonic dark matter exists in the right amount. That requires a laboratory detection to confirm.

#I have a disturbing lack of faith that a satisfactory explanation can be found.

A Response to Recent Developments Concerning the Gravitational Potential of the Milky Way


In the series of recent posts I’ve made about the Milky Way, I missed an important reply made in the comments by Francois Hammer, one of the eminent scientists doing the work. I was on to writing the next post when he wrote it, and simply didn’t see it until yesterday. Dr. Hammer has some important things to say that are both illustrative of the specific topic and also of how science should work. I wanted to highlight his concerns with their own post, so, with his permission, I cut & paste his comments below, making this, in effect, a guest post by Francois Hammer.


There are two aspects we’d like to mention, as they may help to clarify part of the debate:
1- When saying “Gaia is great, but has its limits. It is really optimized for nearby stars (within a few kpc). Outside of that, the statistics… leave something to be desired. Is it safe to push out beyond 20 kpc?”, one may wonder whether the significance of the Gaia data has really been understood.
In the Eilers et al. 2019 DR2 rotation curve, you may see points with small error bars up to 21-22 kpc. Gaia DR3 provides proper motion (systematic) uncertainties that are 2 times smaller than those from Gaia DR2, so it can easily go to 25 kpc or more.
The gain in quality for parallaxes is indeed smaller (a 30% gain). However, our results cannot be affected by distance estimates, since the large number of stars with parallax estimates in Wang et al. (2023) gives the same rotation curve as that from (a smaller number of) RGB stars with spectrophotometric distances (Ou et al. 2023), i.e., following Eilers et al. 2019. Both show a Keplerian decline, which was already noticeable in the DR2 results of Eilers et al. 2019. The latter authors said in their conclusions: “We do see a mild but significant deviation from the straightly declining circular velocity curve at R ≈ 19–21 kpc of Δv ≈ 15 km s⁻¹.” Our work using Gaia DR3 does nothing more than account for systematics a factor of 2 better, and is thereby able to resolve what looks like a Keplerian decrease of the rotation curve.
We may also mention here that one of us participated in an unprecedented study of the kinematics of the LMC (Gaia Collaboration 2021, Luri’s paper), which is at 50 kpc. Unless one proves that everything people have done about the LMC and MW is wrong, and that the data are too uncertain to conclude anything about what happens at R = 17-25 kpc, the above clarifications about Gaia accuracy are truly necessary for people reading your blog.
2- The argument that the result “violates a gazillion well-established constraints” has to be taken with some caution, since otherwise no one can make any progress in the field. In fact, the problem with many probes (so-called “satellites”) in the MW halo is that one cannot guarantee whether or not their orbits are in equilibrium with the MW potential. The reverse holds for the MW disk, in which the stars are rotating; e.g., at 25 kpc they have likely experienced 7-8 orbits since the last merger (Gaia-Sausage-Enceladus), about 9 billion years ago. In other words, the mass provided by a system mostly at equilibrium likely supersedes masses provided by systems whose equilibrium conditions are not secured. An interesting example of this is given by globular clusters (GCs). If taken as an ensemble of 156 GCs (from the Baumgardt catalog), just by removing Pyxis and Terzan 8 the MW mass inside 50 kpc passes from 5.5 to 2.1 × 10¹¹ M⊙. This is likely because these two GCs may have arrived quite recently, meaning that their initial kinetic energy is still contributing to their total energy. A similar mass overestimate could happen if one counts the LMC or Leo I as MW satellites in equilibrium with the MW potential.
So we agree that near 25 kpc the disk of the MW may show signs of departing from equilibrium, or signs of slightly less circular orbits due to the different phenomena discussed in the blog. However, why take objects for which there is no proof of equilibrium as being the true measurements?
In our work, we have focused considerably on understanding and expanding the whole contribution of systematics, which may come from the Gaia data, but also from assumptions about the stellar profile (i.e., deviations from exponential profiles), from the Sun’s distance and proper motion, and so on. You may find a description in Ou et al.’s Figure 5 and Jiao et al.’s Figure 4, both showing that systematics cannot give much more than a 10% error on circular velocity estimates. This is an area where we are considered by the Local Group community to be quite conservative, following the Gaia specialists with whom we have worked to deliver the EDR3 catalog of dwarf galaxy motions (Li, Hammer, Babusiaux et al. 2021) out to about 150 kpc. The main contribution of the Jiao et al. paper is its fair accounting of systematics, whose analysis shows error bars that are much larger than those from other sources of error, especially in the MW outskirts (see Fig. 2).

Francois Hammer, 24 September 2023

The image at top is Fig. 2 from Jiao et al. illustrating their assessment of the rotation curve and its systematic uncertainties.

Take it where?


I had written most of the post below the line before an exchange with a senior colleague who accused me of asking us to abandon General Relativity (GR). Anyone who read the last post knows that this is the opposite of true. So how does this happen?

Much of the field is mired in bad ideas that seemed like good ideas in the 1980s. There has been some progress, but the idea that MOND is an abandonment of GR I recognize as a misconception from that time. It arose because the initial MOND hypothesis suggested modifying the law of inertia without showing a clear path to how this might be consistent with GR. GR was built on the Equivalence Principle (EP), the equivalence¹ of gravitational charge with inertial mass. The original MOND hypothesis directly contradicted that, so it was a fair concern in 1983. It was not by 1984². I was still an undergraduate then, so I don’t know the sociology, but I get the impression that most of the community wrote MOND off at this point and never gave it further thought.

I guess this is why I still encounter people with this attitude, that someone is trying to rob them of GR. It feels like we’re always starting at square one, like there has been zero progress in forty years. I hope it isn’t that bad, but I admit my patience is wearing thin.

I’m trying to help you. Don’t waste your entire career chasing phantoms.

What MOND does ask us to abandon is the Strong Equivalence Principle. Not the Weak EP, nor even the Einstein EP. Just the Strong EP. That’s a much more limited ask than abandoning all of GR. Indeed, all flavors of EP are subject to experimental test. The Weak EP has been repeatedly validated, but there is nothing about MOND that implies platinum would fall differently from titanium. Experimental tests of the Strong EP are less favorable.

I understand that MOND seems impossible. It also keeps having its predictions come true. This combination is what makes it important. The history of science is chock full of ideas that were initially rejected as impossible or absurd, going all the way back to heliocentrism. The greater the cognitive dissonance, the more important the result.


Continuing the previous discussion of UT, where do we go from here? If we accept that maybe we have all these problems in cosmology because we’re piling on auxiliary hypotheses to continue to be able to approximate UT with FLRW, what now?

I don’t know.

It’s hard to accept that we don’t understand something we thought we understood. Scientists hate revisiting issues that seem settled. Feels like a waste of time. It also feels like a waste of time continuing to add epicycles to a zombie theory, be it LCDM or MOND or the phoenix universe or tired light or whatever fantasy reality you favor. So, painful as it may be, one has to find a little humility to step back and take account of what we know empirically, independent of the interpretive veneer of theory.

As I’ve said before, I think we do know that the universe is expanding and passed through an early hot phase that bequeathed us the primordial abundances of the light elements (BBN) and the relic radiation field that we observe as the cosmic microwave background (CMB). There’s a lot more to it than that, and I’m not going to attempt to recite it all here.

Still, to give one pertinent example, BBN only works if the expansion rate is as expected during the epoch of radiation domination. So whatever is going on has to converge to that early on. This is hardly surprising for UT since it was stipulated to contain GR in the relevant limit, but we don’t actually know how it does so until we work out what UT is – a tall order that we can’t expect to accomplish overnight, or even over the course of many decades without a critical mass of scientists thinking about it (and not being vilified by other scientists for doing so).

Another example is that the cosmological principle – that the universe is homogeneous and isotropic – is observed to be true in the CMB. The temperature is the same all over the sky to one part in 100,000. That’s isotropy. The temperature is tightly coupled to the density, so if the temperature is the same everywhere, so is the density. That’s homogeneity. So both of the assumptions made by the cosmological principle are corroborated by observations of the CMB.

The cosmological principle is extremely useful for solving the equations of GR as applied to the whole universe. If the universe has a uniform density on average, then the solution is straightforward (though it is rather tedious to work through to the Friedmann equation). If the universe is not homogeneous and isotropic, then it becomes a nightmare to solve the equations. One needs to know where everything was for all of time.

Starting from the uniform condition of the CMB, it is straightforward to show that the assumption of homogeneity and isotropy should persist on large scales up to the present day. “Small” things like galaxies go nonlinear and collapse, but huge volumes containing billions of galaxies should remain in the linear regime and these small-scale variations average out. One cubic Gigaparsec will have the same average density as the next as the next, so the cosmological principle continues to hold today.

Anyone spot the rub? I said homogeneity and isotropy should persist. This statement assumes GR. Perhaps it doesn’t hold in UT?

This aspect of cosmology is so deeply embedded in everything that we do in the field that it was only recently that I realized it might not hold absolutely – and I’ve been actively contemplating such a possibility for a long time. Shouldn’t have taken me so long. Felten (1984) realized right away that a MONDian universe would depart from isotropy by late times. I read that paper long ago but didn’t grasp the significance of that statement. I did absorb that in the absence of a cosmological constant (which no one believed in at the time), the universe would inevitably recollapse, regardless of what the density was. This seems like an elegant solution to the flatness/coincidence problem that obsessed cosmologists at the time. There is no special value of the mass density that provides an over/under line demarcating eternal expansion from eventual recollapse, so there is no coincidence problem. All naive MOND cosmologies share the same ultimate fate, so it doesn’t matter what we observe for the mass density.

MOND departs from isotropy for the same reason it forms structure fast: it is inherently non-linear. As well as predicting that big galaxies would form by z=10, Sanders (1998) correctly anticipated the size of the largest structures collapsing today (things like the local supercluster Laniakea) and the scale of homogeneity (a few hundred Mpc if there is a cosmological constant). Pretty much everyone who looked into it came to similar conclusions.

But MOND and cosmology, as we know it in the absence of UT, are incompatible. Where LCDM encompasses both cosmology and the dynamics of bound systems (dark matter halos³), MOND addresses the dynamics of low acceleration systems (the most common examples being individual galaxies) but says nothing about cosmology. So how do we proceed?

For starters, we have to admit our ignorance. From there, one has to assume some expanding background – that much is well established – and ask what happens to particles responding to a MONDian force-law in this background, starting from the very nearly uniform initial condition indicated by the CMB. From that simple starting point, it turns out one can get a long way without knowing the details of the cosmic expansion history or the metric that so obsess cosmologists. These are interesting things, to be sure, but they are aspects of UT we don’t know and can manage without to some finite extent.

For one, the thermal history of the universe is pretty much the same with or without dark matter, with or without a cosmological constant. Without dark matter, structure can’t get going until after thermal decoupling (when the matter is free to diverge thermally from the temperature of the background radiation). After that happens, around z = 200, the baryons suddenly find themselves in the low acceleration regime, newly free to respond to the nonlinear force of MOND, and structure starts forming fast, with the consequences previously elaborated.

But what about the expansion history? The geometry? The big questions of cosmology?

Again, I don’t know. MOND is a dynamical theory that extends Newton. It doesn’t address these questions. Hence the need for UT.

I’ve encountered people who refuse to acknowledge⁴ that MOND gets predictions like z=10 galaxies right without a proper theory for cosmology. That attitude puts the cart before the horse. One doesn’t look for UT unless well motivated. That one is able to correctly predict 25 years in advance something that comes as a huge surprise to cosmologists today is the motivation. Indeed, the degree of surprise and the longevity of the prediction amplify the motivation: if this doesn’t get your attention, what possibly could?

There is no guarantee that our first attempt at UT (or our second or third or fourth) will work out. It is possible that in the search for UT, one comes up with a theory that fails to do what was successfully predicted by the more primitive theory. That just lets you know you’ve taken a wrong turn. It does not mean that a correct UT doesn’t exist, or that the initial prediction was some impossible fluke.

One candidate theory for UT is bimetric MOND. This appears to justify the assumptions made by Sanders’s early work, and provide a basis for a relativistic theory that leads to rapid structure formation. Whether it can also fit the acoustic power spectrum of the CMB as well as LCDM and AeST has yet to be seen. These things take time and effort. What they really need is a critical mass of people working on the problem – a community that enjoys the support of other scientists and funding institutions like NSF. Until we have that⁵, progress will remain grudgingly slow.


¹The equivalence of gravitational charge and inertial mass means that the m in F = GMm/d² is identically the same as the m in F = ma. Modified gravity changes the former; modified inertia the latter.

²Bekenstein & Milgrom (1984) showed how a modification of Newtonian gravity could avoid the non-conservation issues suffered by the original hypothesis of modified inertia. They also outlined a path towards a generally covariant theory that Bekenstein pursued for the rest of his life. That he never managed to obtain a completely satisfactory version is often cited as evidence that it can’t be done, since he was widely acknowledged as one of the smartest people in the field. One wonders why he persisted if, as these detractors would have us believe, the smart thing to do was not even try.

³The data for galaxies do not look like the dark matter halos predicted by LCDM.

⁴I have entirely lost patience with this attitude. If a phenomenon is correctly predicted in advance in the literature, we are obliged as scientists to take it seriously+. Pretending that it is not meaningful in the absence of UT is just an avoidance strategy: an excuse to ignore inconvenient facts.

+I’ve heard eminent scientists describe MOND’s predictive ability as “magic.” This also seems like an avoidance strategy. I, for one, do not believe in magic. That it works as well as it does – that it works at all – must be telling us something about the natural world, not the supernatural.

⁵There does exist a large and active community of astroparticle physicists trying to come up with theories for what the dark matter could be. That’s good: that’s what needs to happen, and we should exhaust all possibilities. We should do the same for new dynamical theories.