I see that I’ve been posting once a month so far in 2026. I’ve lots to say but no time to say it. Some of it good, some of it bad, maybe sometime I’ll get around to it. No guarantees. On the good side, I’ve been working on a big project or two; may have something to say about those soon. I’ve also been meaning to write about the Planet 9 anomaly for months stretching into years now. Fascinating stuff related to MOND but not something I’ve worked on myself. On the bad side, I’ve been obliged to waste yet more time on my university administration’s insistence on merging our department into physics based on a snap decision made by a disinterested leader who employed all the forethought typically reserved for bombing a random country in the Middle East.
So I have had no time for novel posts lately, and today is no different. However, I thought readers of this blog would appreciate the post Paradigm Shifts in Modern Astrophysics: Applying Thomas Kuhn’s The Structure of Scientific Revolutions to Dark Matter at Heritage Diner that was pointed out to me by Moti Milgrom. Since I wouldn’t have seen it had he not mentioned it, perhaps that’s the case for you as well. I’m not gonna re-post it verbatim – you can read it there yourself – but I am going to offer a running commentary with a few observations, both personal and historical. So bring it up in a separate browser window and let’s read along…
This post riffs off of Kuhn’s The Structure of Scientific Revolutions as it pertains to dark matter and MOND. If you’re not familiar with it, Kuhn’s work on the philosophy of science is foundational to the way in which a lot of physical scientists approach their field (whether they realize it or not). Philosophers of science have done a lot more since then, but I’m not going to attempt to go there. I will look back to Popper* to note that I’ve heard Kuhn depicted as being some sort of antithesis to Popper. I don’t see it that way. To be pithy, Popper tells us how science should be done while Kuhn tells us how it is done. Who could have imagined that a human endeavor would be messy in practice and not always live up to its ideal?
I’m not sure how to do this; I guess I’ll excerpt relevant quotes and riff off those. The basic thesis is that dark matter is on the brink of a Kuhnian paradigm shift.
We are living through exactly that moment in modern astrophysics.
I certainly hope so! This moment in the history of science is taking a long damn time. A century ago, we went from “classical physics explains everything” to “quantum mechanics, WTF?” in the space of about a decade. I’ve been working on matters related to MOND for over thirty years now, dark matter longer than that, and of course Milgrom started more than a decade before I did.
The essay discusses the “cartography of collapse,” which includes crisis and revolution:
The third stage is crisis — triggered when anomalies accumulate beyond the paradigm’s absorptive capacity. And the fourth is revolution, in which a new framework displaces the old not through incremental persuasion but through a gestalt shift, what Kuhn famously described as seeing the same duck-rabbit drawing and suddenly recognizing a rabbit where you had always seen a duck.
This resonated with me because I had exactly this experience. I started my career as much a believer in dark matter as anyone. I was barely aware that MOND existed (this seems to remain a common condition). But it reared its ugly head in my own data for low surface brightness galaxies. Try as I might – and I tried mighty hard, for a long time – I could not reconcile how the shapes of rotation curves depended on surface brightness as they should according to Newton while simultaneously lying exactly on the Tully-Fisher relation without any hint of dependence on surface brightness+. I could explain one or another, but not both simultaneously – at least, not without engaging in some form of tautology that made it so. I came up with a lot of those, and that has been a full-time occupation for many theorists ever since.
For me, this gradually became a genuine crisis. I pounded my head against the wall for months. Then, as I was wrestling with this problem, I happened to attend a talk by Milgrom. I almost didn’t go. I remember thinking “modified gravity? Who wants to hear about that?” But I did, and in a few short lines on the board, Milgrom derived from MOND exactly the result I found so confusing in terms of dark matter. This chance meeting in Middle Earth (Cambridge, UK) changed how I saw the universe. The change wasn’t immediate – it had to ferment a while – but ultimately I found myself asking myself over and over how this stupid theory could have its predictions come true when there was so much evidence for dark matter. Finally I realized that the evidence for dark matter assumes that gravity is normal; really it was just evidence of a discrepancy, and it could be that the assumption was at fault. That realization was sudden: where I’d always seen a duck, suddenly I could also see a rabbit.
Most scientists have not had this experience. What constitutes a crisis serious enough to contemplate a paradigm change is a highly personal matter of judgement. It happened in my data, so I took it seriously, but others didn’t care. So I made predictions for their data. Some of those came true, but they rejected the evidence of their own data. It just could not be so! At what point does a mere problem amount to a true anomaly?
Part of the sociological issue is that the dark matter paradigm has been in a constant state of crisis since its inception. The reasons vary over time. Sometimes valid solutions have been found to the crisis du jour, other times we’ve chosen to just live with it. It is much easier to live with a bad solution than to rethink one’s entire world view.
The problem with being in a constant state of crisis is that it seems like nothing can ever be a genuine crisis. Every foundational change is just another new normal. We complain, say it can’t be so, argue, offer bad ideas, reject them, get used to them, then eventually accept that one of them maybe isn’t so bad, so that must be what is going on. After a few years It is Known and people convince themselves that we expected just that all along.
It takes a lot of evidentiary weight for a paradigm to change, and it takes a lot of time for that to accumulate. But, as Kuhn recognized, mere facts are not enough. Humans and their attitudes matter. As Feyerabend noted,
The normal component [i.e. the accepted paradigm and its adherents] is large and well entrenched. Hence, a change of the normal component is very noticeable. So is the resistance of the normal component to change. This resistance becomes especially strong and noticeable in periods where a change seems to be imminent.
The post correctly points out that dark matter itself was an anomaly going back to Zwicky in 1933. This is often depicted^ as the first detection of dark matter, but it was also noted by Oort in 1932. Zwicky was aware of Oort’s work and cited him, but they’re very different results. Oort was worried about a factor of ~2 discrepancy in stellar dynamics in our local chunk of the Milky Way; Zwicky discovered a discrepancy of a factor of ~1000 in the Coma cluster of galaxies. These both imply the need for unseen mass, but the results are not at all the same. In retrospect, Oort’s discrepancy is a subtle detection of a flat rotation curve while Zwicky’s discrepancy was (at least) two distinct discrepancies: what we now consider the usual cosmic dark matter, but also missing baryons: most of the normal matter in clusters is in the hot, diffuse intracluster medium, not in the stars in the galaxies that Zwicky could see and account for. The modern discrepancy is only a factor of ~6, which is rather less than 1,000. (The distance scale also played a role in exaggerating Zwicky’s result.)
This all seemed crazy in the 1930s, even in the immediate aftermath of the quantum revolution. Consequently, Zwicky’s work was mostly ignored$. The subject of dark matter didn’t really take off until the 1970s. Considerable credit goes (rightly) to Vera Rubin, though many others made essential contributions – just on the subject of rotation curves, Albert Bosma, Mort Roberts, and Seth Shostak all played important roles, the relative importance of which depends on who you ask.
An important aspect of scientific revolutions is persistence. Vera was persistent. She was fond of relating the story of showing her first (1970) flat rotation curve of Andromeda to Alan Sandage, only to have him dismiss it as “the effect of looking at a bright galaxy.” What the heck did that mean? Nothing, of course – it is the sort of stupid thing that smart people say when confronted with the inconceivable. So Vera persisted, and by the end of the decade had shown that flat rotation curves were the rule, not some strange exception. They became accepted as a de facto law of nature, and the dark matter interpretation was solidly in place by 1980.
The scientific community absorbed this anomaly not by questioning Newtonian gravity or Einstein’s general relativity, but by proposing an invisible scaffolding — a halo of non-luminous, non-interacting matter surrounding every galaxy. Dark matter became not a crisis but a patch.
Indeed, this seemed the most appropriate (scientifically conservative) course of action at the time, as summarized in this exchange (also from the early 1980s):
To emphasize the essence of what is said here:
Tohline: I might be so bold as to suggest that the validity of Newton’s law should now be seriously questioned.
Rubin: The point you raise is worth keeping in mind although I believe most of us would rather alter Newtonian gravitational theory only as a last resort.
This was a very reasonable attitude, at the time. But I’ve heard the phrase “only as a last resort” many times now over the course of many years from many different scientists. At what point have we reached the last resort? In the case of dark matter, once we’ve convinced ourselves that invisible mass has to exist, how can we possibly disabuse ourselves of that notion, should it happen to be wrong?
In Kuhnian terms the last resort is reached when the weight of anomalies in the standard paradigm becomes too great to sustain. But that point is never reached for many die-hard adherents. Whatever the right answer about dark matter turns out to be, I’m sure many brilliant people will go to their graves in denial. Hence the more cynical phrase
Science progresses one funeral at a time.%
But does it? What if the adherents of an ingrained but incorrect paradigm breed faster than they go away? I’ve seen True Believers train graduate students who’ve gone on to train students of their own. Each generation seems to accept without serious examination the inadequate explanations for the anomalies made by their antecedents, so the weight of the anomalies doesn’t accumulate; instead, each one gets swept separately under the proverbial rug and forgotten. Forgetting is important: when new anomalies come to light, hands are waved and new explanations are promulgated; no one checks if the new explanations contradict the previous generation of explanations. What passed before is a solved problem, and we need never speak of it again.
This is not a recipe for a scientific revolution, but for a thousand years of dark epicycles.
By the late 1980s and early 1990s, dark matter had been formally incorporated into the reigning cosmological framework. Lambda-CDM — where Lambda refers to the cosmological constant (a proxy for dark energy) and CDM stands for Cold Dark Matter — became the standard model of cosmology.
The essence of this statement is correct but some of the details are not. Dark matter was widely accepted by 1980. That’s still a little before my time, but my impression is that the magnitude of the discrepancy was at first a factor of two, so it could simply have been normal baryons that were hard to see. However, the discrepancy rapidly snowballed to an order of magnitude, so we needed something non-baryonic. This was happening simultaneously with talk of supersymmetry and grand unified theories in particle physics that could readily provide new particles to be candidates for the dark matter, leading to the shotgun marriage of particle physics and cosmology, two communities that had had little to do with each other before then, and which still make an odd couple. Cosmology as traditionally practiced by astronomers needed dark matter but didn’t much care what it was; particle physics was all about the possibility of new particles but didn’t care about the details of the astronomical evidence.
To rephrase the above quote, I think it is fair to say that “by the late 1980s and early 1990s, cold dark matter had been formally incorporated into the reigning cosmological framework.” But that framework was not yet LCDM, it was Ωm = 1 SCDM. The Lambda only came to prominence by the end of the 1990s, as I’ve related elsewhere. This process is depicted by many scientists as a revolution in itself, and in many regards it was. The cosmological constant had been very far out of favor; rehabilitating it was a grueling experience and no trivial matter. But it wasn’t really a scientific revolution in the sense that Kuhn meant: our picture didn’t fundamentally change, we just learned to accept a parameter& that was already there but that we didn’t like.
The post goes on to note the absence of dark matter detections:
This silence is itself an anomaly… as the silence deepens, the null result itself becomes harder to dismiss.
This is correct, and yet… Physicists have built many experiments that have achieved extraordinary sensitivities. If cold dark matter was composed of WIMPs as originally hypothesized, we would have detected them long ago. Initially, the reaction was to modify WIMPs. Did we say the cross-section would be 10^-39 cm^2? We meant 10^-44 cm^2. When that was excluded, we slid the cross section still lower, but people also started giving themselves permission to think the unthinkable. By unthinkable I mean a particle that can’t be detected, not modified gravity. That’s more unthinkable. So the anomaly isn’t dismissed, but it is treated with less gravity than it should be, and certainly with less import than a positive detection would have been granted. Did we say WIMPs? We didn’t mean just WIMPs. It could be anything. (They damn well meant WIMPs and only WIMPs#. Anyone who tells you otherwise is gaslighting*% you, and probably themselves.)
The post goes on to talk about MOND. It gives me too much credit for the gravitational lensing work. This was done by Tobias Mistele, and our work is based on that of Brouwer et al. But it is correct to note that these data are a problem for the dark matter paradigm. Rotation curves remain flat beyond where dark matter halos should end. If correct, this is a genuine anomaly. Perhaps in some distant future it will be recognized** as such in retrospect; at present it seems mostly to be ignored.
It goes on to talk about the JWST observations. Yeah, that part is correct. The community seems to be in the usual process of gaslighting itself into denial of the anomaly. For the first two years after JWST started returning images of the deep universe, people were aghast. How can this be so? It was all anyone could talk about. But then the unexpected became the new normal. Hands were waved, star formation was accepted to be absurdly efficient, and people accepted the impossible. I no longer hear the talk of how problematic the JWST observations are; this chatter simply stopped.
Anomalies don’t weigh a paradigm down if we don’t accept that they’re anomalies. But living through the revolution, it’s hard to see a positive outcome while it is still ongoing. For it is certainly true that
What waits on the other side of the dark matter revolution — if that is what is coming — we cannot yet know.
The future is the unknown territory. We don’t know, and can never know, if dark matter doesn’t exist – it is impossible to prove the negative. But we do know MOND works much better than it should in a universe made of dark matter. That demands a scientific explanation that is still wanting. But MOND by itself is not a complete answer, so we are like the parable of the blind men and the elephant, each sensing a different part of reality but as yet unable to see the whole.
Still, there is reason for optimism. The article closes by noting that
Kuhn’s deepest insight was not that science changes. It is that the change, when it comes, is never merely technical. It is a reorganization of the world itself — the universe seen suddenly whole in a configuration it has always had, but that we had simply lacked the paradigm to perceive.
Not knowing how things ultimately work out is good, actually. One way or the other, there is still fundamental science to be done. We have not reached the stage of looking for our discoveries in the sixth place of decimals.
*Trivia I just learned looking at Popper’s wikipedia page: he was spending his last days in London around the same time I was a postdoc in Cambridge just starting to struggle with the scientific and philosophical implications of the dark matter-MOND miasma.
Unrelated trivia: I was at a workshop in Jerusalem early in the century but missed the opportunity to meet Jacob Bekenstein because I was too shy to bother the great man.
+If you do not find this confusing, you are not thinking clearly.
^A nice, brief summary of this early history is related by Einasto. This is the first place I’ve seen the citation to Opik (1915) written out. I’ve only heard it mentioned verbally before, so I’ll have to try looking that up later.
The full story is way more complicated than this sounds, and still gets debated off and on. The amplitude of the Oort discrepancy is much smaller today. Locally, the 3D density of mass seems to be accounted for by known stars, gas, and stellar remnants (which were still a new thing in the 1930s). So this Oort limit shows no discrepancy. There remains a modest discrepancy in the 2D dynamical surface density. It appears to me to boil down to the vertical restoring force having a (sometimes ignored) term that depends on the gradient of the rotation curve. Were that falling in a normal Newtonian way, there would be no discrepancy. But it isn’t; this deviation from Newton in the radial direction leads to the Oort discrepancy in the vertical direction. Instead of being as negative as Newton predicts, dV/dR is close to zero, hence my description of this as an indirect detection of a[n almost] flat rotation curve. (dV/dR = -1.7 km/s/kpc, so not exactly zero, but a lot closer to zero than Newton without dark matter would have it be.) The vertical discrepancy is nevertheless much reduced, now being well below a factor of two.
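To spell out the term in question (this is just the standard Poisson-equation bookkeeping in my notation, not a new result): in cylindrical coordinates, 4πGρ = ∂²Φ/∂z² + (1/R)∂V²/∂R, and the second term equals (2V/R)(dV/dR). The surface density inferred from vertical motions therefore carries a contribution from the slope of the rotation curve. For a flat rotation curve that contribution vanishes, so all of the measured vertical restoring force has to be attributed to local mass; had V(R) been falling in the Newtonian way, the negative dV/dR term would have absorbed the difference and the visible matter would have sufficed.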
$To his apparently great embitterment. He had some choice things to say about astronomers of his time. I am inclined to suspect that those who praise Zwicky the loudest today would have been among those he had reason to complain about had they been contemporaries.
%This is attributed to Planck, but he had a lot more nuanced things to say about it in his Nobel Prize lecture.
&Einstein disavowed the cosmological constant as his “greatest blunder,” so one argument against it was (for a long time) that it should never have been a part of the theory of General Relativity in the first place. I wonder how things might have gone had that been the case – that he had never introduced Lambda. Perhaps then the data that led to us accepting Lambda would have required a genuine revolution, but it isn’t obvious that we would have accepted it (we might still be debating it), nor is it apparent that LCDM is what comes out of such a revolution. But we don’t get to do that experiment: the Great Man had suggested Lambda, so it was OK to bring it back: we weren’t wrecking his theory by introducing a crazy new entity, we were just admitting an unlikely (antigravity-like) component thereof.
#Or axions! Or warm or self-interacting dark matter. Or macros nee strange nuggets! Or or or… Sure, there have been lots of ideas for what the dark matter could be. But when we say that “by the late 1980s and early 1990s, cold dark matter had been formally incorporated into the reigning cosmological framework” what the vast majority of scientists working on the topic (including myself) meant was that CDM == WIMPs. We were aggressively derisive of other ideas, and these are only dredged up again now because of the experimental non-detection of WIMPs. WIMPs are still a better dark matter candidate than the others for the same reasons that we were derisive of the others back in the day. We haven’t been looking as hard for the others, so comparable experimental limits do not yet exist. To quote myself,
The concept of dark matter is not falsifiable. If we exclude one candidate, we are free to make up another one. After WIMPs, the next obvious candidate is axions. Should those be falsified, we invent something else. (Particle physicists love to do this. The literature is littered with half-baked dark matter candidates invented for dubious reasons, often to explain phenomena with obvious astrophysical causes. The ludicrous uproar over the ATIC and PAMELA cosmic ray experiments is a good example.)
McGaugh (2008)
*%An easy way to deflate such gaslighting is to ask why so many experiments have been built to search for WIMPs but not all these other allegedly great dark matter candidates. After a pause and dismayed stare, you’ll probably get an answer about “looking under the lamp post” because that’s where it is possible to make detections. That’s sorta true, but it isn’t the real reason. The real reason is that we all drank the Kool-Aid of the WIMP miracle, so genuinely believed that the dark matter had to be WIMPs, not merely that they were a convenient experimental target. (I did not chug the Kool-Aid as hard as the people who based entire careers on building WIMP detection experiments, but I did buy into the idea to the exclusion of other possibilities for dark matter – as did most everyone else.)
**In retrospect, Galileo’s observations of the angular size and phases of Venus were utterly fatal to the geocentric paradigm. That’s easy to say now; at the time it was just another piece of evidence.
The stability of spiral galaxies was a foundational motivation to invoke dark matter: a thin disk of self-gravitating stars is unstable unless embedded in a dark matter halo. Modified dynamics can also stabilize galactic disks. A related test is provided by how thin such galaxies can be.
Thin galaxies exist
Spiral galaxies seen edge-on are thin. They have a typical thickness – their short-to-long axis ratio – of q ≈ 0.2. Sometimes they’re thicker, sometimes they’re thinner, but this is often what we assume when building mass models of the stellar disk of galaxies that are not seen exactly* edge-on. One can employ more elaborate estimators, but the results are not particularly sensitive to the exact thickness so long as it isn’t the limit of either razor thin (q = 0) or a spherical cow (q = 1).
Sometimes galaxies are very thin. Behold the “superthin” galaxy UGC 7321:
It also looks very thin in the infrared, which is the better tracer of stellar mass:
Fig. 1 from Matthews et al. (1999): H-band (1.6 micron) image of UGC 7321. Matthews (2000) finds a near-IR axis ratio of 14:1. That’s super thin (q = 0.07)!
UGC 7321 is very thin, would be low surface brightness if seen face-on (Matthews estimates a central B-band surface brightness of 23.4 mag arcsec^-2), has no bulge component thickening the central region, and contains roughly as much mass in gas as stars. All of these properties dispose a disk to be fragile (to perturbations like mergers and subhalo crossings) and unstable, yet there it is. There are enough similar examples to build a flat galaxy catalog, so somehow the universe has figured out a way for galaxy disks to remain thin and dynamically cold# for the better part of a Hubble time.
We see spiral galaxies at various inclinations to our line of sight. Some will appear face on, others edge-on, and everything in between. If we observe enough of them, we can work out what the intrinsic distribution is based on the projected version we see.
First, some definitions. A 3D object has three principal axes of lengths a, b, and c. By convention, a is the longest and c the shortest. An oblate model imagines a galaxy like a frisbee: it is perfectly round seen face-on (a = b); seen edge-on q = c/a. More generally, an object can be triaxial, with a ≠ b ≠ c. In this case, a galaxy would not appear perfectly round even when seen perfectly face-on^ because it is intrinsically oval (with similar axis lengths a ≈ b but not exactly equal). I expect this is fairly common among dwarf Irregular galaxies.
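To build some intuition for what comes next – inferring intrinsic shapes from projected ones – here is a toy Monte Carlo for the oblate case. This is my own illustrative sketch, not the machinery of the paper: an oblate disk with intrinsic ratio q seen at inclination i projects to an apparent ratio of sqrt(cos²i + q² sin²i), and for random orientations cos(i) is uniformly distributed.

```python
# Toy forward model: project a narrow intrinsic thickness distribution
# through random viewing angles (oblate case only; illustrative, not the
# full inference of the paper, which also considers triaxial models).
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Assumed narrow intrinsic distribution of edge-on axis ratios:
q_int = np.clip(rng.normal(0.2, 0.03, n), 0.05, 0.95)
# Isotropic orientations: cos(i) is uniformly distributed.
cos_i = rng.uniform(0.0, 1.0, n)
# Projected axis ratio of an oblate spheroid seen at inclination i:
q_obs = np.sqrt(cos_i**2 + q_int**2 * (1.0 - cos_i**2))

print("intrinsic q, 10th-90th percentile:", np.percentile(q_int, [10, 90]).round(2))
print("observed  q, 10th-90th percentile:", np.percentile(q_obs, [10, 90]).round(2))
print("fraction observed with q < 0.25:  ", round((q_obs < 0.25).mean(), 2))
```

A narrow intrinsic distribution centered near q ≈ 0.2 produces apparent ratios spread over nearly the whole range from 0.2 to 1, which is why a fairly flat observed distribution can hide a much narrower intrinsic one.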
The observed and intrinsic distribution of disk thicknesses
Benevides et al. (2025) find that the distribution of observed axis ratios q is pretty flat. This is a consequence of most galaxies being seen at some intermediate viewing angle. One can posit an intrinsic distribution, model what one would see at a bunch of random viewing angles, and iterate to extract the true distribution in nature, which they do:
Figure 6 from Benevides et al. (2025): Comparison between the observed (projected) distribution of axis ratios and the inferred intrinsic 3D axis ratios for a subsample of dwarfs in the GAMA survey. The observed shapes are shown with the solid black line and are used to derive intrinsic distributions for an oblate model (left panel) and a triaxial model (right panel); solid colored lines in each panel correspond to the values obtained from the 3D model after random projections. Note that a wide distribution of observed values is generated by a much narrower intrinsic distribution: reproducing the observed number of highly flattened galaxies requires a substantial fraction of galaxies to be intrinsically thin. The additional freedom of the triaxial model helps to obtain a better fit to the projected distribution, but the changes mostly affect large axis ratios and change little the frequency derived from highly elongated objects.
That we see some thin galaxies implies that they have to be common, as most of them are not seen edge-on. For dwarf$ galaxies of a specific mass range, which happens to include UGC 7321, Benevides et al. (2025) infer a lot% of thin galaxies, at least 40% with q < 0.2. They also infer a little bit of triaxiality (a ≈ b, but not exactly equal).
The existence and numbers of thin dwarfs seems to come as a surprise to many astronomers. This is perhaps driven in part by theoretical expectations for dwarf galaxies to be thick: a low surface brightness disk has little self-gravity to hold stars in a narrow plane. This expectation is so strong that Benevides et al. (2025) feel compelled to provide some observed examples, as if to say look, really:
Figure 8 – images of real galaxies from Benevides et al. (2025): Examples of highly elongated dwarf galaxies. They resemble thin edge-on disks and can be found even among the faintest dwarfs in our sample. Legends in each panel quote the stellar mass, the shape parameter, and the GAMA identifier. Objects are sorted left to right.
As an empiricist who has spent a career looking at low mass and low surface brightness galaxies, I am not surprised by this. These galaxies look normal. That’s what the universe of late-type dwarf$ galaxies looks like.
Edge-on galaxies in LCDM simulations
Thin galaxies do not occur naturally in the hierarchical mergers of LCDM (e.g., Haslbauer et al. 2022), where one would expect a steady bombardment by merging masses to mess things up. The picture above is not what galaxy-like objects in LCDM simulations look like. Scraping through a few simulations to find the flattest galaxies, Benevides et al. (2025) find only a handful of examples:
Figure 11 – images of simulated galaxies from Benevides et al. (2025): Edge-on projection of examples of the flattest galaxies in the TNG50 simulation, in different bins of stellar mass.
Note that only the four images on the left here occupy the same stellar mass range as the images of reality above. These are as close as it gets. Not terrible, but also not representative&. Galaxies this thin are a tiny fraction of the simulated population, whereas they are quite common in reality. Here the two are compared: three different surveys (solid lines) vs. three different simulations (dashed lines).
Figure 9 from Benevides et al. (2025): Fraction of galaxies that are derived to be intrinsically thin as a function of stellar mass. Thick solid lines correspond to our observational samples while dashed lines are used to display the results of cosmological simulations. Different colors highlight the specific survey or simulation name, as quoted in the legend. In all observational surveys, the frequency of thin galaxies peaks for dwarfs, almost doubling the frequency observed on the scale of MW-mass galaxies. Thin galaxies do not disappear at lower masses: we infer a significant fraction of low-mass dwarf galaxies to be intrinsically thin. This is in stark contrast with the negligible production of thin dwarf galaxies in all numerical simulations analyzed here.
Note that the thinnest galaxies in nature are dwarfs of mass comparable to UGC 7321. Thin disks aren’t just for bright spirals like the Milky Way with log(M*) > 10.5. They are also common*$ for dwarfs with log(M*) = 9 and even log(M*) = 8, which are often gas dominated. In contrast, the simulations produce almost no galaxies that are thin at these lower masses.
The simulations simply do not look like reality. Again. And again, etc., etc., ad nauseam. It’s almost as if the old adage applies: garbage in, garbage out. Maybe it’s not the resolution or the implementation of the simulations that’s the problem. One could get all that right, but it wouldn’t matter if the starting assumption of a universe dominated by cold dark matter was the input garbage.
Galaxy thickness in Newton and MOND
Thick disks are not merely a product of simulations, they are endemic to Newtonian dynamics. As stars orbit around and around a galaxy’s center, they also oscillate up and down, bobbing in and out of the plane. How far up they get depends on how fast they’re going (the dynamical temperature of the stellar population) and how strong the restoring force to the plane of the disk is.
In the traditional picture of a thin spiral galaxy embedded in a quasi-spherical dark matter halo, the restoring force is provided by the stars in the disk. The dark matter halo is there to boost the radial force to make the rotation curve flat, and to stabilize the disk, for which it needs to be approximately spherical. The dark matter halo does not contribute much to the vertical restoring force because it adds little mass near the disk plane. In order to do that, the halo would have to be very squashed (small q) like the disk, in which case we revive the stability problem the halo was put there to solve.
This is why we expect low surface brightness disks to be thick. Their stars are spread thin, the surface mass density is low, so the restoring force to the disk should be small. Disks as thin as UGC 7321 shouldn’t be possible unless they are extremely cold*# dynamically – a situation that is unlikely to persist in a cosmogony built by hierarchical merging. The simulations discussed above corroborate this expectation.
In MOND, there is no dark matter halo, but the modified force should boost the vertical restoring force as well as the radial force. One thus expects thinner disks in MOND than in Newton.
I pointed this out in McGaugh & de Blok (1998), along with pretty much everything else in the universe that people tell me I should consider without bothering to check whether I’ve already considered it. Here is the plot I published at the time:
Figure 9 of McGaugh & de Blok (1998): Thickness q = z0/h expected for disks of various central surface densities Σ0. Shown along the top axis is the equivalent B-band central surface brightness μ0 for a stellar mass-to-light ratio of 2. Parameters chosen for illustration are noted in the figure (a typical scale length h and two choices of central vertical velocity dispersion σz). Other plausible values give similar results. The solid lines are the Newtonian expectation and the dashed lines that of MOND. The Newtonian and MOND cases are similar at high surface densities but differ enormously at low surface densities. Newtonian disks become very thick at low surface brightness. In contrast, MOND disks can remain reasonably thin to low surface density.
There are many approximations that have to be made in constructing the figure above. I assumed disks were plane-parallel slabs of constant velocity dispersion, which they are not. But this suffices to illustrate the basic point, that disks should remain thinner&% in MOND than in Newton as surface density decreases: as one sinks further into the MOND regime, there is relatively more restoring force to keep disks thin. To duplicate this effect in Newton, one must invent two kinds of dark matter: a dissipational kind of dark matter that forms a dark matter disk in addition to the usual dissipationless cold dark matter that makes a quasi-spherical dark matter halo.
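For the numerically inclined, here is a minimal sketch of that sort of estimate. The assumptions are mine for illustration – an isothermal sheet with scale height σz²/(πGΣ), with the MOND case crudely approximated by boosting the restoring force by the same interpolation factor that boosts the radial force – so it is not the exact prescription behind the figure above, but it reproduces the qualitative behavior:

```python
# Minimal sketch of disk thickness vs. central surface density in Newton and MOND.
# Approximation (mine, for illustration): isothermal self-gravitating sheet with
# scale height z0 = sigma_z^2 / (pi G Sigma); in MOND the restoring force is
# crudely boosted by the "simple" interpolation factor nu, so z0 -> z0 / nu.
import numpy as np

G = 4.301e-3   # pc Msun^-1 (km/s)^2
a0 = 3.7e3     # MOND acceleration scale, (km/s)^2 per kpc (about 1.2e-10 m/s^2)

def nu_simple(g_newton):
    """'Simple' interpolation factor: true gravity = nu * Newtonian gravity."""
    y = g_newton / a0
    return 0.5 + np.sqrt(0.25 + 1.0 / y)

def thickness(sigma_z, surf_dens, h_kpc=3.0, mond=False):
    """Axis ratio q = z0/h for vertical dispersion sigma_z [km/s],
    surface density [Msun/pc^2], and disk scale length h [kpc]."""
    z0_pc = sigma_z**2 / (np.pi * G * surf_dens)        # Newtonian scale height [pc]
    if mond:
        g_newton = 2 * np.pi * G * surf_dens * 1e3      # ~vertical gravity, (km/s)^2/kpc
        z0_pc /= nu_simple(g_newton)                    # stronger restoring force -> thinner
    return z0_pc / (h_kpc * 1e3)

for sigma in (1000.0, 300.0, 100.0, 30.0, 10.0):        # central surface density, Msun/pc^2
    q_newton = thickness(20.0, sigma)                   # fixed 20 km/s vertical dispersion
    q_mond = thickness(20.0, sigma, mond=True)
    print(f"Sigma = {sigma:6.0f} Msun/pc^2:  q_Newton = {q_newton:.3f}   q_MOND = {q_mond:.3f}")
```

The absolute numbers depend on the assumed dispersion and scale length, but the relative behavior is the point: the Newtonian thickness grows as 1/Σ toward low surface density, while the MOND boost partially compensates, so MOND disks thicken much more gently.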
The idea of the plot above was to illustrate the trend of expected thickness for galaxies of different central surface brightness. One can also build a model to illustrate the expected thickness as a function of radius for a pair of galaxies, one high surface brightness (so it starts in the Newtonian regime at small radii) and one of low surface brightness (in the MOND regime everywhere). I have chosen numbers** resembling the Milky Way for the high surface brightness galaxy model, and scaled the velocity dispersion of the low surface brightness model so it has very nearly the same thickness in the Newtonian regime. In MOND, both disks remain thin as a function of radius (they flare a lot in Newton) and the lower surface brightness disk model is thinner thanks to the relatively stronger restoring force that follows from being deeper in the MOND regime.
The thickness of two model disks, one high surface brightness (solid lines) and the other low surface brightness (dashed lines), as a function of radius. The two are similar in Newton (black), but differ in MOND (blue). The restoring force to the disk is stronger in MOND, so there is less flaring with increasing radius. The low surface brightness galaxy is further in the MOND regime, leading naturally to a thinner disk.
These are not realistic disk models, but they again suffice to illustrate the point: thin disks occur naturally in MOND. Low surface brightness disks should be thick in LCDM (and in Newtonian dynamics in general), but can be as thin as UGC 7321 in MOND. I didn’t aim to make q ≈ 0.1 in the model low surface brightness disk; it just came out that way for numbers chosen to be reasonable representations of the genre.
What the distribution of thicknesses is depends on the accretion and heating history of each individual disk. I don’t claim to understand that. But the mere existence of dwarf galaxies with thin disks is a natural outcome in MOND that we once again struggle to comprehend in terms of dark matter.
*Seeing a galaxy highly inclined minimizes the inclination correction to the kinematic observations [Vrot = Vobs/sin(i)] but to build a mass model we also need to know the face-on surface density profile of the stars, the correction for which depends on 1/cos(i). So as a practical matter, the competition between sin(i) and cos(i) makes it difficult to analyze galaxies at either extreme.
#Dynamically cold means the random motions (quantified by the velocity dispersion of stars σ) are small compared to ordered rotation (V) in the disk, something like V/σ ≈ 10. As a disk heats (higher σ) it thickens, as some of that random motion goes in the vertical direction perpendicular to the disk. Mergers heat disks because they bring kinetic energy in from random directions. Even after an object is absorbed, the splash it made is preserved in the vertical distribution of the stars which, once displaced, never settle back into a thin disk. (Gas can settle through dissipation, but point masses like stars cannot.)
$In a field with abominable terminology, what is meant by a “dwarf” galaxy is one of the worst offenders. One of my first conference contributions thirty years ago griped about the [mis]use of this term, and matters have not improved. For this particular figure, Benevides et al. (2025) define it to mean galaxies with stellar masses in the range 9 < log(M*) < 9.5, which seems big to me, but at least it is below the mass of a typical L* spiral, which has log(M*) ~ 10.5. For comparison, see Fig. 6 of the review of Bullock & Boylan-Kolchin (2017), who define “bright dwarfs” to have 7 < log(M*) < 9, and go lower from there, but not higher into the regime that we’re calling dwarf right now. So what a dwarf galaxy is depends on context.
%Note that the intrinsic distribution peaks below q = 0.2, so arguably one should perhaps adopt as typical the mode of the distribution (q ≈ 0.17).
&Another way in which even the thin simulated objects are not representative of reality is that they are dynamically hot, as indicated by the κrot parameter printed with the image. This is the fraction of kinetic energy in rotation. One of the more favorable cases with κrot = 0.67 corresponds to V/σ = 2.5. That happens in reality, but higher values are common. Of course, thin disks and dynamical coldness go hand in hand. Since the simulations involve a lot of mergers, the fraction of kinetic energy in rotation is naturally small. So I’m not saying the simulations are wrong in what they predict given the input physics that they assume, but I am saying that this prediction does not match reality.
*$The fraction of thin galaxies observed by DESI is slightly higher than found in the other surveys. Having looked at all these data, I am inclined to suspect the culprit is image quality: that of DESI is better. Regardless of the culprit for this small discrepancy between surveys, thin disks are much more common in reality than in the current generation of simulations.
*#There seems to be a limit to how cold disks get, with a minimum velocity dispersion around ~7 km/s observed in face-on dwarfs when the appropriate number, according to Newton, would be more like 2 km/s, tops. I remember this number from observations in the ’80s and ’90s, along with lots of discussion then to the effect of how can it be so? but it is the new year and I’m feeling too lazy to hunt down all the citations so you get a meme instead.
&%In an absolute sense, all other things being equal, which they’re not, disks do become thicker to lower surface brightness in both Newton and MOND. There is less restoring force for less surface mass density. It is the relative decline in restoring force and consequent thickening of the disk that is much more precipitous in Newton.
**For the numerically curious, these models are exponential disks with surface density profiles Σ(R) = Σ0 e^(-R/Rd). Both models have a scale length Rd = 3 kpc. The HSB has Σ0 = 866 M☉ pc^-2; this is a good match to the Eilers et al. (2019) Milky Way disk; see McGaugh (2019). The LSB has Σ0 = 100 M☉ pc^-2, which corresponds roughly to what I consider the boundary of low surface brightness, a central B-band surface brightness of ~23 mag arcsec^-2. For the velocity dispersion profile I also assume an exponential with scale length 2Rd (that’s what’s supposed to happen). The central velocity dispersion of the HSB is 100 km/s (an educated guess that gets us in the right ballpark) and that of the LSB is 33 km/s – the mass is down by a factor of ~9 so the velocity dispersion should be lower by a factor of √9 = 3. (I let it be inexact so the solid and dashed Newtonian lines wouldn’t exactly overlap.)
These models are crude, being single-population (there can be multiple stellar populations each with their own velocity dispersion and vertical scale height) and lacking both a bulge and gas. The velocity dispersion profile sometimes falls with a scale length twice the disk scale length as expected, sometimes not. In the Milky Way, Rd ≈ 2.5 or 3 kpc, but the velocity dispersion falls off with a scale length that is not 5 or 6 kpc but rather 21 or 25 kpc. I have also seen the velocity dispersion profile flatten out rather than continue to fall with radius. That might itself be a hint of MOND, but there are lots of different aspects of the problem to consider.
The mass distribution of dark matter halos that we infer from observations tells us where the dark matter needs to be now. This differs from the mass distribution it had at the start, as it gets altered by the process of galaxy formation. It is the primordial distribution that dark matter-only simulations predict most robustly. We* reverse-engineer the collapse of the baryons that make up the visible Galaxy to infer the primordial distribution, which turns out to be… odd.
The Gaia rotation curve and the mass of the Milky Way
As we discussed a couple of years ago, Gaia DR3 data indicate a declining rotation curve for the Milky Way. This decline becomes steeper, nearly Keplerian, in the outskirts of the Milky Way (17 < R < 30 kpc). This may or may not be consistent with data further out, which get hard to interpret as the LMC (at 50 kpc) perturbs orbits and the observed motions may not correspond to orbits in dynamical equilibrium. So how much do the data inform us about the gravitational potential?
Milky Way rotation curve (various data) including Gaia DR3 (multiple analyses). Also shown is the RAR model (blue line) that was fit to the terminal velocities from 3 < R < 8.2 kpc (gray points) and predates other data illustrated here.
I am skeptical of the Keplerian portion of this result (as discussed at length at the time) because other galaxies don’t do that. However, I am a big fan of listening to the data, and the people actually doing the work. Taken at face value, the Gaia data show a Keplerian decline with a total mass around 2 x 10^11 M☉. If correct, this falsifies MOND.
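Just to put a scale on that number (with round, illustrative values rather than the published fit): a Keplerian curve means M = V²R/G, so a circular speed of roughly 170 km/s at R ≈ 25 kpc corresponds to M ≈ (170 km/s)² × 25 kpc / G ≈ 1.7 x 10^11 M☉, i.e., the quoted ballpark.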
How does dark matter fare? There is an implicit assumption made by many in the community that any failing of MOND is an automatic win for dark matter. However, it has been my experience that observations that are problematic for MOND are also problematic for dark matter. So let’s check.
Short answer: this is really weird in terms of dark matter. How weird? For starters, most recent non-Gaia dynamical analyses suggest a total mass closer to 10^12 M☉, a factor of five higher than the Gaia value. I’m old enough to remember when the accepted mass was 2 x 10^12 M☉, an order of magnitude higher. Yet even this larger mass is smaller than suggested by abundance matching recipes, which give more like 4 x 10^12 M☉. So somewhere in the range 2 – 40 x 10^11 M☉.
The Milky Way’s mass has been adjusted so often – have we finally hit it?
The guy was all over the road. I had to swerve a number of times before I hit him.
Boston Driver’s Handbook (1982 edition)&
If it sounds like we’re all over the map, that’s because we are. It is very hard to constrain the total mass of a dark matter halo. We can’t see it, nor tell where it ends. We infer, indirectly, that the edge is way out beyond the tracers we can see. Heck, even speaking of an “edge” is ill-defined. Theoretically, we expect it to taper off with the density of dark matter falling as ρ ~ r^-3, so there is no definitive edge. Somewhat arbitrarily,** we adopt the radius that encloses a density 200 times the average density of the universe as the “virial” radius. This is all completely notional, and it gets worse, as the process of forming a galaxy changes the initial mass distribution. What we observe today is the changed form, not the primordial initial condition for which the notional mass is defined.
Adiabatic compression during galaxy formation
To form a visible galaxy, baryons must dissipate and sink to the center of their parent dark matter halo. This process changes the mass distribution and alters the halo from its primordial state. In effect, the gravity of the sinking baryons drags some dark matter along# with them.
The change to the dark matter halo is often called adiabatic compression. The actual process need not be adiabatic, but that’s how we approximate it. We’ve tested this approximation with detailed numerical simulations, and it works pretty well, at least if you do it right (there are boring debates about technique). What happens makes sense intuitively: the response of the primordial halo to the infall of baryons is to become more dense at the center. While this makes sense physically, it is problematic for LCDM as it takes an NFW halo that is already too dense at the center to be consistent with data and makes it more dense. This has been known forever, so opposing this is one thing feedback is invoked to do, which it may or may not do, depending on how it really works. Even if feedback can really turn a compressed cusp into a core, it is widely expected to be important only in low mass galaxies where the gravitational potential well isn’t too deep. It isn’t supposed to be all that important in galaxies as massive as the Milky Way, though I’m sure that can change as needed.
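To give a feel for the bookkeeping, here is a minimal sketch of the classic angular-momentum-conserving prescription of Blumenthal et al. (1986). This is not the Sellwood & McGaugh (2005) procedure used below, which is more careful about the orbit structure, and every number here is illustrative rather than a fit to the Milky Way; it just shows how sinking baryons drag the halo inward:

```python
# Minimal sketch of Blumenthal-style adiabatic contraction (illustrative only).
# Assumption: circular orbits conserving angular momentum and no shell crossing,
# so the shell initially at r_i with enclosed mass M_i(r_i) moves to the r_f solving
#     r_f [ M_b(r_f) + (1 - f_b) M_i(r_i) ] = r_i M_i(r_i),
# where M_b is the final baryon (disk) profile and f_b the primordial baryon fraction.
import numpy as np
from scipy.optimize import brentq

def m_nfw(r, m200, c, r200):
    """Enclosed mass of an NFW halo."""
    x = r * c / r200
    return m200 * (np.log(1 + x) - x / (1 + x)) / (np.log(1 + c) - c / (1 + c))

def m_disk(r, m_d, r_d):
    """Enclosed mass of an exponential disk (treated spherically for simplicity)."""
    x = r / r_d
    return m_d * (1.0 - (1.0 + x) * np.exp(-x))

def contract(r_i, m200, c, r200, m_d, r_d, f_b=0.16):
    """Final radius of the dark matter shell that started at r_i."""
    m_i = m_nfw(r_i, m200, c, r200)
    m_dm = (1.0 - f_b) * m_i   # dark matter enclosed by that shell (no shell crossing)
    f = lambda r_f: r_f * (m_disk(r_f, m_d, r_d) + m_dm) - r_i * m_i
    return brentq(f, 1e-3 * r_i, 10 * r_i)   # shells can contract or expand a bit

# Illustrative, roughly Milky-Way-scale numbers (not a fit):
m200, c, r200 = 1e12, 10.0, 200.0   # Msun, -, kpc
m_d, r_d = 5e10, 3.0                # Msun, kpc
for r_i in (5.0, 10.0, 30.0, 100.0):
    r_f = contract(r_i, m200, c, r200, m_d, r_d)
    print(f"shell starting at {r_i:5.1f} kpc ends up at {r_f:6.2f} kpc")
```

The inner shells contract appreciably while the outer ones hardly change, which is the sense in which compression makes an already-cuspy halo denser at the center.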
There are a variety of challenges to implementing an accurate compression computation, so we usually don’t bother: the standard practice is to assume a halo model and fit it to the data. That will, at best, give a description of the current dark matter halo, not what it started as, which is our closest point of comparison with theory. To give an example of the effect, here is a Milky Way model I built a decade ago:
Figure 13 from McGaugh (2016): Milky Way rotation curve from the data of Luna et al. (2006, red points) and McClure-Griffiths & Dickey (2007, gray points) together with a bulgeless baryonic mass model (black line). The total rotation is approximately fit (blue line) with an adiabatically compressed NFW halo (solid green line) using the procedure implemented by Sellwood & McGaugh (2005). The primordial halo before compression is shown as the dashed line. The parameters of the primordial halo are a concentration c = 7 and a mass M200 = 6 x 10^11 M☉. Fitting NFW to the present halo instead gives c = 14, M200 = 4 x 10^11 M☉, so the difference is appreciable and depends on the quality and radial extent of the available data.
The change from the green dashed line to the solid green line is the difference compression makes. That’s what happens if a baryon distribution like that of the Milky Way settles in an NFW halo. The inferred mass M200 is lower and the concentration c higher than it originally was – and it is the original version that we should compare to the expectations of LCDM.
When I built this model, I considered several choices for the bulge/bar fraction: something reasonable, something probably too large, and something definitely too small (zero). The model above is the last case of zero bulge/bar. I show it because it is the only case for which the compression procedure worked. If there is a larger central concentration of baryons – i.e., a bulge and/or a bar – then the compression is greater. Too great, in fact: I could not obtain a fit (see also Binney & Piffl and this related discussion).
The calculation of the compression requires knowledge of the primordial halo parameters, which is what one is trying to obtain. So one has to guess an initial state, run the code, check how close it came, then iterate the initial guess. This is computationally expensive, so I was just eyeballing the fit above. Pengfei has done a lot of work to implement a method that iteratively computes the compression and rigorously fits it to data. So we decided to apply it to the newer Gaia DR3 data.
Fitting the Gaia rotation curve with adiabatically compressed halos
We need two inputs here: one, the rotation curve to fit, and two, the baryonic distribution of the Milky Way. The latter is hard to specify given our location within the Milky Way, so there are many different estimates. We tried a dozen.
Another challenge of doing this is deciding which rotation curve data to fit. We chose to focus on the rotation curve of Jiao et al. (2023) because they made estimates of the systematic as well as random errors. The statistics of Gaia are so good it is practically impossible to fit any equilibrium model to them. There are aspects of the data for which we have to consider non-equilibrium effects (spiral arms, the bar, “snails” from external perturbations) so the usual assumptions are at best an approximation, plus there can always be systematic errors. So the approach is to believe the data, but with the uncertainty estimate of Jiao et al. (2023) that includes systematics.
For a halo model, we started with the boilerplate LCDM NFW halo$. This doesn’t fit the data. Indeed, all attempts to fit NFW halos fail in similar ways for all of the different baryonic mass models we tried. The quasi-Keplerian part of the Gaia rotation curve simply cannot be fit: the NFW halo inevitably requires more mass further out.
Here are a few examples of the NFW fits:
Fig. A.3 from Li et al. (2025). Fits of Galactic circular velocities using the NFW model implementing adiabatic halo contraction using 3 baryonic models. [Another 9 appear in the paper.] Data points with errors are the rotation velocities from Jiao et al. (2023), while open triangles show the data from Eilers et al. (2019), which are not fitted. [The radius ranges from 5 to 30 kpc.] Blue, purple, green and black solid lines correspond to the contributions by the stellar disk, central bar, gas (and dust if any), and compressed dark matter halo, respectively. The total contributions are shown using red solid lines. Black dashed lines are the inferred primordial halos.
LCDM as represented by NFW suffers the same failure mode as seen in MOND (plot at top): both theories overshoot the Gaia rotation curve at R > 17 kpc. This is an example of how data that are problematic for MOND are also problematic for dark matter.
We do have more freedom in the case of dark matter. So we tried a different halo model, Einasto. (For this and many other halo models, see Pengfei’s epic compendium of dark matter halo fits.) Where NFW has two parameters, a concentration c and mass M200, Einasto has a third parameter that modulates the shape of the density profile%. For a very specific choice of this third parameter (α = 0.17), it looks basically the same as NFW. But if we let α be free, then we can obtain a fit. Of all the baryonic models, the RAR model + compressed Einasto combination fits best:
Fig. 1 from Li et al. (2025). Example of a circular velocity fit using the McGaugh19$$ model for baryonic mass distributions. The purple, blue, and green lines represent the contributions of the bar, disk, and gas components, respectively. The solid and dashed black lines show the current and primordial dark matter halos, respectively. The solid red line indicates the total velocity profile. The black points show the latest Gaia measurements (Jiao et al. 2023), and the gray upward triangles and squares show the terminal velocities from McClure-Griffiths & Dickey (2007, 2016) and Portail et al. (2017), respectively. The data marked with open symbols were not fit because they do not consider the systematic uncertainties.
So it is possible to obtain a fit considering adiabatic compression. But at what price? The parameters of the best-fit primordial Einasto halo shown above are c = 5.1, M200 = 1.2 x 10^11 M☉, and α = 2.75. That’s pretty far from the α = 0.17 expected in LCDM. The mass is lower than low. The concentration is also low. There are expectation values for all these quantities in LCDM, and all of them miss the mark.
Fig. 2 from Li et al. (2025). Halo masses and concentrations of the primordial Galactic halos derived from the Gaia circular velocity fits using 12 baryonic models. The red and blue stars with errors represent the halos with and without adiabatic contraction, respectively. The predicted halo mass-concentration relation within 1σ from simulations (Dutton & Macciò 2014) is shown as the declining band. The vertical band shows the expected range of the MW halo mass according to the abundance-matching relation (Moster et al. 2013). The upper and lower limits are set by the highest stellar mass model plus 1σ and the lowest stellar mass model minus 1σ, respectively.
The expectation for mass and concentration is shown as the bands above. If the primordial halo were anything like what it should be in LCDM, the halo parameters represented by the red stars should be where the bands intersect. They’re nowhere close. The same goes for the shape parameter. The halo should have a density profile like the blue band in the plot below; instead it is more like the red band.
Fig. 3 from Li et al. (2025). Structure of the inferred primordial and current Galactic halos, along with predictions for the cold and warm dark matter. The density profiles are scaled so that there is no need to assume or consider the masses or concentrations for these halos. The gray band indicates the range of the current halos derived from the Gaia velocity fits using the 12 baryonic models, and the red band shows their corresponding primordial halos within 1σ. The blue band presents the simulated halos with cold dark matter only (Dutton & Macciò 2014). The purple band shows the warm dark matter halos (normalized to match the primordial Galactic halo) with a core size spanning from 4.56 kpc (WDM5 in Macciò et al. 2012) to 7.0 kpc, corresponding to a particle mass of 0.05 keV and lower.
So the primordial halo of the Milky Way is pretty odd. From the perspective of LCDM, the mass is too low and the concentration is too low. The inner profile is too flat (a core rather than a cusp) and the outer profile is too steep. This outer steepness is a large part of why the mass comes out so low; there just isn’t a lot of halo out there. The characteristic density ρs is at least in the right ballpark, so aside from the inner slope, the outer slope, the mass, and the concentration, LCDM is doing great.
What if we ignore the naughty bits?
It is really hard for any halo model to fit the steep decline of the Gaia rotation curve at R > 17 kpc. Doing so is what makes the halo mass so small. I’m skeptical about this part of the data, so do things improve if we don’t sweat that part?
Ignoring the data at R > 17 kpc allows the mass to be larger, consistent with other dynamical determinations if not quite with abundance matching. However, the inner parts of the rotation curve still prefer a low density core. That is, something like the warm dark matter halo depicted as the purple band above rather than NFW with its dense central cusp. Or self-interacting dark matter. Or cold dark matter with just-so feedback. Or really anything that obfuscates the need to confront the dangerous question: why does MOND perform better?
*This post is based on the recently published paper by my former student Pengfei Li, who is now faculty at Nanjing University. They have a press release about it.
&A few months after reading this in the Boston Driver’s Handbook, this exact thing happened to me.
**This goes back to BBKS in 1986 when the bedrock assumption was that the universe had Ωm = 1, for which the virial radius was 188 times the critical density. 200 was close enough, and stuck, even though for LCDM the virial radius is more like an overdensity close to 100, which is even further out.
#This is one of many processes that occur in simulations, which are great for examining the statistics of simulated galaxy-like objects but completely useless for modeling individual galaxies in the real universe. There may be similar objects, but one can never say “this galaxy is represented by that simulated thing.” To model a real galaxy requires a customized approach.
$NFW halos consistently perform worse in fitting data than any other halo model, of which there are many. It has been falsified as a viable representation of reality so many times that I can’t recall them all, and yet they remain the go-to model. I think that’s partly thanks to their simplicity – it is mathematically straightforward to implement – and to the fact that it is what simulations predict: LCDM halos should look like NFW. People, including scientists, often struggle to differentiate simulation from reality, so we keep flogging the dead horse.
%The density profile of the NFW halo model asymptotes to power laws at both small and large radii: ρ → r^-1 as r → 0 and ρ → r^-3 as r → ∞. The third parameter of the Einasto profile allows a much wider range of shapes.
Einasto profiles. Einasto is observationally indistinguishable from NFW for α = 0.17, but allows many other shapes.
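For concreteness, here is a minimal sketch of the two functional forms being contrasted here, with an arbitrary scale radius and normalization; only the shapes matter for the comparison.

```python
import numpy as np

# NFW and Einasto density profiles (arbitrary scale radius and normalization).
def rho_nfw(r, r_s=20.0, rho_s=1.0):
    x = r / r_s
    return rho_s / (x * (1.0 + x)**2)

def rho_einasto(r, r_s=20.0, rho_s=1.0, alpha=0.17):
    x = r / r_s
    return rho_s * np.exp(-(2.0 / alpha) * (x**alpha - 1.0))

r = np.logspace(-1, 3, 200)  # radii in kpc
# logarithmic slope dln(rho)/dln(r), to show the asymptotic NFW behavior
slope = np.gradient(np.log(rho_nfw(r)), np.log(r))
print(round(slope[0], 2), round(slope[-1], 2))  # close to -1 inward and -3 outward
```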
$$The McGaugh19 model used here is the one with a reasonable bulge/bar. This dense component can be fit in this case because we start with a halo model with a core rather than a cusp (closer to α = 1 than to the α = 0.17 of NFW/LCDM).
This is a long post. It started focused on ultrafaint dwarfs, but can’t avoid more general issues. In order to diagnose non-equilibrium effects, we have to have some expectation for what equilibrium would be. The Tully-Fisher relation is a useful empirical touchstone for that. How the Tully-Fisher relation comes about is itself theory-dependent. These issues are intertwined, so in addition to discussing the ultrafaints, I also review some of the many predictions for Tully-Fisher, and how our theoretical expectation for it has evolved (or not) over time.
In the last post, we discussed how non-equilibrium dynamics might make a galaxy look like it had less dark matter than similar galaxies. That pendulum swings both ways: sometimes non-equilibrium effects might stir up the velocity dispersion above what it would nominally be. Some galaxies where this might be relevant are the so-called ultrafaint dwarfs (not to be confused with ultradiffuse galaxies, which are themselves often dwarfs). I’ve talked about these before, but more keep being discovered, so an update seems timely.
Galaxies and ultrafaint dwarfs
It’s a big universe, so there’s a lot of awkward terminology, and the definition of an ultrafaint dwarf is somewhat debatable. Most often I see them defined as having an absolute magnitude limit MV > -8, which corresponds to a luminosity less than 100,000 suns. I’ve also seen attempts at something more physical, like being a “fossil” whose star formation was entirely before cosmic reionization, which ended way back at z ~ 6 so all the stars would be at least*&^# 12.5 Gyr old. While such physics-based definitions are appealing, these are often tied up with theoretical projection: the UV photons that reionized the universe should have evaporated the gas in small dark matter halos, so these tiny galaxies can only be fossils from before that time. This thinking pervades much of the literature despite it being obviously wrong, as counterexamples! exist. For example, Leo P is practically an ultrafaint dwarf by luminosity, but has ample gas (so a larger baryonic mass) and is currently forming stars.
A luminosity-based definition is good enough for us here; I don’t really care exactly where we make the cut. Note that ultrafaint is an appropriate moniker: a luminosity of 10^5 L☉ is tiny by galaxy standards. This is a low-grade globular cluster, and some ultrafaints are only a few hundred solar luminosities, which is barely even# a star cluster. At this level, one has to worry about stochastic effects in stellar evolution. If there are only a handful of stars, the luminosity of the entire system changes markedly as a single star evolves up the red giant branch. Consequently, our mapping from observed quantities to stellar mass is extremely dodgy. For consistency, to compare with brighter dwarfs, I’ve adopted the same boilerplate M*/LV = 2 M☉/L☉. That makes for a fair comparison luminosity-to-luminosity, but the uncertainty in the actual stellar mass is ginormous.
It gets worse, as the ultrafaints that we know about so far are all very nearby satellites of the Milky Way. They are not discovered in the same way as other galaxies, where one plainly sees a galaxy on survey plates. For example, NGC 7757:
While bright, high surface brightness galaxies like NGC 7757 are easy to see, lower surface brightness galaxies are not. However, they can usually still be seen, if you know where to look:
UGC 1230 as seen on the Palomar Sky Survey. It’s in the middle.
I like to use this pair as an illustration, as they’re about the same distance from us and about the same angular size on the sky – at least, once you crank up the gain for the low surface brightness UGC 1230:
Zoom in on deep CCD images of NGC 7757 (left) and UGC 1230 (right) with the contrast of the latter enhanced. The chief difference between the two is surface brightness – how spread out their stars are. They have a comparable physical diameter, they both have star forming regions that appear as knots in their spiral arms, etc. These galaxies are clearly distinct from the emptiness of the cosmic void around them, being examples of giant stellar systems that gave rise to the term “island universe.”
In contrast to objects that are obvious on the sky as independent island universes, ultrafaint dwarfs are often invisible to the eye. They are recognized as a subset of stars near each other on the sky that also share the same distance and direction of motion in a field that might otherwise be crowded with miscellaneous, unrelated stars. For example, here is Leo IV:
I don’t see an ultrafaint dwarf here. I do see a number of background galaxies, including an edge-on spiral near the center of the square. Those are not the ultrafaint dwarf, which is some subset of the stars in this image. To decide which ones are potentially a part of such a dwarf, one examines the color magnitude diagram of all the stars to identify those that are consistent with being at the same distance, and assigns membership in a probabilistic way. It helps if one can also obtain radial velocities and/or proper motions for the stars to see which hang together – more or less – in phase space.
Part of the trick here is deciding what counts as hanging together. A strong argument in favor of these things residing in dark matter halos is that the velocity differences between the apparently-associated stars are too great for them to remain together for any length of time otherwise. This is essentially the same situation that confronted Zwicky in his observations of galaxies in clusters in the 1930s. Here are these objects that appear together in the sky, but they should fly apart unless bound together by some additional, unseen force. But perhaps some of these ultrafaints are not hanging together; they may be in the process of coming apart. Indeed, they may have so few stars because they are well down the path of dissolution.
Since one cannot see an ultrafaint dwarf in the same way as an island universe, I’ve heard people suggest that being bound by a dark matter halo be included in the definition of a galaxy. I see where they’re coming from, but find it unworkable. I know a galaxy when I see one. As did Hubble, as did thousands of other observers since, as can you when you look at the pictures above. It is absurd to make the definition of an object that is readily identifiable by visual inspection be contingent on the inferred presence of invisible stuff.
So are ultrafaints even galaxies? Yes and no. Some of the probabilistic identifications may be mere coincidences, not real objects. However, they can’t all be fakes, and I think that if you put them in the middle of intergalactic space, we would recognize them as galaxies – provided we could detect them at all. At present we can’t, but hopefully that situation will improve with the Rubin Observatory. In the meantime, what we have to work with are these fragmentary systems deep in the potential well of the seventy billion solar mass cosmic gorilla that is the Milky Way. We have to be cognizant that they might have gotten knocked around, as we can see in more massive systems like the Sagittarius dwarf. Of course, if they’ve gotten knocked around too much, then they shouldn’t be there at all. So how do these systems evolve under the influence of a cosmic gorilla?
Let’s start by looking at the size-mass diagram, as we did before. Ultrafaint dwarfs extend this relation to much lower mass, and also to rather small sizes – some approaching those of star clusters. They approximately follow a line of constant surface density, ~0.1 M☉ pc^-2 (dotted line).
This looks weird to me. All other types of galaxies scatter all over the place in this diagram. The ultrafaints are unique in following a tight line in the size-mass plane, and one that follows a line of constant surface brightness. Every element of my observational experience screams that this is likely to be an artifact. Given how these “galaxies” are identified as the loose association of a handful of stars, it is easy to imagine that this trend might be an artifact of how we define the characteristic size of a system that is essentially invisible. It might also arise for physical reasons to do with the cosmic gorilla; i.e., it is a consequence of dynamical evolution. So maybe this correlation is real, but the warning lights that it is not are flashing red.
The Baryonic Tully-Fisher relation as a baseline
Ideally, we would measure accelerations to test theories, particularly MOND. Here, we would need to use the size to estimate the acceleration, but I straight up don’t believe these sizes are physically meaningful. The stellar mass, dodgy as it is, seems robust by comparison. So we’ll proceed as if we know that much – which we don’t, really – but let’s at least try.
With the stellar mass (there is no gas in these things), we are halfway to constructing the baryonic Tully-Fisher relation (BTFR), which is the simplest test of the dynamics that we can make with the available data. The other quantity we need is the characteristic circular speed of the gravitational potential. For rotating galaxies, that is the flat rotation speed, Vf. For pressure supported dwarfs, what is usually measured is the velocity dispersion σ. We’ve previously established that for brighter dwarfs in the Local Group, a decent approximation is Vf = 2σ, so we’ll start by assuming that this should apply to the ultrafaints as well. This allows us to plot the BTFR:
The baryonic mass and characteristic circular speeds of both rotationally supported galaxies (circles) and pressure supported dwarfs (squares). The colored points follow the same baryonic Tully-Fisher relation (BTFR), but the data for low mass ultrafaint dwarfs (gray squares) flattens out, having nearly the same characteristic speed over several decades in mass.
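In case the bookkeeping helps, here is a minimal sketch of how the pressure-supported points are assembled under the assumptions stated above (M*/LV = 2 and Vf = 2σ). The luminosities and dispersions below are made-up placeholders, not data for any particular dwarf.

```python
# Sketch of the bookkeeping for pressure-supported dwarfs on the BTFR,
# using the assumptions in the text: M*/L_V = 2 and Vf = 2*sigma.
dwarfs = {
    # name: (L_V in solar luminosities, sigma in km/s) -- illustrative values only
    "dwarf A": (1.0e5, 3.0),
    "dwarf B": (2.0e3, 4.0),
}
ML_V = 2.0  # adopted stellar mass-to-light ratio

for name, (L_V, sigma) in dwarfs.items():
    Mb = ML_V * L_V    # baryonic mass ~ stellar mass (no gas in these systems)
    Vf = 2.0 * sigma   # characteristic circular speed from the dispersion
    print(f"{name}: Mb = {Mb:.1e} Msun, Vf = {Vf:.1f} km/s")
```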
The BTFR is an empirical relation of the form Vf ~ Mb^(1/4) over about six decades in mass. Somewhere around the ultrafaint scale, this no longer appears to hold, with the observed velocity flattening out to become approximately constant for these lowest mass galaxies. I’m not sure this is real, as there are many practical caveats to interpreting the observations. Measuring stellar velocities is straightforward but demanding at this level of accuracy. There are many potential systematics, pretty much all of which cause the intrinsic velocity dispersion to be overestimated. For example, observations made with multislit masks tend to return larger dispersions than observations of the same object with fibers. That’s likely because it is hard to build a mask so well that all of the stars perfectly hit the centers of the slitlets assigned to them; offsets within the slit shift the spectrum in a way that artificially adds to the apparent velocity dispersion. Fibers are less efficient in their throughput, but have the virtue of blending the input light in a way that precludes this particular systematic. Another concern is physical – some of the stars that are observed are presumably binaries, and some of the velocity will be due to motion within the binary pair and nothing to do with the gravitational potential of the larger system. This can be addressed with repeated observations to see if some velocities change, but it is hard to do that for each and every system, especially when it is way more fun to discover and explore new systems than to follow up on the same one over and over and over again.
There are lots of other things that can go wrong. At some level, some of them probably do – that’s the nature of observational astronomy&. While it seems likely that some of the velocity dispersions are systematically overestimated, it seems unlikely that all of them are. Let’s proceed as if the bulk of the data is telling us something, even if we treat individual objects with suspicion.
MOND
MOND makes a clear prediction for the BTFR of isolated galaxies: the baryonic mass goes as the fourth power of the flat rotation speed. Contrary to Newtonian expectation, this holds irrespective of surface brightness, which is what attracted my attention to the theory in the first place. So how does it do here?
The same data as above with the addition of the line predicted by MOND (Milgrom 1983).
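For reference, the plotted line follows from the asymptotic MOND relation for isolated systems, Mb = Vf^4/(G a0). A minimal sketch, assuming the commonly quoted a0 ≈ 1.2 × 10^-10 m/s² (the exact normalization of the published line may differ slightly):

```python
import numpy as np

# Asymptotic MOND prediction for isolated galaxies: Mb = Vf^4 / (G * a0).
G = 6.674e-11        # m^3 kg^-1 s^-2
a0 = 1.2e-10         # m/s^2, assumed value of Milgrom's constant
M_sun = 1.989e30     # kg

Vf = np.logspace(0.5, 2.5, 50)            # characteristic speeds in km/s
Mb = (Vf * 1e3)**4 / (G * a0) / M_sun     # predicted baryonic mass in M_sun

print(f"Vf = 100 km/s -> Mb ~ {np.interp(100.0, Vf, Mb):.1e} Msun")
```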
Low surface density means low acceleration, so low surface brightness galaxies would make great tests of MOND if they were isolated. Oh, right – they already did. Repeatedly. MOND also correctly predicted the velocities of low mass, gas-rich dwarfs that were unknown when the prediction was made. These are highly nontrivial successes of the theory.
The ultrafaints we’re discussing here are not isolated, so they do not provide the clean tests that isolated galaxies provide. However, galaxies subject to external fields should have low velocities relative to the BTFR, while the ultrafaints have higher velocities. They’re on the wrong side of the relation! Taking this at face value (i.e., assuming equilibrium), MOND fails here.
Whenever MOND has a problem, it is widely seen as a success of dark matter. In my experience, this is rarely true: observations that are problematic for MOND usually don’t make sense in terms of dark matter either. For each observational test we also have to check how LCDM fares.
LCDM
How LCDM fares is often hard to judge because its predictions for the same phenomena are not always clear. Different people predict different things for the same theory. There have been lots of LCDM-based predictions made for both dwarf satellite galaxies and the Tully-Fisher relation. Too many, in fact – it is a practical impossibility to examine them all. Nevertheless, some common themes emerge if we look at enough examples.
The halo mass-velocity relation
The most basic prediction of LCDM is that the mass of a dark matter halo scales with the cube of the circular velocity of a test particle at the virial radius (conventionally taken to be the radius R200 that encompasses an average density 200 times the critical density of the universe. If that sounds like gobbledygook to you, just read “halo” for “200”): M200 ~ V200^3. This is a very basic prediction that everyone seems to agree on.
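For anyone who wants to see where the cube comes from, it follows directly from the definitions just given; nothing here goes beyond the text.

```latex
M_{200} = \frac{4}{3}\pi R_{200}^{3}\,(200\,\rho_{\rm crit})
        = \frac{100\,H_0^{2} R_{200}^{3}}{G},
\qquad \rho_{\rm crit} = \frac{3H_0^{2}}{8\pi G},
\qquad V_{200}^{2} = \frac{G M_{200}}{R_{200}} = 100\,H_0^{2} R_{200}^{2}
\;\Rightarrow\; M_{200} = \frac{V_{200}^{3}}{10\,G H_0} \propto V_{200}^{3}.
```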
There is a tiny problem with testing this prediction: it refers to the dark matter halo that we cannot see. In order to test it, we have to introduce some scaling factors to relate the dark to the light. Specifically, Mb = fd M200 and Vf = fv V200, where fd is the observed fraction of mass in baryons and fv relates the observed flat velocity to the circular speed of our notional test particle at the virial radius. The obvious assumptions to make are that fd is a constant (perhaps as much as but not more than the cosmic baryon fraction of 16%) and that fv is close to unity. The latter requirement stems from the need for dark matter to explain the amplitude of the flat rotation speed, but fv could be slightly different; plausible values span 0.9 < fv < 1.4. Values larger than one indicate a rotation curve that declines before the virial radius is reached, which is the natural expectation for NFW halos.
Here is a worked example with fd = 0.025 and fv = 1:
The same data as above with the addition of the nominal prediction of LCDM. The dotted line is the halo mass-circular velocity relation; the gray band is a simple model with fd = 0.025 and fv = 1 (e.g., Mo, Mao, & White 1998).
I have illustrated the model with a fat grey line because fd = 0.025 is an arbitrary choice* I made to match the data. It could be more, it could be less. The detected baryon fraction can be anything up to the cosmic value, fd < fb = 0.16, as not all of the baryons available in a halo cool and condense into cold gas that forms visible stars. That’s fine; there’s no requirement that all of the baryons have to become readily observable, but there is also no reason to expect all halos to cool exactly the same fraction of baryons. Naively one would expect at least some variation in fd from halo to halo, so there could and probably should be a lot of scatter: the gray line could easily be a much wider band than depicted.
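To make the worked example concrete, here is a minimal sketch of how such a line is generated from the scalings above, using M200 = V200^3/(10 G H0) from the earlier derivation; H0 = 70 km/s/Mpc is an assumed value, not something specified in the post.

```python
import numpy as np

# "Constant fd, constant fv" LCDM line: Mb = fd*M200, Vf = fv*V200,
# with M200 = V200^3 / (10 G H0). fd = 0.025 and fv = 1 as in the worked example.
G = 6.674e-11                      # m^3 kg^-1 s^-2
H0 = 70.0 * 1e3 / 3.086e22         # assumed 70 km/s/Mpc, converted to 1/s
M_sun = 1.989e30

fd, fv = 0.025, 1.0
V200 = np.logspace(0.5, 2.5, 50)                   # km/s
M200 = (V200 * 1e3)**3 / (10.0 * G * H0) / M_sun   # halo mass in M_sun
Mb, Vf = fd * M200, fv * V200                      # the observable quantities

print(f"Vf = 100 km/s -> Mb ~ {np.interp(100.0, Vf, Mb):.1e} Msun")
```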
In addition to the rather arbitrary value of fd, this reasoning also predicts a Tully-Fisher relation with the wrong slope. Picking a favorable value of fd only matches the data over a narrow range of mass. It was nevertheless embraced for many years by many people. Selection effects bias samples to bright galaxies. Consequently, the literature is rife with TF samples dominated by galaxies with Mb > 10^10 M☉ (the top right corner of the plot above); with so little dynamic range, a slope of 3 looks fine. Once you look outside that tiny box, it does not look fine.
Personally, I think a slope of 3 is an oversimplification. That is the prediction for dark matter halos; there can be effects that vary systematically with mass. An obvious one is adiabatic compression, the effect by which baryons drag some dark matter along with them as they settle to the center of their halos. This increases fv by an amount that depends on the baryonic surface density. Surface density correlates with mass, so I would nominally expect higher velocities in brighter galaxies; this drives up the slope. There are various estimates of this effect; typically one gets a slope like 3.3, not the observed 4. Worse, it predicts an additional effect: at a given mass, galaxies of higher surface brightness should also have higher velocity. Surface brightness should be a second parameter in the Tully-Fisher relation, but this is not observed.
The easiest way to reconcile the predicted and observed slopes is to make fd a function of mass. Since Mb = fd M200 and M200 ~ V200^3, Mb ~ fd V200^3. Adopting fv = 1 for simplicity, Mb ~ Vf^4 follows if fd ~ Vf. Problem solved, QED.
There are [at least] two problems with this argument. One is that the scaling fd ~ Vf must hold perfectly without introducing any scatter. This is a fine-tuning problem: we need one parameter to vary precisely with another, unrelated parameter. There is no good reason to expect this; we just have to insert the required dependence by hand. This is much worse than choosing an arbitrary value for fd: now we’re making it a rolling fudge factor that matches whatever we need it to match. We can make it even more complicated by invoking some additional variation in fv, but this just makes the fine-tuning worse as the product fd fv^-3 has to vary just so. The other problem is that we’re doing all this to adjust the prediction of one theory (LCDM) to match that of a different theory (MOND). It is never a good sign when we have to do that, whether we admit it or not.
Abundance matching
The reasoning leading to a slope 3 Tully-Fisher relation assumes a one-to-one relation between baryonic and halo mass (fd = constant). This is an eminently reasonable assumption. We spent a couple of decades trying to avoid having to break this assumption. Once we do so and make fd a freely variable parameter, then it can become a rolling fudge factor that can be adjusted to fit anything. Everyone agrees that is Bad. However, it might be tolerable if there is an independent way of estimating this variation. Rather than make fd just be what we need it to be as described above, we can instead estimate it with abundance matching.
Abundance matching comes from equating the observed number density of galaxies as a function of mass with the number density of dark matter halos. This process gives fd, or at least the stellar fraction, f*, which is close to fd for bright galaxies. Critically, it provides a way to assign dark matter halo masses to galaxies independently of their kinematics. This replaces an arbitrary, rolling fudge factor with a predictive theory.
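A toy version of the procedure may help fix ideas: rank both populations and equate cumulative number densities, n_gal(>M*) = n_halo(>Mh). The mass functions below are illustrative stand-ins (a Schechter-like form for galaxies, a power law for halos), not fits to any real data.

```python
import numpy as np

# Toy abundance matching: assign halo masses by equating cumulative number densities.
Mstar = np.logspace(7, 11.5, 300)   # stellar masses, Msun
Mh    = np.logspace(10, 14.5, 300)  # halo masses, Msun

# stand-in differential mass functions, dn/dlogM (shapes only, made-up normalizations)
phi_gal  = 5e-3 * (Mstar / 5e10)**(-0.3) * np.exp(-Mstar / 5e10)  # Schechter-like
phi_halo = 1e-2 * (Mh / 1e12)**(-0.9)                             # power law

# cumulative number densities n(>M), summing from the high-mass end downward
dlog_g = np.diff(np.log10(Mstar))[0]
dlog_h = np.diff(np.log10(Mh))[0]
n_gal  = np.cumsum((phi_gal  * dlog_g)[::-1])[::-1]
n_halo = np.cumsum((phi_halo * dlog_h)[::-1])[::-1]

# match: for each M*, find the halo mass with the same cumulative density
Mh_of_Mstar = np.interp(n_gal, n_halo[::-1], Mh[::-1])
for i in (0, 75, 150, 225):
    print(f"M* = {Mstar[i]:.1e} -> Mh ~ {Mh_of_Mstar[i]:.1e}  (f* = {Mstar[i]/Mh_of_Mstar[i]:.3f})")
```

The point of the toy is only that the mismatch in the shapes of the two mass functions forces f* to vary with mass, which is what bends the predicted relation discussed next.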
Abundance matching models generically introduce curvature into the prediction for the BTFR. This stems from the mismatch in the shape of the galaxy stellar mass function (a Schechter function) and the dark halo mass function (a power law on galaxy scales). This leads to a bend in relations that map between visible and dark mass.
The transition from the M ~ V^3 reasoning to abundance matching occurred gradually, but became pronounced circa 2010. There are many abundance matching models; I already faced the problem of the multiplicity of LCDM predictions when I wrote a lengthy article on the BTFR in 2012. To get specific, let’s start with an example from then, the model of Trujillo-Gomez et al. (2011):
The same data as above with the addition of the line predicted by LCDM in the model of Trujillo-Gomez et al. (2011).
One thing Trujillo-Gomez et al. (2011) say in their abstract is “The data present a clear monotonic LV relation from ∼50 km s^−1 to ∼500 km s^−1, with a bend below ∼80 km s^−1”. By LV they mean luminosity-velocity, i.e., the regular Tully-Fisher relation. The bend they note is real; that’s what happens when you consider only the starlight and ignore the gas. The bend goes away if you include that gas. This was already known at the time – our original BTFR paper from 2000 has nearly a thousand citations, so it isn’t exactly obscure. Ignoring the gas is a choice that makes no sense empirically but makes a lot of sense from the perspective of LCDM simulations. By 2010, these had become reasonably good at matching the numbers of stars observed in galaxies, but the gas properties of simulated galaxies remained, hmmmmmmm, wanting. It makes sense to utilize the part that works. It makes less sense to pretend that this bend is something physically meaningful rather than an artifact of ignoring the gas. The pressure-supported dwarfs are all star dominated, so this distinction doesn’t matter here, and they follow the BTFR, not the stars-only version.
An old problem in galaxy formation theory is how to calibrate the number density of dark matter halos to that of observed galaxies. For a long time, a choice that people made was to match either the luminosity function or the kinematics. These didn’t really match up, so there was occasional discussion of the virtues and vices of the “luminosity function calibration” vs. the “Tully-Fisher calibration.” These differed by a factor of ~2. This tension remains with us. Mostly, simulations have opted to adopt the luminosity function calibration, updated and rebranded as abundance matching. Again, this makes sense from the perspective of LCDM simulations, because the number density of dark matter halos is something that simulations can readily quantify while the kinematics of individual galaxies are much harder to resolve**.
The nonlinear relation between stellar mass and halo mass obtained from abundance matching inevitably introduces curvature into the corresponding Tully-Fisher relation predicted by such models. That’s what you see in the curved line of Trujillo-Gomez et al. (2011) above. They weren’t the first to obtain such a result, and they certainly weren’t the last: this is a feature of LCDM with abundance matching, not a bug.
The line of Trujillo-Gomez et al. (2011) matches the data pretty well at intermediate masses. It diverges to higher velocities at both small and large galaxy masses. I’ve written about this tension at high masses before; it appears to be real, but let’s concentrate on low masses here. At low masses, the predicted velocity of galaxies with Mb < 10^8 M☉ appears to be too high. But the divergence between model and reality has just begun, and it is hard to resolve small things in simulations, so this doesn’t seem too bad. Yet.
Moving ahead, there are the “Latte” simulations of Wetzel et al. (2016) that use the well-regarded FIRE code to look specifically at simulated dwarfs, both isolated and satellites – specifically satellites of Milky Way-like systems. (Milky Way. Latte. Get it? Nerd humor.) So what does that find?
The same data as above with the addition of simulated dwarfs (orange triangles) from the Latte LCDM simulation of Wetzel et al. (2016), specifically the simulated satellites in the top panel of their Fig. 3. Note that we plot Vf = 2σ for pressure supported systems, both real and simulated.
The individual simulated dwarf satellites of Wetzel et al. (2016) follow the extrapolation of the line predicted by Trujillo-Gomez et al. (2011). To first order, it is the same result at higher resolution (i.e., smaller galaxy mass). Most of the simulated objects have velocity dispersions that are higher than observed in real galaxies. Intriguingly, there are a couple of simulated objects with M* ~ 5 × 10^6 M☉ that fall nicely among the data where there are both star-dominated and gas-rich galaxies. However, these two are exceptions; the rule appears to be characteristic speeds that are higher than observed.
The lowest mass simulated satellite objects begin to approach the ultrafaint regime, but resolution continues to be an issue: they’re not really there yet. This hasn’t precluded many people from assuming that dark matter will work where MOND fails, which seems like a heck of a presumption given that MOND has been consistently more successful up until that point. Where MOND underpredicts the characteristic velocity of ultrafaints, LCDM hasn’t yet made a clear prediction, and it overpredicts velocities for objects of slightly larger mass. Ain’t no theory covering itself in glory here, but this is a good example where objects that are a problem for MOND are also a problem for dark matter, and it seems likely that non-equilibrium dynamics play a role in either case.
Comparing apples with apples
A persistent issue with comparing simulations to reality is extracting comparable measures. Where circular velocities are measured from velocity fields in rotating galaxies and estimated from measured velocity dispersions in pressure supported galaxies, the most common approach to deriving rotation curves from simulated objects is to sum up particles in spherical shells and assume V^2 = GM/R. These are not the same quantities. They should be proxies for one another, but equality holds only in the limit of isotropic orbits in spherical symmetry. Reality is messier than that, and simulations aren’t that simple either%.
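To be explicit about the two quantities being conflated, here is a minimal sketch with fake particle data: the spherical-shell estimate used for simulations versus the dispersion-based proxy (Vf = 2σ) used for the observed dwarfs. Nothing here is a real simulation; it just shows that the two estimators are different operations.

```python
import numpy as np

G = 4.301e-6  # gravitational constant in kpc (km/s)^2 / Msun

rng = np.random.default_rng(0)
N = 10000
r = rng.exponential(scale=2.0, size=N)     # fake particle radii, kpc
m = np.full(N, 1.0e4)                      # fake particle masses, Msun
v_los = rng.normal(0.0, 8.0, size=N)       # fake line-of-sight velocities, km/s

# "simulation-style" circular velocity from the enclosed mass at radius R
R = 3.0
M_enc = m[r < R].sum()
V_circ = np.sqrt(G * M_enc / R)

# "observation-style" characteristic speed from the velocity dispersion
V_obs = 2.0 * v_los.std()

print(f"Vcirc(<{R} kpc) = {V_circ:.1f} km/s vs. 2*sigma = {V_obs:.1f} km/s")
```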
Sales et al. (2017) make the effort to make a better comparison between what is observed given how it is observed, and what the simulations would show for that quantity. Others have made a similar effort; a common finding is that the apparent rotation speeds of simulated gas disks do not trace the gravitational potential as simply as GM/R. That’s no surprise, but most simulated rotation curves do not look like those of real galaxies^, so the comparison is not straightforward. Those caveats aside, Sales et al. (2017) are doing the right thing in trying to make an apples-to-apples comparison between simulated and observed quantities. They extract from simulations a quantity Vout that is appropriate for comparison with what we observe in the outer parts of rotation curves. So here is the resulting prediction for the BTFR:
The same data as above with the addition of the line predicted by LCDM in the model of Sales et al. (2017), specifically the formula for Vout in their Table 2, which is their proxy for the observable rotation speed.
That’s pretty good. It still misses at high masses (those two big blue points at the top are Andromeda and the Milky Way) and it still bends away from the data at low masses where there are both star-dominated and gas-rich galaxies. (There are a lot more examples of the latter that I haven’t used here because the plot gets overcrowded.) Despite the overshoot, the use of an observable aspect of the simulations gets closer to the data, and the prediction flattens out in the same qualitative sense. That’s good, so one might see cause for hope that this problem is simply a matter of making a fair comparison between simulations and data. We should also be careful not to over-interpret it: I’ve simply plotted the formula they give; the simulations to which they fit it surely do not resolve ultrafaint dwarfs, so really the line should stop at some appropriate mass scale.
Nevertheless, it makes sense to look more closely at what is observed vs. what is simulated. This has recently been done in greater detail by Ruan et al. (2025). They consider two simulations that implement rather different feedback; both wind up producing rotating, gas rich dwarfs that actually fall on the BTFR.
The same data as above with the addition of simulated dwarfs of Ruan et al. (2025), specifically from the top right panel of their Fig. 6. The orange circles are their “massives” and the red triangles the “marvels” (the distinction refers to different feedback models).
Finally some success after all these years! Looking at this, it is tempting to declare victory: problem solved. It was just a matter of doing the right simulation all along, and making an apples-to-apples comparison with the data.
That sounds too good to be true. Is it repeatable in other simulations? What works now that didn’t before?
These are high resolution simulations, but they still don’t resolve ultrafaints. We’re talking here about gas-rich dwarfs. That’s also an important topic, so let’s look more closely. What works now is the apples-to-apples assessment: what we would measure for Vout is less than the Vmax (related to V200) of the halo:
Two panels from Fig. 7 of Ruan et al. (2025) showing the ratio of the velocity we might observe relative to the characteristic circular velocity of the halo (top) and the ratio of the radii where these occur (bottom).
The treatment of cold gas in simulations has improved. In these simulations, Vout(Rout) is measured where the gas surface density falls to 1 M☉ pc^-2, which is typical of many observations. But the true rotation curve is still rising for objects with Mb < a few × 10^8 M☉; it has not yet reached a value that is characteristic of the halo. So the apparent velocity is low, even if the dark matter halos are doing basically the same thing as before:
As above, but with the addition of the true Vmax (small black dots) of the simulated halos discussed by Ruan et al. (2025), which follow the relation of Sales et al. (2017) (line for Vmax in their Table 2).
I have mixed feelings about this. On the one hand, there are many dwarf galaxies with rising rotation curves that we don’t see flatten out, so it is easy to imagine they might keep going up, and I find it plausible that this is what we would find if we looked harder. So plausible that I’ve spent a fair amount of time doing exactly this. Not all observations terminate at 1 M☉ pc^-2, and whenever we push further out, we see the same damn thing over and over: the rotation curve flattens out and stays flat!!. That’s been my anecdotal experience; getting beyond that systematically is the point of the MHONGOOSE survey. This was constructed to detect much lower atomic gas surface densities, and routinely detects gas at the 0.1 M☉ pc^-2 level where Ruan et al. suggest we should see something closer to Vmax. So far, we don’t.
I don’t want to sound too negative, because how we map what we predict in simulations to what we measure in observations is a serious issue. But it seems a bit of a stretch for a low-scatter power law BTFR to be the happenstance of observational sensitivity that cuts in at a convenient mass scale. So far, we see no indication of that in more sensitive observations. I’ll certainly let you know if that changes.
Survey says…
At this juncture, we’ve examined enough examples that the reader can appreciate my concern that LCDM models can predict rather different things. What does the theory really predict? We can’t really test it until we agree on what it should do!!!.
I thought it might be instructive to combine some of the models discussed above. It is.
Some of the LCDM predictions discussed above, shown together. The dotted line to the right of the data is the halo mass-velocity relation, which is the one thing we all agree LCDM predicts but which is observationally inaccessible. The grey band is a Mo, Mao, & White-type model with fd = 0.025. The red dotted line is the model of Trujillo-Gomez et al. (2011); the solid red line that of Sales et al. (2017) for Vmax.
The models run together, more or less, for high mass galaxies. Thanks to observational selection effects, these are the objects we’ve always known about and matched our theories to. In order to test a theory, one wants to force it to make predictions in new regimes it wasn’t built for. Low mass galaxies do that, as do low surface brightness galaxies, which are often but not always low mass. MOND has done well for both, down to the ultrafaints we’re discussing here. LCDM does not yet explain those, or really any of the intermediate mass dwarfs.
What really disturbs me about LCDM models is their flexibility. It’s not just that they miss, it’s that it is possible to miss the data on either side of the BTFR. The older fd = constant models predict velocities that are too low for low mass galaxies. The more recent abundance matching models predict velocities that are too high for low mass galaxies. I have no doubt that a model can be constructed that gets it right, because there is obviously enough flexibility to do pretty much anything. Adding new parameters until we get it right is an example of epicyclic thinking, as I’ve been pointing out for thirty years. I don’t know what could be worse for an idea like dark matter that is not falsifiable.
We still haven’t come anywhere close to explaining the ultrafaints in either theory. In LCDM, we don’t even know if we should draw a curved line that catches them as if they’re in equilibrium, or start from a power-law BTFR and look for departures from that due to tidal effects. Both are possible in LCDM, both are plausible, as is some combination of both. I expect theorists will pick an option and argue about it indefinitely.
Tidal effects
The typical velocity dispersion of the ultrafaint dwarfs is too high for them to be in equilibrium in MOND. But there’s also pretty much no way these tiny things could be in equilibrium, being in the rough neighborhood dominated by our home, the cosmic gorilla. That by itself doesn’t make an explanation; we need to work out what happens to such things as they evolve dynamically under the influence of a pronounced external field. To my knowledge, this hasn’t been addressed in detail in MOND any more than in LCDM, though Brada & Milgrom addressed some of the relevant issues.
There is a difference in approach required for the two theories. In LCDM, we need to increase the resolution of simulations to see what happens to the tiniest of dark matter halos and their resident galaxies within the larger dark matter halos of giant galaxies. In MOND we have to simulate the evolution along the orbit of each unique individual. This is challenging on multiple levels, as each possible realization of a MOND theory requires its own code. Writing a simulation code for AQUAL requires a different numerical approach than QUMOND, and those are both modifications of gravity via the Poisson equation. We don’t know which might be closer to reality; heck, we don’t even know [yet] if MOND is a modification of gravity or inertia, the latter being even harder to code.
Cold dark matter is scale-free, so crudely I expect ultrafaint dwarfs in LCDM to do the same as larger dwarf satellites that have been simulated: their outer dark matter halos are gradually whittled away by tidal stripping for many Gyr. At first the stars are unaffected, but eventually so little dark matter is left that the stars start to be lost impulsively during pericenter passages. Though the dark matter is scale free, the stars and the baryonic physics that made them are not, so that’s where it gets tricky. The apparent dark-to-luminous mass ratio is huge, so one possibility is that the ultrafaints are in equilibrium despite their environment; they just made ridiculously few stars from the amount of mass available. That’s consistent with a wild extrapolation of abundance matching models, but how it comes about physically is less clear. For example, at some low mass, a galaxy would make so few stars that none are massive enough to result in a supernova, so there is no feedback, which is what is preventing too many stars from forming. Awkward. Alternately, the constant exposure to tidal perturbation might stir things up, with the velocity dispersion growing and stars getting stripped to form tidal streams, so they may have started as more massive objects. Or some combination of both, plus the evergreen possibility of things that don’t occur to me offhand.
Equilibrium for ultrafaint satellites is not an option in MOND, but tidal stirring and stripping is. As a thought experiment, let’s imagine what happens to a low mass dwarf typical of the field that falls towards the Milky Way from some large distance. Initially gas-rich, the first environmental effect that it is likely to experience is ram pressure stripping by the hot coronal gas around the Milky Way. That’s a baryonic effect that happens in either theory; it’s nothing to do with the effective law of gravity. A galaxy thus deprived of much of its mass will be out of equilibrium; its internal velocities will be typical of the original mass but the stripped mass is less. Consequently, its structure must adjust to compensate; perhaps dwarf Irregulars puff up and are transformed into dwarf Spheroidals in this way. Our notional infalling dwarf may have time to equilibrate to its new mass before being subject to strong tidal perturbation by the Milky Way, or it may not. If not, it will have characteristic internal velocities that are too high for its new mass, and reside above the BTFR. I doubt this suffices to explain [m]any of the ultrafaints, as their masses are so tiny that some stellar mass loss is also likely to have occurred.
Let’s suppose that our infalling dwarf has time to [approximately] equilibrate, or it simply formed nearby to begin with. Now it is a pressure supported system [more or less] on the BTFR. As it orbits the Milky Way, it feels an extra force from the external field. If it stays far enough out to remain in quasi-equilibrium in the EFE regime, then it will oscillate in size and velocity dispersion in phase with the strength of the external field it feels along its orbit.
If instead a satellite dips too close, it will be tidally disturbed and depart from equilibrium. The extra energy may stir it up, increasing its velocity dispersion. It doesn’t have the mass to sustain that, so stars will start to leak out. Tidal disruption will eventually happen, with the details depending on the initial mass and structure of the dwarf and on the eccentricity of its orbit, the distance of closest approach (pericenter), whether the orbit is prograde or retrograde relative to any angular momentum the dwarf may have… it’s complicated, so it is hard to generalize##. Nevertheless, we (McGaugh & Wolf 2010) anticipated that “the deviant dwarfs [ultrafaints] should show evidence of tidal disruption while the dwarfs that adhere to the BTFR should not.” Unlike LCDM where most of the damage is done at closest approach, we anticipate for MOND that “stripping of the deviant dwarfs should be ongoing and not restricted to pericenter passage” because tides are stronger and there is no cocoon of dark matter to shelter the stars. The effect is still maximized at pericenter, it’s just not as impulsive as in some of the dark matter simulations I’ve seen.
Stellar streams in the Milky Way identified using Gaia (Malhan et al. 2018).
As a tidally influenced dwarf dissolves, the stars will leak out and form a trail. This happens in LCDM too, but there are differences in the rate, coherence, and symmetry of the resulting streams. Perhaps ultrafaint dwarfs are just the last dregs of the tidal disruption process. From this perspective, it hardly matters if they originated as external satellites or are internal star clusters: globular clusters native to the Milky Way should undergo a similar evolution.
Evolutionary tracks
Perhaps some of the ultrafaint dwarfs are the nuggets of disturbed systems that have suffered mass loss through tidal stripping. That may be the case in either LCDM or MOND, and has appealing aspects in either case – we went through all the possibilities in McGaugh & Wolf (2010). In MOND, the BTFR provides a reference point for what a stable system in equilibrium should do. That’s the starting point for the evolutionary tracks suggested here:
BTFR with conceptual evolutionary tracks (red lines) for tidally-stirred ultrafaint dwarfs.
Objects start in equilibrium on the BTFR. As they become subject to the external field, their velocity dispersions first decrease as they transition through the quasi-Newtonian regime. As tides kick in, stars are lost and stretched along the satellite’s orbit, so the mass drops while the apparent velocity dispersion grows as the stars gradually separate along a stream. Their relative velocities no longer represent a measure of the internal gravitational potential; rather than a cohesive dwarf satellite, they’re more an association of stars on similar orbits around the Milky Way.
This is crudely what I imagine might be happening in some of the ultrafaint dwarfs that reside above the BTFR. Reality can be more complicated, and probably is. For example, objects that are not yet disrupted may oscillate around and below the BTFR before becoming completely unglued. Moreover, some individual ultrafaints probably are not real, while the data for others may suffer from systematic uncertainties. There’s a lot to sort out, and we’ve reached the point where the possibility of non-equilibrium effects cannot be ignored.
As a test of theories, the better course remains to look for new galaxies free from environmental perturbation. Ultrafaint dwarfs in the field, far from cosmic gorillas like the Milky Way, would be ideal. Hopefully many will be discovered in current and future surveys.
!Other examples exist and continue to be discovered. More pertinent to my thinking is that the mass threshold at which reionization is supposed to suppress star formation has been a constantly moving goal post. To give an amusing anecdote, while I was junior faculty at the University of Maryland (so at least twenty years ago), Colin Norman called me up out of the blue. Colin is an expert on star formation, and had a burning question he thought I could answer. “Stacy,” he says as soon as I pick up, “what is the lowest mass star forming galaxy?” Uh, Hi, Colin. Off the cuff and totally unprepared for this inquiry, I said “um, a stellar mass of a few times 10^7 solar masses.” Colin’s immediate response was to laugh long and loud, as if I had made the best nerd joke ever. When he regained his composure, he said “We know that can’t be true as reionization will prevent star formation in potential wells that small.” So, after this abrupt conversation, I did some fact-checking, and indeed, the number I had pulled out of my arse on the spot was basically correct, at that time. I also looked up the predictions, and of course Colin knew his business too; galaxies that small shouldn’t exist. Yet they do, and now the minimum known is two orders of magnitude lower in mass, with still no indication that a lower limit has been reached. So far, the threshold of our knowledge has been imposed by observational selection effects (low luminosity galaxies are hard to see), not by any discernible physics.
More recently, McQuinn et al. (2024) have made a study of the star formation histories of Leo P and a few similar galaxies that are near enough to see individual stars so as to work out the star formation rate over the course of cosmic history. They argue that there seems to be a pause in star formation after reionization, so a more nuanced version of the hypothesis may be that reionization did suppress star forming activity for a while, but these tiny objects were subsequently able to re-accrete cold gas and get started again. I find that appealing as a less simplistic thing that might have happened in the real universe, and not just a simple on/off switch that leaves only a fossil. However, it isn’t immediately clear to me that this more nuanced hypothesis should happen in LCDM. Once those baryons have evaporated, they’re gone, and it is far from obvious that they’ll ever come back to the weak gravity of such a small dark matter halo. It is also not clear to me that this interpretation, appealing as it is, is unique: the reconstructed star formation histories also look consistent with stochastic star formation, with fluctuations in the star formation rate being a matter of happenstance that have nothing to do with the epoch of reionization.
#So how are ultrafaint dwarfs different from star clusters? Great question! Wish we had a great answer.
Some ultrafaints probably are star clusters rather than independent satellite galaxies. How do we tell the difference? Chiefly, the velocity dispersion: star clusters show no need for dark matter, while ultrafaint dwarfs generally appear to need a lot. This of course assumes that their measured velocity dispersions represent an equilibrium measure of their gravitational potential, which is what we’re questioning here, so the opportunity for circular reasoning is rife.
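To make “appear to need a lot” quantitative, one common back-of-the-envelope comparison uses a virial-type estimate of the dynamical mass within the half-light radius, roughly M_1/2 ≈ 4σ²r_1/2/G ≈ 930 σ² r_1/2 in solar masses (σ in km/s, r_1/2 in pc), compared to the stellar mass there. A sketch with made-up inputs:

```python
# Dynamical-to-stellar mass ratio within the half-light radius.
# M_1/2 ~ 930 * sigma^2 * r_half (Msun, for sigma in km/s and r_half in pc)
# is the usual virial-type estimator; inputs below are illustrative, not real objects.
def mass_ratio(sigma_kms, r_half_pc, L_V, ML_star=2.0):
    M_dyn_half  = 930.0 * sigma_kms**2 * r_half_pc  # dynamical mass within r_half
    M_star_half = 0.5 * ML_star * L_V               # half the stellar mass
    return M_dyn_half / M_star_half

print(mass_ratio(sigma_kms=5.0, r_half_pc=3.0,  L_V=1.0e5))  # cluster-like: ~ order unity
print(mass_ratio(sigma_kms=5.0, r_half_pc=40.0, L_V=1.0e3))  # ultrafaint-like: ~ 10^3
```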
$Rather than apply a strict luminosity cut, for convenience I’ve kept the same “not safe from tidal disruption” distinction that we’ve used before. Some of the objects in the 10^5 – 10^6 M☉ range might belong more with the classical dwarfs than with the ultrafaints. This is a reminder that our nomenclature is terrible more than anything physically meaningful.
&Astronomy is an observational science, not a laboratory science. We can only detect the photons nature sends our way. We cannot control all the potential systematics as can be done in an enclosed, finite, carefully controlled laboratory. That means there is always the potential for systematic uncertainties whose magnitude can be difficult to estimate, or sometimes to even be aware of, like how local variations impact Jeans analyses. This means we have to take our error bars with a grain of salt, often such a big grain as to make statistical tests unreliable: goodness of fit is only as meaningful as the error bars.
I say this because it seems to be the hardest thing for physicists to understand. I also see many younger astronomers turning the crank on fancy statistical machinery as if astronomical error bars can be trusted. Garbage in, garbage out.
*This is an example of setting a parameter in a model “by hand.”
**The transition to thinking in terms of the luminosity function rather than Tully-Fisher is so complete that the most recent, super-large, Euclid flagship simulation doesn’t even attempt to address the kinematics of individual galaxies while giving extraordinarily detailed and extensive information about their luminosity distributions. I can see why they’d do that – they want to focus on what the Euclid mission might observe – but it is also symptomatic of the growing tendency I’ve witnessed to just not talk about those pesky kinematics.
%Halos in dark matter simulations tend to be rather triaxial, i.e., a 3D bloboid that is neither spherical like a soccer ball nor oblate like a frisbee nor prolate like an American football: each principal axis has a different length. If real halos were triaxial, it would lead to non-circular orbits in dark matter-dominated galaxies that are not observed.
The triaxiality of halos is a result from dark matter-only simulations. Personally, I suspect that the condensation of gas within a dark matter halo (presuming such things exist) during the process of galaxy formation rounds-out the inner halo, making it nearly spherical where we are able to make measurements. So I don’t see this as necessarily a failure of LCDM, but rather an example of how more elaborate simulations that include baryonic physics are sometimes warranted. Sometimes. There’s a big difference between this process, which also compresses the halo (making it more dense when it already starts out too dense), and the various forms of feedback, which may or may not further alter the structure of the halo.
^There are many failure modes in simulated rotation curves, the two most common being the cusp-core problem in dwarfs and sub-maximal disks in giants. It is common for the disks of bright spiral galaxies to be nearly maximal in the sense that the observed stars suffice to explain the inner rotation curve. They may not be completely maximal in this sense, but they come close for normal stellar populations. (Our own Milky Way is a good example.) In contrast, many simulations produce bright galaxies that are absurdly sub-maximal; EAGLE and SIMBA being two examples I remember offhand.
Another common problem is that LCDM simulations often don’t produce rotation curves that are as flat as observed. This was something I also found in my early attempts at model-building with dark matter halos. It is easy to fit a flat rotation curve given the data, but it is hard to predict a priori that rotation curves should be flat.
!!Gravitational lensing indicates that rotation curves remain flat to even larger radii. However, these observations are only sensitive to galaxies more massive than those under discussion here. So conceivably there could be another coincidence wherein flatness persists for galaxies with Mb > 10^10 M☉, but not those with Mb < 10^9 M☉.
!!!Many in the community seem to agree that it will surely work out.
##I’ve tried to estimate dissolution timescales, but find the results wanting. For plausible assumptions, one finds timescales that seem plausible (a few Gyr), but with some minor fiddling one can also find results that are no-way-that’s-too-short (a few tens of millions of years), depending on the dwarf and its orbit. These are crude analytic estimates; I’m not satisfied that the numbers are particularly meaningful. Still, this is a worry with the tidal-stirring hypothesis: will perturbed objects persist long enough to be observed as they are? This is another reason we need detailed simulations tailored to each object.
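For scale, the simplest reference numbers of this sort are a Newtonian point-mass tidal (Jacobi) radius and an internal crossing time. These are not the MOND estimates alluded to above, just zeroth-order arithmetic with illustrative inputs.

```python
# Crude scale-setting: Newtonian point-mass Jacobi radius and crossing time.
# The 7e10 Msun host mass echoes the "seventy billion solar mass cosmic gorilla"
# quoted in the post; a full Newtonian estimate would want the total mass enclosed.
def jacobi_radius_pc(d_kpc, m_dwarf_msun, M_host_msun=7.0e10):
    return 1.0e3 * d_kpc * (m_dwarf_msun / (3.0 * M_host_msun))**(1.0 / 3.0)

def crossing_time_myr(r_half_pc, sigma_kms):
    return 0.98 * r_half_pc / sigma_kms   # 1 pc per (km/s) is ~0.98 Myr

print(f"{jacobi_radius_pc(40.0, 1.0e5):.0f} pc")    # tidal radius for a toy satellite
print(f"{crossing_time_myr(40.0, 5.0):.1f} Myr")    # internal crossing time
```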
*&^#Note added after initial publication: While I was writing this, a nice paper appeared on exactly this issue of the star formation history of a good number of ultrafaint dwarfs. They find that 80% of the stellar mass formed 12.48 ± 0.18 Gyr ago, so 12.5 was a good guess. Formally, at the one sigma level, this is a little after reionization, but only a tiny bit, so close enough: the bulk of the stars formed long ago, like a classical globular cluster, and these ultrafaints are consistent with being fossils.
Intriguingly, there is a hint of an age difference by kinematic grouping, with things that have been in the Milky Way being the oldest, those on first infall being a little younger (but still very old), and those infalling with the Large Magellanic Cloud a tad younger still. If so, then there is more to the story than quenching by cosmic reionization.
They also show a nice collection of images so you can see more examples. The ellipses trace out the half-light radii, so you can see the proclivity for many (not all!) of these objects to be elongated, perhaps as a result of tidal perturbation:
Figure 2 from Durbin et al. (2025): Footprints of all HST observations (blue filled patches) overlaid on DSS2 imaging cutouts. Open black ellipses show the galaxy profiles at one half-light radius.
There are many tensions in the era of precision cosmology. The most prominent, at present, is the Hubble tension – the difference between traditional measurements, which consistently obtain H0 = 73 km/s/Mpc, and the best fit* to the acoustic power spectrum of the cosmic microwave background (CMB) observed by Planck, H0 = 67 km/s/Mpc. There are others of varying severity that are less widely discussed. In this post, I want to talk about a persistent tension in the baryon density implied by the measured primordial abundances of deuterium and lithium+. Unlike the tension in H0, this problem is not nearly as widely discussed as it should be.
Framing
Part of the reason that this problem is not seen as an important tension has to do with the way in which it is commonly framed. In most discussions, it is simply the primordial lithium problem. Deuterium agrees with the CMB, so those must be right and lithium must be wrong. Once framed that way, it becomes a trivial matter specific to one untrustworthy (to cosmologists) observation. It’s a problem for specialists to sort out what went wrong with lithium: the “right” answer is otherwise known, so this tension is not real, making it unworthy of wider discussion. However, as we shall see, this might not be the right way to look at it.
It’s a bit like calling the acceleration discrepancy the dark matter problem. Once we frame it this way, it biases how we see the entire problem. Solving this problem becomes a matter of finding the dark matter. It precludes consideration of the logical possibility that the observed discrepancies occur because the force law changes on the relevant scales. This is the mental block I struggled mightily with when MOND first cropped up in my data; this experience makes it easy to see when other scientists succumb to it sans struggle.
Big Bang Nucleosynthesis (BBN)
I’ve talked about the cosmic baryon density here a lot, but I’ve never given an overview of BBN itself. That’s because it is well-established, and has been for a long time – I assume you, the reader, already know about it or are competent to look it up. There are many good resources for that, so I’ll only give the sketch necessary to the subsequent narrative – a sketch that will be too little for the experts, yet enough to set up a narrative that most experts are unaware of.
Primordial nucleosynthesis occurs in the first few minutes after the Big Bang when the universe is the right temperature and density to be one big fusion reactor. The protons and available neutrons fuse to form helium and other isotopes of the light elements. Neutrons are slightly more massive and less numerous than protons to begin with. In addition, free neutrons decay with a half-life of roughly ten minutes, so are outnumbered by protons when nucleosynthesis happens. The vast majority of the available neutrons pair up with protons and wind up in 4He while most of the protons remain on their own as the most common isotope of hydrogen, 1H. The resulting abundance ratio is one alpha particle for every dozen protons, or in terms of mass fractions&, Xp = 3/4 hydrogen and Yp = 1/4 helium. That is the basic composition with which the universe starts; heavy elements are produced subsequently in stars and supernova explosions.
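The arithmetic behind “one alpha particle for every dozen protons” is worth one line, assuming the standard neutron-to-proton ratio of roughly 1/7 by the time the reactions run:

```latex
\frac{n}{p} \approx \frac{1}{7}: \qquad
2\,n + 14\,p \;\longrightarrow\; {}^{4}\mathrm{He} + 12\,p,
\qquad Y_p = \tfrac{4}{16} = \tfrac{1}{4}, \quad X_p = \tfrac{12}{16} = \tfrac{3}{4}.
```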
Though 1H and 4He are by far the most common products of BBN, there are traces of other isotopes that emerge from BBN:
The time evolution of the relative numbers of light element isotopes through BBN. As the universe expands, nuclear reactions “freeze-out” and establish primordial abundances for the indicated species. The precise outcome depends on the baryon density, Ωb. This plot illustrates a particular choice of Ωb; different Ωb result in observationally distinguishable abundances. (Figures like this are so ubiquitous in discussions of the early universe that I have not been able to identify the original citation for this particular version.)
After hydrogen and helium, the next most common isotope to emerge from BBN is deuterium, 2H. It is the first thing made (one proton plus one neutron) but most of it gets processed into 4He, so after a brief peak, its abundance declines. How much it declines is very sensitive to Ωb: the higher the baryon density, the more deuterium gets gobbled up by helium before freeze-out. The following figure illustrates how the abundance of each isotope depends on Ωb:
“Schramm diagram” adapted from Cyburt et al. (2003) showing the abundance of 4He by mass fraction (top) and the number relative to hydrogen of deuterium (D = 2H), helium-3, and lithium as a function of the baryon-to-photon ratio. We measure the photon density in the CMB, so this translates directly to the baryon density$ Ωbh^2 (top axis).
If we can go out and measure the primordial abundances of these various isotopes, we can constrain the baryon density.
The Baryon Density
It works! Each isotope provides an independent estimate of Ωbh2, and they agree pretty well. This was the first and for a long time the only over-constrained quantity in cosmology. So while I am going to quibble about the exact value of Ωbh2, I don’t doubt that the basic picture is correct. There are too many details we have to get right in the complex nuclear reaction chains coupled to the decreasing temperature of a universe expanding at the rate required during radiation domination for this to be an accident. It is an exquisite success of the standard Hot Big Bang cosmology, albeit not one specific to LCDM.
Getting at primordial, rather than current, abundances is an interesting observational challenge too involved to go into much detail here. Suffice it to say that it can be done, albeit to varying degrees of satisfaction. We can then compare the measured abundances to the theoretical BBN abundance predictions to infer the baryon density.
The Schramm diagram with measured abundances (orange boxes) for the isotopes of the light elements. The thickness of the box illustrates the uncertainty: tiny for deuterium and large for 4He because of the large zoom on the axis scale. The lithium abundance could correspond to either low or high baryon density. 3He is omitted because its uncertainty is too large to provide a useful constraint.
Deuterium is considered the best baryometer because its relic abundance is very sensitive to Ωbh2: a small change in baryon density corresponds to a large change in D/H. In contrast, 4He is a great confirmation of the basic picture – the primordial mass fraction has to come in very close to 1/4 – but the precise value is not very sensitive to Ωbh2. Most of the neutrons end up in helium no matter what, so it is hard to distinguish# a few more from a few less. (Note the huge zoom on the linear scale for 4He. If we plotted it logarithmically with decades of range as we do the other isotopes, it would be a nearly flat line.) Lithium is annoying for being double-valued right around the interesting baryon density so that the observed lithium abundance can correspond to two values of Ωbh2. This behavior stems from the trade off with 7Be which is produced at a higher rate but decays to 7Li after a few months. For this discussion the double-valued ambiguity of lithium doesn’t matter, as the problem is that the deuterium abundance indicates Ωbh2 that is even higher than the higher branch of lithium.
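To give a quantitative feel for that sensitivity, here is a toy inversion. The power-law scaling and the numbers in it are rough values inserted purely for illustration – the actual constraint comes from the full BBN reaction network shown in the figures, not from this shortcut:

```python
# Toy illustration of why deuterium is such a sensitive baryometer.
# The scaling D/H ~ (Omega_b h^2)^-1.6 and its normalization are rough,
# illustrative numbers, NOT the real BBN network calculation.

def d_over_h(obh2, norm=2.5e-5, pivot=0.022, slope=-1.6):
    return norm * (obh2 / pivot) ** slope

for obh2 in (0.0125, 0.02, 0.022, 0.025):
    print(f"Obh2 = {obh2:.4f}  ->  D/H ~ {d_over_h(obh2):.1e}")
# A factor of two in baryon density changes D/H by roughly 3x, which is why
# a precise D/H measurement pins down Omega_b h^2 so tightly.
```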
BBN pre-CMB
The diagrams above and below show the situation in the 1990s before CMB estimates became available. Consideration of all the available data in the review of Walker et al. led to the value Ωbh2 = 0.0125 ± 0.0025. This value** was so famous that it was Known. It formed the basis of my predictions for the CMB for both LCDM and no-CDM. This prediction hinged on BBN being correct, and that we understood the experimental bounds on the baryon density. A few years after Walker’s work, Copi et al. provided the estimate++ 0.009 < Ωbh2 < 0.02. Those were the extreme limits of the time, as illustrated by the green box below:
The baryon density as it was known before detailed observations of the acoustic power spectrum of the CMB. BBN was a mature subject before 1990; the massive reviews of Walker et al. and Copi et al. creak with the authority of a solved problem. The controversial tension at the time was between the high and low deuterium measurements from Hogan and Tytler, which were at the extreme ends of the ranges indicated by the bulk of the data in the reviews.
Up until this point, the constraints on BBN had come mostly from helium observations in nearby galaxies and lithium measurements in metal poor stars. It was only just then becoming possible to obtain high quality spectra of sufficiently high redshift quasars to see weak deuterium lines associated with strongly damped primary hydrogen absorption in intergalactic gas along the line of sight. This is great: deuterium is the most sensitive baryometer, the redshifts were high enough to be early in the history of the universe close to primordial times, and the gas was in the middle of intergalactic nowhere so shouldn’t be altered by astrophysical processes. These are ideal conditions, at least in principle.
First results were binary. Craig Hogan obtained a high deuterium abundance, corresponding to a low baryon density. Really low. From my Walker et al.-informed confirmation bias, too low. It was a brand new result, so promising but probably wrong. Then Tytler and his collaborators came up with the opposite result: low deuterium abundance corresponding to a high baryon density: Ωbh2 = 0.019 ± 0.001. That seemed pretty high at the time, but at least it was within the bound Ωbh2 < 0.02 set by Copi et al. There was a debate between these high/low deuterium camps that ended in a rare act of intellectual honesty by a cosmologist when Hogan&& conceded. We seemed to have settled on the high end of the allowed range, just under Ωbh2 = 0.02.
Enter the CMB
CMB data started to be useful for constraining the baryon density in 2000 and improved rapidly. By that point, LCDM was already well-established, and I had published predictions for both LCDM and no-CDM. In the absence of cold dark matter, one expects a damping spectrum, with each peak lower than the one before it. For the narrow (factor of two) Known range of possible baryon densities, all the no-CDM models run together to essentially the same first-to-second peak ratio.
Peak locations measured by WMAP in 2003 (points) compared to the a priori (1999) predictions of LCDM (red tone lines) and no-CDM (blue tone lines). Models are normalized in amplitude around the first peak.
Adding CDM into the mix adds a driver to the oscillations. This fights the baryonic damping: the CDM is like a parent pushing a swing while the baryons are the kid dragging his feet. This combination makes just about any pattern of peaks possible. Not all free parameters are made equal: the addition of a single free parameter, ΩCDM, makes it possible to fit any plausible pattern of peaks. Without it (no-CDM means ΩCDM = 0), only the damping spectrum is allowed.
For BBN as it was known at the time, the clear difference was in the relative amplitude$$ of the first and second peaks. As can be seen above, the prediction for no-CDM was correct and that for LCDM was not. So we were done, right?
Of course not. To the CMB community, the only thing that mattered was the fit to the CMB power spectrum, not some obscure prediction based on BBN. Whatever the fit said was True; too bad for BBN if it didn’t agree.
The way to fit the unexpectedly small## second peak was to crank up the baryon density. To do that, Tegmark & Zaldarriaga (2000) needed 0.022 < Ωbh2 < 0.040. That’s the first blue point in the figure below. This was the first time that I heard it suggested that the baryon density could be so high.
The baryon density from deuterium (red triangles) before and after (dotted vertical line) estimates from the CMB (blue points). The horizontal dotted line is the pre-CMB upper limit of Copi et al.
The astute reader will note that the CMB-fit 0.022 < Ωbh2 < 0.040 sits entirely outside the BBN bounds 0.009 < Ωbh2 < 0.02. So we’re done, right? Well, no – the community simply ignored the successful a priori prediction of the no-CDM scenario. That was certainly easier than wrestling with its implications, and no one seems to have paused to contemplate why the observed peak ratio came in exactly at the one unique value that it could obtain in the case of no-CDM.
For a few years, the attitude seemed to be that BBN was close but not quite right. As the CMB data improved, the baryon density came down, ultimately settling on Ωbh2 = 0.0224 ± 0.0001. Part of the reason for this decline from the high initial estimate is covariance. In this case, the tilt plays a role: the baryon density declined as ns = 1 → 0.965 ± 0.004. Getting the second peak amplitude right takes a combination of both.
Now we’re back in the ballpark, almost: Ωbh2 = 0.0224 is not ridiculously far above the BBN limit Ωbh2 < 0.02. Close enough for Spergel et al. (2003) to say “The remarkable agreement between the baryon density inferred from D/H values and our [WMAP] measurements is an important triumph for the basic big bang model.” This was certainly true given the size of the error bars on both deuterium and the CMB at the time. It also elides*** any mention of either helium or lithium or the fact that the new Known was not consistent with the previous Known. Ωbh2 = 0.0224 was always the ally; Ωbh2 = 0.0125 was always the enemy.
Note, however, that deuterium made a leap from below Ωbh2 = 0.02 to above 0.02 exactly when the CMB indicated that it should do so. They iterated to better agreement and pretty much stayed there. Hopefully that is the correct answer, but given the history of the field, I can’t help worrying about confirmation bias. I don’t know if that is what’s going on, but if it were, this convergence over time is what it would look like.
Lithium does not concur
Taking the deuterium results at face value, there really is excellent agreement with the LCDM fit to the CMB, so I have some sympathy for the desire to stop there. Deuterium is the best baryometer, after all. Helium is hard to get right at a precise enough level to provide a comparable constraint, and lithium, well, lithium is measured in stars. Stars are tiny, much smaller than galaxies, and we know those are too puny to simulate.
Spite & Spite (1982) [those are names, pronounced “speet”; we’re not talking about spiteful stars] discovered what is now known as the Spite plateau, a level of constant lithium abundance in metal poor stars, apparently indicative of the primordial lithium abundance. Lithium is a fragile nucleus; it can be destroyed in stellar interiors. It can also be formed as the fragmentation product of cosmic ray collisions with heavier nuclei. Both of these things go on in nature, making some people distrustful of any lithium abundance. However, the Spite plateau is a sort of safe zone where neither effect appears to dominate. The abundance of lithium observed there is indeed very much in the right ballpark to be a primordial abundance, so that’s the most obvious interpretation.
Lithium indicates a lowish baryon density. Modern estimates are in the same range as BBN of old; they have not varied systematically with time. There is no tension between lithium and pre-CMB deuterium, but it disagrees with LCDM fits to the CMB and with post-CMB deuterium. This tension is both persistent and statistically significant (Fields 2011 describes it as “4–5σ”).
The baryon density from lithium (yellow symbols) over time. Stars are measurements in groups of stars on the Spite plateau; the square represents the approximate value from the ISM of the SMC.
I’ve seen many models that attempt to fix the lithium abundance, e.g., by invoking enhanced convective mixing via <<mumble mumble>> so that lithium on the surface of stars is subject to destruction deep in the stellar interior in a previously unexpected way. This isn’t exactly satisfactory – it should result in a mess, not a well-defined plateau – and other attempts I’ve seen to explain away the problem do so with at least as much contrivance. All of these models appeared after lithium became a problem; they’re clearly motivated by the biased assumption that the CMB is correct, so the discrepancy must be specific to lithium, and there must therefore be something weird about stars that explains it.
Another way to illustrate the tension is to use Ωbh2 from the Planck fit to predict what the primordial lithium abundance should be. The Planck-predicted band is clearly higher than and offset from the stars of the Spite plateau. There should be a plateau, sure, but it’s in the wrong place.
The lithium abundance in metal poor stars (points), the interstellar medium of the Small Magellanic Cloud (green band), and the primordial lithium abundance expected for the best-fit Planck LCDM. For reference, [Fe/H] = -3 means an iron abundance that is one one-thousandth that of the sun.
An important recent observation is that a similar lithium abundance is obtained in the metal poor interstellar gas of the Small Magellanic Cloud. That would seem to obviate any explanation based on stellar physics.
The Schramm diagram with the Planck CMB-LCDM value added (vertical line). This agrees well with deuterium measurements made after CMB data became available, but not with those before, nor with the measured abundance of lithium.
We can also illustrate the tension on the Schramm diagram. This version adds the best-fit CMB value and the modern deuterium abundance. These are indeed in excellent agreement, but they don’t intersect with lithium. The deuterium-lithium tension appears to be real, and comparable in significance to the H0 tension.
So what’s the answer?
I don’t know. The logical options are
A systematic error in the primordial lithium abundance
A systematic error in the primordial deuterium abundance
Physics beyond standard BBN
I don’t like any of these solutions. The data for both lithium and deuterium are what they are. As astronomical observations, both are subject to the potential for systematic errors and/or physical effects that complicate their interpretation. I am also extremely reluctant to consider modifications to BBN. There are occasional suggestions to this effect, but it is a lot easier to break than it is to fix, especially for what is a fairly small disagreement in the absolute value of Ωbh2.
I have left the CMB off the list because it isn’t part of BBN: its constraint on the baryon density is real, but involves completely different physics. It also involves different assumptions, i.e., the LCDM model and all its invisible baggage, while BBN is just what happens to ordinary nucleons during radiation domination in the early universe. CMB fits are corroborative of deuterium only if we assume LCDM, which I am not inclined to accept: deuterium disagreed with the subsequent CMB data before it agreed. Whether that’s just progress or a sign of confirmation bias, I also don’t know. But I do know confirmation bias has bedeviled the history of cosmology, and as the H0 debate shows, we clearly have not outgrown it.
The appearance of confirmation bias is augmented by the response time of each measured elemental abundance. Deuterium is measured using high redshift quasars; the community that does that work is necessarily tightly coupled to cosmology. Its response was practically instantaneous: as soon as the CMB suggested that the baryon density needed to be higher, conforming D/H measurements appeared. Indeed, I recall when that first high red triangle appeared in the literature, a colleague snarked to me “we can do that too!” In those days, those of us who had been paying attention were all shocked at how quickly Ωbh2 = 0.0125 ± 0.0025 was abandoned for literally double that value, Ωbh2 = 0.025 ± 0.001. That’s 4.6 sigma for those keeping score.
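For those keeping score at home, the 4.6 sigma is just the difference between the old and new central values in units of their combined uncertainty:

```python
# Significance of the jump in the adopted baryon density, treating the two
# quoted values as independent measurements with Gaussian errors.
old, old_err = 0.0125, 0.0025
new, new_err = 0.025, 0.001

sigma = abs(new - old) / (old_err**2 + new_err**2) ** 0.5
print(f"{sigma:.1f} sigma")   # -> 4.6 sigma
```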
The primordial helium abundance is measured in nearby dwarf galaxies. That community is aware of cosmology, but not as strongly coupled to it. Estimates of the primordial helium abundance have drifted upwards over time, corresponding to higher implied baryon densities. It’s as if confirmation bias is driving things towards the same result, but on a timescale that depends on the sociological pressure of the CMB imperative.
Fig. 8 from Steigman (2012) showing the history of primordial helium mass fraction (Yp) determinations as a function of time.
In this case, how we sample evidence depends on the field we’re active in. Lithium is measured in stars. One can have a productive career in stellar physics while entirely ignoring cosmology; it is the least likely to be perturbed by edicts from the CMB community. The inferred primordial lithium abundance has not budged over time.
What’s your confirmation bias?
I try not to succumb to confirmation bias, but I know that’s impossible. The best I can do is change my mind when confronted with new evidence. This is why I went from being sure that non-baryonic dark matter had to exist to taking seriously MOND as the theory that predicted what I observed.
I do try to look at things from all perspectives. Here, the CMB has been a roller coaster. Putting on an LCDM hat, the location of the first peak came in exactly where it was predicted: this was strong corroboration of a flat FLRW geometry. What does it mean in MOND? No idea – MOND doesn’t make a prediction about that. The amplitude of the second peak came in precisely as predicted for the case of no-CDM. This was corroboration of the ansatz inspired by MOND, and the strongest possible CMB-based hint that we might be barking up the wrong tree with LCDM.
As an exercise, I went back and maxed out the baryon density as it was known before the second peak was observed, to see whether LCDM could have accommodated what was about to be measured. We thought we already knew the LCDM parameters well enough to do this. We couldn’t: the amplitude of the second peak came as a huge surprise to LCDM; everyone acknowledged that at the time (if pressed; many simply ignored it). Nowadays this is forgotten, or people have gaslit themselves into believing this was expected all along. It was not.
From the perspective of no-CDM, we don’t really care whether deuterium or lithium hits closer to the right baryon density. All plausible baryon densities predict essentially the same A1:2 amplitude ratio. Once we admit CDM as a possibility, then the second peak amplitude becomes very sensitive to the mix of CDM and baryons. From this perspective, the lithium-indicated baryon density is unacceptable. That’s why it is important to have a test that is independent of the CMB. Both deuterium and lithium provide that, but they disagree about the answer.
Once we broke BBN to fit the second peak in LCDM, we were admitting (if not to ourselves) that the a priori prediction of LCDM had failed. Everything after that is a fitting exercise. There are enough free parameters in LCDM to fit any plausible power spectrum. Cosmologists are fond of saying there are thousands of independent multipoles, but that overstates the case: it doesn’t matter how finely we sample the wave pattern, it matters what the wave pattern is. That is not as over-constrained as it is made to sound. LCDM is, nevertheless, an excellent fit to the CMB data; the test then is whether the parameters of this fit are consistent with independent measurements. It was until it wasn’t; that’s why we face all these tensions now.
Despite the success of the prediction of the second peak, no-CDM gets the third peak wrong. It does so in a way that is impossible to fix short of invoking new physics. We knew that had to happen at some level; empirically that level occurs at L = 600. After that, it becomes a fitting exercise, just as it is in LCDM – only now, one has to invent a new theory of gravity in which to make the fit. That seems like a lot to ask, so while it remained as a logical possibility, LCDM seemed the more plausible explanation for the CMB if not dynamical data. From this perspective, that A1:2 came out bang on the value predicted by no-CDM must just be one heck of a cosmic fluke. That’s easy to accept if you were unaware of the prediction or scornful of its motivation; less so if you were the one who made it.
Either way, the CMB is now beyond our ability to predict. It has become a fitting exercise, the chief issue being what paradigm in which to fit it. In LCDM, the fit follows easily enough; the question is whether the result agrees with other data: are these tensions mere hiccups in the great tradition of observational cosmology? Or are they real, demanding some new physics?
The widespread attitude among cosmologists is that it will be impossible to fit the CMB in any way other than LCDM. That is a comforting thought (it has to be CDM!) and for a long time seemed reasonable. However, it has been contradicted by the success of Skordis & Zlosnik (2021) using AeST, which can fit the CMB as well as LCDM.
AeST is a very important demonstration that one does not need dark matter to fit the CMB. One does need other fields+++, so now the reality of those fields has to be examined. Where this show stops, nobody knows.
I’ll close by noting that the uniqueness claimed by the LCDM fit to the CMB is a property more correctly attributed to MOND in galaxies. It is less obvious that this is true because it is always possible to fit a dark matter model to data once presented with the data. That’s not science, that’s fitting French curves. To succeed, a dark matter model must “look like” MOND. It obviously shouldn’t do that, so modelers refuse to go there, and we continue to spin our wheels and dig the rut of our field deeper.
Note added in proof, as it were: I’ve been meaning to write about this subject for a long time, but hadn’t, in part because I knew it would be long and arduous. Being deeply interested in the subject, I had to slap myself repeatedly to refrain from spending even more time updating the plots with publication date as an axis: nothing has changed, so that would serve only to feed my OCD. Even so, it has taken a long time to write, which I mention because I had completed the vast majority of this post before the IAU announced on May 15 that Cooke & Pettini have been awarded the Gruber prize for their precision deuterium abundance. This is excellent work (it is one of the deuterium points in the relevant plot above), and I’m glad to see this kind of hard, real-astronomy work recognized.
The award of a prize is a recognition of meritorious work but is not a guarantee that it is correct. So this does not alter any of the concerns that I express here, concerns that I’ve expressed for a long time. It does make my OCD feel obliged to comment at least a little on the relevant observations, which is itself considerably involved, but I will tack on some brief discussion below, after the footnotes.
*These methods were in agreement before they were in tension, e.g., Spergel et al. (2003) state: “The agreement between the HST Key Project value and our [WMAP CMB] value, h = 0.72 ±0.05, is striking, given that the two methods rely on different observables, different underlying physics, and different model assumptions.”
+Here I mean the abundance of the primary isotope of lithium, 7Li. There is a different problem involving the apparent overabundance of 6Li. I’m not talking about that here; I’m talking about the different baryon densities inferred separately from the abundances of D/H and 7Li/H.
&By convention, X, Y, and Z are the mass fractions of hydrogen, helium, and everything else. Since the universe starts from a primordial abundance of Xp = 3/4 and Yp = 1/4, and stars are seen to have approximately that composition plus a small sprinkling of everything else (for the sun, Z ≈ 0.02), and since iron lines are commonly measured in stars to trace Z, astronomers fell into the habit of calling Z the metallicity even though oxygen is the third most common element in the universe today (by both number and mass). Since everything in the periodic table that isn’t hydrogen and helium is a small fraction of the mass, all the heavier elements are often referred to collectively as metals despite the unintentional offense to chemistry.
$The factor of h² appears because of the definition of the critical density ρc = 3H0²/(8πG): Ωb = ρb/ρc. The physics cares about the actual density ρb, but Ωbh² = 0.02 is a lot more convenient to write than ρb,now = 3.75 × 10⁻³¹ g/cm³.
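A quick numerical check of that conversion, using only the definition above with CGS constants typed in by hand (so expect rounding in the last digit):

```python
import math

# rho_c = 3 H0^2 / (8 pi G), with H0 expressed per unit h = H0/(100 km/s/Mpc).
G = 6.674e-8                     # cm^3 g^-1 s^-2
km_per_Mpc = 3.086e19            # kilometers in one megaparsec
H0_unit = 100.0 / km_per_Mpc     # 100 km/s/Mpc expressed in s^-1

rho_c_per_h2 = 3 * H0_unit**2 / (8 * math.pi * G)          # g/cm^3 per h^2
print(f"rho_c = {rho_c_per_h2:.2e} h^2 g/cm^3")            # ~1.9e-29
print(f"Obh^2 = 0.02  ->  rho_b = {0.02 * rho_c_per_h2:.2e} g/cm^3")   # ~3.8e-31
```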
#I’ve worked on helium myself, but was never able to do better than Yp = 0.25 ± 0.01. This corroborates the basic BBN picture, but does not suffice as a precise measure of the baryon density. To do that, one must obtain a result accurate to the third place of decimals, as discussed in the exquisite works of Kris Davidson, Bernie Pagel, Evan Skillman, and their collaborators. It’s hard to do for both observational reasons and because a wealth of subtle atomic physics effects come into play at that level of precision – helium has multiple lines; their parent population levels depend on the ionization mechanism, the plasma temperature, its density, and fluorescence effects as well as abundance.
**The value reported by Walker et al. was phrased as Ωbh50² = 0.05 ± 0.01, where h50 = H0/(50 km/s/Mpc); translating this to the more conventional h = H0/(100 km/s/Mpc) decreases these numbers by a factor of four and leads to the impression of more significant digits than were claimed. It is interesting to consider the psychological effect of this numerology. For example, the modern CMB best-fit value in this phrasing is Ωbh50² = 0.09, four sigma higher than the value Known from the combined assessment of the light isotope abundances (see the quick check after this footnote). That seems like a tension – not just involving lithium, but the CMB vs. all of BBN. Amusingly, the higher baryon density needed to obtain a CMB fit assuming LCDM is close to the threshold where we might have gotten away without the dynamical discrepancy (Ωm > Ωb) that motivated non-baryonic dark matter in the first place. (For further perspective at a critical juncture in the development of the field, see Peebles 1999).
The use of h50 itself is an example of the confirmation bias I’ve mentioned before as prevalent at the time, that Ωm = 1 and H0 = 50 km/s/Mpc. I would love to be able to do the experiment of sending the older cosmologists who are now certain of LCDM back in time to share the news with their younger selves who were then equally certain of SCDM. I suspect their younger selves would ask their older selves at what age they went insane, if they didn’t simply beat themselves up.
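And the quick check promised above – the h50-to-h conversion and the size of the resulting tension, using the numbers quoted in this footnote:

```python
# Omega_b * h50^2 (h50 = H0/50) is four times Omega_b * h^2 (h = H0/100),
# since (H0/50)^2 = 4 * (H0/100)^2.
walker, walker_err = 0.05, 0.01          # Walker et al., in the h50 phrasing
print(f"Walker et al.: Obh2 = {walker / 4:.4f} +/- {walker_err / 4:.4f}")

planck_obh2 = 0.0224                     # modern CMB best fit
planck_h50 = 4 * planck_obh2
print(f"Planck in the old phrasing: Obh50^2 = {planck_h50:.2f}")
print(f"Tension: {(planck_h50 - walker) / walker_err:.1f} sigma")
# -> 0.0125 +/- 0.0025, 0.09, and 4.0 sigma
```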
++Craig Copi is a colleague here at CWRU, so I’ve asked him about the history of this. He seemed almost apologetic, since the current “right” baryon density from the CMB now is higher than his upper limit, but that’s what the data said at the time. The CMB gives a more accurate value only once you assume LCDM, so perhaps BBN was correct in the first place.
&&Or succumbed to peer pressure, as that does happen. I didn’t witness it myself, so don’t know.
$$The absolute amplitude of the no-CDM model is too high in a transparent universe. Part of the prediction of MOND is that reionization happens early, causing the universe to be a tiny bit opaque. This combination came out just right for τ = 0.17, which was the original WMAP measurement. It also happens to be consistent with the EDGES cosmic dawn signal and the growing body of evidence from JWST.
##The second peak was unexpectedly small from the perspective of CDM; it was both natural and expected in no-CDM. At the time, it was computationally expensive to calculate power spectra, so people had pre-computed coarse grids within which to hunt for best fits. The range covered by the grids was informed by extant knowledge, of which BBN was only one element. From a dynamical perspective, Ωm > 0.2 was adopted as a hard limit that imposed an edge in the grids of the time. There was no possibility of finding no-CDM as the best fit because it had been excluded as a possibility from the start.
***Spergel et al. (2003) also say “the best-fit Ωbh2 value for our fits is relatively insensitive to cosmological model and dataset combination as it depends primarily on the ratio of the first to second peak heights (Page et al. 2003b)” which is of course the basis of the prediction I made using the baryon density as it was Known at the time. They make no attempt to test that prediction, nor do they cite it.
+++I’ve heard some people assert that this is dark matter by a different name, so is a success of the traditional dark matter picture rather than of modified gravity. That’s not at all correct. It’s just stage three in the list of reactions to surprising results identified by Louis Agassiz.
All of the figures below are from Cooke & Pettini (2018), which I employ here to briefly illustrate how D/H is measured. This is the level of detail I didn’t want to get into for either deuterium or helium or lithium, which are comparably involved.
First, here is a spectrum of the quasar they observe, Q1243+307. The quasar itself is not the object of interest here, though quasars are certainly interesting! Instead, we’re looking at the absorption lines along the line of sight; the quasar is being used as a spotlight to illuminate the gas between it and us.
Figure 1. Final combined and flux-calibrated spectrum of Q1243+307 (black histogram) shown with the corresponding error spectrum (blue histogram) and zero level (green dashed line). The red tick marks above the spectrum indicate the locations of the Lyman series absorption lines of the sub-DLA at redshift zabs = 2.52564. Note the exquisite signal-to-noise ratio (S/N) of the combined spectrum, which varies from S/N ≃ 80 near the Lyα absorption line of the sub-DLA (∼4300 Å) to S/N ≃ 25 at the Lyman limit of the sub-DLA, near 3215 Å in the observed frame.
The big hump around 4330 Å is Lyman α emission from the quasar itself. Lyα is the n = 2 to 1 transition of hydrogen, Lyβ is the n = 3 to 1 transition, and so on. The rest frame wavelength of Lyα is far into the ultraviolet at 1216 Å; we see it redshifted to z = 2.558. The rest of the spectrum is continuum and emission lines from the quasar with absorption lines from stuff along the line of sight. Note that the red end of the spectrum at wavelengths longer than 4400 Å is mostly smooth with only the occasional absorption line. Blueward of 4300 Å, there is a huge jumble. This is not noise, this is the Lyα forest. Each of those lines is absorption from hydrogen in clouds at different distances, hence different redshifts, along the line of sight.
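The wavelengths in play follow directly from λobs = λrest(1 + z); a quick check using the redshifts quoted above and in the figure caption:

```python
# Where Lyman alpha lands in the observed frame: lambda_obs = lambda_rest * (1 + z).
lya_rest = 1215.67      # Angstroms, rest-frame Lyman alpha

z_emission = 2.558      # the quasar's emission redshift quoted above
z_absorber = 2.52564    # the sub-DLA absorption redshift from the figure caption

print(f"Quasar Lya emission hump: {lya_rest * (1 + z_emission):.0f} A")   # -> 4325 A
print(f"Sub-DLA Lya absorption:   {lya_rest * (1 + z_absorber):.0f} A")   # -> 4286 A
```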
Most of the clouds in the Lyα forest are ephemeral. The cross section for Lyα is huge, so it takes very little hydrogen to gobble it up. Most of these lines represent very low column densities of neutral hydrogen gas. Once in a while though, one encounters a higher column density cloud that has enough hydrogen to be completely opaque to Lyα. These are damped Lyα systems. In damped systems, one can often spot the higher order Lyman lines (these are marked in red in the figure). It also means that there is enough hydrogen present to have a shot at detecting the slightly shifted Lyα line of deuterium. This is where the abundance ratio D/H is measured.
To measure D/H, one has not only to detect the lines, but also to model and subtract the continuum. This is a tricky business in the best of times, but here its importance is magnified by the huge difference between the primary Lyα line which is so strong that it is completely black and the deuterium Lyα line which is incredibly weak. A small error in the continuum placement will not matter to the measurement of the absorption by the primary line, but it could make a huge difference to that of the weak line. I won’t even venture to discuss the nonlinear difference between these limits due to the curve of growth.
Figure 2. Lyα profile of the absorption system at zabs = 2.52564 toward the quasar Q1243+307 (black histogram) overlaid with the best-fitting model profile (red line), continuum (long dashed blue line), and zero-level (short dashed green line). The top panels show the raw, extracted counts scaled to the maximum value of the best-fitting continuum model. The bottom panels show the continuum normalized flux spectrum. The label provided in the top left corner of every panel indicates the source of the data. The blue points below each spectrum show the normalized fit residuals, (data–model)/error, of all pixels used in the analysis, and the gray band represents a confidence interval of ±2σ. The S/N is comparable between the two data sets at this wavelength range, but it is markedly different near the high order Lyman series lines (see Figures 4 and 5). The red tick marks above the spectra in the bottom panels show the absorption components associated with the main gas cloud (Components 2, 3, 4, 5, 6, 8, and 10 in Table 2), while the blue tick marks indicate the fitted blends. Note that some blends are also detected in Lyβ–Lyε.
The above examples look pretty good. The authors make the necessary correction for the varying spectral sensitivity of the instrument, and take great care to simultaneously fit the emission of the quasar and the absorption. I don’t think they’ve done anything wrong; indeed, it looks like they did everything right – just as the people measuring lithium in stars have.
Still, as an experienced spectroscopist, I find some subtle details that make me queasy. There are two independent observations, which is awesome, and the data look almost exactly the same, a triumph of repeatability. The fitted models are nearly identical, but if you look closely, you can see the model cuts slightly differently along the left edge of the damped absorption around 4278 Å in the two versions of the spectrum, and again along the continuum towards the right edge.
These differences are small, so hopefully they don’t matter. But what is the continuum, really? The model line goes through the data, because what else could one possibly do? But there is so much Lyα absorption – is that really continuum? Should the continuum perhaps trace the upper envelope of the data? A physical effect that I worry about is that weak Lyα is so ubiquitous that we never see the true continuum, but rather continuum minus a tiny bit of extraordinarily weak (Gunn-Peterson) absorption. If the true continuum from the quasar is just a little higher, then the primary hydrogen absorption is unaffected but the weak deuterium absorption would go up a little. That means slightly higher D/H, which means lower Ωbh2, which is the direction in which the measurement would need to move to come into closer agreement with lithium.
Is the D/H measurement in error? I don’t know. I certainly hope not, and I see no reason to think it is. I do worry that it could be. The continuum level is one thing that could go wrong; there are others. My point is merely that we shouldn’t assume it has to be lithium that is in error.
An important check is whether the measured D/H ratio depends on metallicity or column density. It does not. There is no variation with metallicity as measured by the logarithmic oxygen abundance relative to solar (left panel below). Nor does it appear to depend on the amount of hydrogen in the absorbing cloud (right panel). In the early days of this kind of work there appeared to be a correlation, raising the specter of a systematic. That is not indicated here.
Figure 6. Our sample of seven high precision D/H measures (symbols with error bars); the green symbol represents the new measure that we report here. The weighted mean value of these seven measures is shown by the red dashed and dotted lines, which represent the 68% and 95% confidence levels, respectively. The left and right panels show the dependence of D/H on the oxygen abundance and neutral hydrogen column density, respectively. Assuming the Standard Model of cosmology and particle physics, the right vertical axis of each panel shows the conversion from D/H to the universal baryon density. This conversion uses the Marcucci et al. (2016) theoretical determination of the d(p,γ)3He cross-section. The dark and light shaded bands correspond to the 68% and 95% confidence bounds on the baryon density derived from the CMB (Planck Collaboration et al. 2016).
I’ll close by noting that Ωbh2 from this D/H measurement is indeed in very good agreement with the best-fit Planck CMB value. The question remains whether the physics assumed by that fit, baryons + non-baryonic cold dark matter + dark energy in a strictly FLRW cosmology, is the correct assumption to make.
I set out last time to discuss some of the tensions that persist in afflicting cosmic concordance, but didn’t get past the Hubble tension. Since then, I’ve come across more of that, e.g., Boubel et al (2024a), who use a variant of Tully-Fisher to obtain H0 = 73.3 ± 2.1(stat) ± 3.5(sys) km/s/Mpc. Having done that sort of work myself, I thought their systematic uncertainty term seemed large. I then came across Scolnic et al. (2024) who trace this issue back to one apparently erroneous calibration amongst many, and correct the results to H0 = 76.3 ± 2.1(stat) ± 1.5(sys) km/s/Mpc. Boubel is an author of the latter paper, so apparently agrees with this revision. Fortunately they didn’t go all Sandage-de Vaucouleurs on us, but even so, this provides a good example of how fraught this field can get. It also demonstrates the opportunity for confirmation bias, as the revised numbers are almost exactly what we ourselves find. (New results coming soon!)
It’s a dang mess.
The Hubble tension is only the most prominent of many persistent tensions, so let’s wade into some of the rest.
The persistent tension in the amplitude of the power spectrum
The tension that cosmologists seem to stress about most after the Hubble tension is that in σ8. σ8 quantifies the amplitude of the power spectrum; it is a measure of the rms fluctuation in mass in spheres of 8 h⁻¹ Mpc. Historically, this scale was chosen because early work by Peebles & Yu (1970) indicated that this was the scale on which the rms contrast in galaxy numbers* is unity. This is also a handy dividing line between linear and nonlinear regimes. On much larger scales, the fluctuations are smaller (a giant sphere is closer to the average for the whole universe) so can be treated in the limit of linear perturbation theory. Individual galaxies are “small” by this standard, so can’t be treated+ so simply, which is the excuse many cosmologists use to run shrieking from discussing them.
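For the uninitiated, here is what that definition amounts to operationally – a bare-bones sketch of the top-hat filtering with a made-up power-law P(k) standing in for the real power spectrum. The amplitude and slope are arbitrary choices to make the toy answer come out of order unity; nothing here is fit to data:

```python
import numpy as np

# sigma_R is the rms mass fluctuation in spheres of radius R:
#   sigma_R^2 = (1 / 2 pi^2) * Integral[ P(k) W(kR)^2 k^2 dk ],
# where W(x) = 3 (sin x - x cos x) / x^3 is the Fourier transform of a
# spherical top-hat window.

def tophat_window(x):
    return 3.0 * (np.sin(x) - x * np.cos(x)) / x**3

def sigma_R(R, pk, kmin=1e-4, kmax=1e2, n=20000):
    k = np.logspace(np.log10(kmin), np.log10(kmax), n)      # k in h/Mpc
    f = pk(k) * tophat_window(k * R)**2 * k**2
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(k))  # trapezoid rule
    return np.sqrt(integral / (2.0 * np.pi**2))

toy_pk = lambda k: 1.6e2 * k**-1.5     # placeholder spectrum, purely illustrative
print(f"toy sigma_8 = {sigma_R(8.0, toy_pk):.2f}")          # a number of order unity
```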
As we progressed from wrapping our heads around an expanding universe to quantifying the large scale structure (LSS) therein, the power spectrum statistically describing LSS became part of the canonical set of cosmological parameters. I don’t myself consider it to be on par with the Big Two, the Hubble constant H0 and the density parameter Ωm, but many cosmologists do seem partial to it despite the lack of phase information. Consequently, any tension in the amplitude σ8 garners attention.
The tension in σ8 has been persistent insofar as I recall debates in the previous century where some kinds of data indicated σ8 ~ 0.5 while other data preferred σ8 ~ 1. Some of that tension was in underlying assumptions (SCDM before LCDM). Today, the difference is [mostly] between the Planck best-fit amplitude σ8 = 0.811 ± 0.006 and various local measurements that typically yield 0.7something. For example, Karim et al. (2024) find low σ8 for emission line galaxies, even after specifically pursuing corrections in a necessary dust model that pushed things in the right direction:
Fig. 16 from Karim et al. (2024): Estimates of σ8 from emission line galaxies (red and blue), luminous red galaxies (grey), and Planck (green).
As with so many cosmic parameters, there is degeneracy, in this case between σ8 and Ωm. Physically this happens because you get more power when you have more stuff (Ωm), but the different tracers are sensitive to it in different ways. Indeed, if I put on a cosmology hat, I personally am not too worried about this tension – emission line galaxies are typically lower mass than luminous red galaxies, so one expects that there may be a difference in these populations. The Planck value is clearly offset from both, but doesn’t seem too far afield. We wouldn’t fret at all if it weren’t for Planck’s damnably small error bars.
This tension is also evident as a function of redshift. Here are measurements of the parameter combination fσ8 = Ωm(z)^γ σ8 compiled by Boubel et al (2024b):
Fig. 16 from Boubel et al (2024b). LCDM matches the data for σ8 = 0.74 (green line); the purple line is the expectation from Planck (σ8 = 0.81). The inset shows the error ellipse, which is clearly offset from the Planck value (crossed lines), particularly for the GR& value of γ = 0.55.
The line representing the Planck value σ8 = 0.81 overshoots most of the low redshift data, particularly those with the smallest uncertainties. The green line has σ8 = 0.74, so is a tad lower than Planck in the same sense as other low redshift measures. Again, the offset is modest, but it does look significant. The tension is persistent but not a show-stopper, so we generally shrug our shoulders and proceed as if it will inevitably work out.
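To connect the lines in that figure to the expression above: the GR value γ = 0.55 fixes the growth rate, and the two lines differ only in the σ8 that multiplies it. Here is a minimal sketch at z ≈ 0 for a flat LCDM background (Ωm = 0.3 is just a representative choice; a full calculation would also evolve σ8 with the linear growth factor):

```python
# f*sigma_8 at z ~ 0 using the combination quoted above: Omega_m(z)^gamma * sigma_8.
# A flat LCDM background with Omega_m = 0.3 is assumed purely for illustration.

def omega_m_of_z(z, om0=0.3):
    ez2 = om0 * (1 + z)**3 + (1 - om0)       # flat: Omega_Lambda = 1 - Omega_m
    return om0 * (1 + z)**3 / ez2

gamma = 0.55                                 # the GR value
for label, sigma8 in (("Planck-like", 0.81), ("low-z-like", 0.74)):
    fs8 = omega_m_of_z(0.0)**gamma * sigma8
    print(f"{label}: f*sigma_8(z=0) ~ {fs8:.2f}")
# -> roughly 0.42 vs 0.38: the ~10% offset between the two lines in the figure.
```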
The persistent tension in the cosmic mass density
A persistent tension that nobody seems to worry about is that in the density parameter Ωm. Fits to the Planck CMB acoustic power spectrum currently peg Ωm = 0.315±0.007, but as we’ve seen before, this covaries with the Hubble constant. Twenty years ago, WMAP indicated Ωm = 0.24 and H0 = 73, in good agreement with the concordance region of other measurements, both then and now. As with H0, the tension is posed by the itty bitty uncertainties on the Planck fit.
Experienced cosmologists may be inclined to scoff at such tiny error bars. I was, so I’ve confirmed them myself. There is very little wiggle room to match the Planck data within the framework of the LCDM model. I emphasize that last bit because it is an assumption now so deeply ingrained that it is usually left unspoken. If we leave that part out, then the obvious interpretation is that Planck is correct and all measurements that disagree with it must suffer from some systematic error. This seems to be what most cosmologists believe at present. If we don’t leave that part out, perhaps because we’re aware of other possibilities so are not willing to grant this assumption, then the various tensions look like failures of a model that’s already broken. But let’s not go there today, and stay within the conventional framework.
There are lots of ways to estimate the gravitating mass density of the universe. Indeed, it was the persistent, early observation that the mass density Ωm exceeded that in baryons, Ωb, from big bang nucleosynthesis that got the non-baryonic dark matter show on the road: there appears to be something out there gravitating that’s not normal matter. This was the key observation that launched non-baryonic cold dark matter: if Ωm > Ωb, there has% to be some kind of particle that is non-baryonic.
So what is Ωm? Most estimates have spanned the range 0.2 < Ωm < 0.4. In the 1980s and into the 1990s, this seemed close enough to Ωm = 1, by the standards of cosmology, that most Inflationary cosmologists presumed it would work out to what Inflation predicted, Ωm = 1 exactly. Indeed, I remember that community directing some rather vicious tongue-lashings at observers, castigating them to look harder: you will surely get Ωm = 1 if you do it right, you fools. But despite the occasional claim to get this “right” answer, the vast majority of the evidence never pointed that way. As I’ve related before, an important step on the path to LCDM – probably the most important step – was convincing everyone that really Ωm < 1.
Discerning between Ωm = 0.2 and 0.3 is a lot more challenging than determining that Ωm < 1, so we tend to treat either as acceptable. That’s not really fair in this age of precision cosmology. There are far too many estimates of the mass density to review here, so I’ll just note a couple of discrepant examples while also acknowledging that it is easy to find dynamical estimates that agree with Planck.
To give a specific example, Mohayaee & Tully (2005) obtained Ωm = 0.22 ± 0.02 by looking at peculiar velocities in the local universe. This was consistent with other constraints at the time, including WMAP, but is 4.5σ from the current Planck value. That’s not quite the 5σ we arbitrarily define to be an undeniable difference, but it’s plenty significant.
There have of course been other efforts to do this, and many of them lead to the same result, or sometimes even lower Ωm. For example, Shaya et al. (2022) use the Numerical Action Method developed by Peebles to attempt to work out the motions of nearly 10,000 galaxies – not just their Hubble expansion, but their individual trajectories under the mutual influence of each other’s gravity and whatever else may be out there. The resulting deviations from a pure Hubble flow depend on how much mass is associated with each galaxy and whatever other density there is to perturb things.
Fig. 4 from Shaya et al (2022): The gravitating mass density as a function of scale. After some local variations (hello Virgo cluster!), the data converge to Ωm = 0.12. Reaching Ωm = 0.24 requires an equal, additional amount of mass in “interhalo matter.” Even more mass would be required to reach the Planck value (red line added to original figure).
This result is in even greater tension with Planck than the earlier work by Mohayaee & Tully (2005). I find the need to invoke interhalo matter disturbing, since it acts as a pedestal in their analysis: extra mass density that is uniform everywhere. This is necessary so that it contributes to the global mass density Ωm but does not contribute to perturbing the Hubble flow.
One can imagine mass that is uniformly distributed easily enough, but what bugs me is that dark matter should not do this. There is no magic segregation between dark matter that forms into halos that contain galaxies and dark matter that just hangs out in the intergalactic medium and declines to participate in any gravitational dynamics. That’s not an option available to it: if it gravitates, it should clump. To pull this off, we’d need to live in a universe made of two distinct kinds of dark matter: cold dark matter that clumps and a fluid that gravitates globally but does not clump, sort of an anti-dark energy.
Alternatively, we might live in an underdense region such that the local Ωm is less than the global Ωm. This is an idea that comes and goes for one reason or another, but it has always been hard to sustain. The convergence to low Ωm looks pretty steady out to ~100 Mpc in the plot above; that’s a pretty big hole. Recall the non-linearity scale discussed above; this scale is a factor of ten larger, so over/under-densities should typically be ±10%. This one is -60%, so I guess we’d have to accept that we’re not Copernican observers after all.
The persistent tension in bulk flows
Once we get past the basic Hubble expansion, individual galaxies each have their own peculiar motion, and beyond that we have bulk flows. These have been around a long time. We obsessed a lot about them for a while with discoveries like the Great Attractor. It was weird; I remember some pundits talking about “plate tectonics” in the universe, like there were giant continents of galaxy superclusters wandering around in random directions relative to the frame of the microwave background. Many of us, including me, couldn’t grok this, so we chose not to sweat it.
There is no single problem posed by bulk flows^, and of course you can find those who argue they pose no problem at all. We are in motion relative to the cosmic (CMB) frame$, but that’s just our Milky Way’s peculiar motion. The strange fact is that it’s not just us; the entirety of the local universe seems to have an unexpected peculiar motion. There are lots of ways to quantify this; here’s a summary table from Courtois et al (2025):
Table 1 from Courtois et al (2025): various attempts to measure the scale of dynamical homogeneity.
As we look to large scales, we expect the universe to converge to homogeneity – that’s the Cosmological Principle, which is one of those assumptions that is so fundamental that we forget we made it. The same holds for dynamics – as we look to large scales, we expect the peculiar motions to average out, and converge to a pure Hubble flow. The table above summarizes our efforts to measure the scale on which this happens – or doesn’t. It also shows what we expect on the second line, “predicted LCDM,” where you can see the expected convergence in the declining bulk velocities as the scale probed increases. The third line is for “cosmic variance”; when you see these words, it usually means something is amiss, so in addition to the usual uncertainties we’re going to entertain the possibility that we live in an abnormal universe.
Like most people, I was comfortably ignoring this issue until recently, when we had a visit and a talk from one of the protagonists listed above, Richard Watkins (W23). One of the challenges in this sort of work is the need for a large sample of galaxies with complete sky coverage, which is observationally difficult to obtain. Real data are heterogeneous; handling this properly demands a more sophisticated treatment than the usual top-hat or Gaussian approaches. Watkins described in detail what a better way could be, and patiently endured the many questions my colleagues and I peppered him with. This is hard to do right, which gives aid and comfort to the inclination to ignore it. After hearing his talk, I don’t think we should do that.
Panel from Fig. 7 of Watkins et al. (2023): The magnitude of the bulk flow as a function of scale. The green points are the data and the red dashed line is the expectation of LCDM. The blue dotted line is an estimate of known systematic effects.
The data do not converge with increasing scale as expected. It isn’t just the local space density Ωm that’s weird, it’s also the way in which things move. And “local” isn’t at all small here, with the effect persisting out beyond 300 Mpc for any plausible h = H0/100.
This is formally a highly significant result, with the authors noting that “the probability of observing a bulk flow [this] large … is small, only about 0.015 per cent.” Looking at the figure above, I’d say that’s a fairly conservative statement. A more colloquial way of putting it would be “no way we gonna reconcile this!” That said, one always has to worry about systematics. They’ve made every effort to account for these, but there can always be unknown unknowns.
Mapping the Universe
It is only possible to talk about these things thanks to decades of effort to map the universe. One has to survey a large area of sky to identify galaxies in the first place, then do follow-up work to obtain redshifts from spectra. This has become big business, but to do what we’ve just been talking about, it is further necessary to separate peculiar velocities from the Hubble flow. To do that, we need to estimate distances by some redshift-independent method, like Tully-Fisher. Tully has been doing this his entire career, with the largest and most recent data product being Cosmicflows-4. Such data reveal not only large bulk flows, but extensive structure in velocity space:
We have a long way to go to wrap our heads around all of this.
Persistent tensions persist
I’ve discussed a few of the tensions that persist in cosmic data. Whether these are mere puzzles or a mounting pile of anomalies is a matter of judgement. They’ve been around for a while, so it isn’t fair to suggest that all of the data are consistent with LCDM. Nevertheless, I hear exactly this asserted with considerable frequency. It’s as if the definition of all is perpetually shrinking to include only the data that meet the consistency criterion. Yet it’s the discrepant bits that are interesting for containing new information; we need to grapple with them if the field is to progress.
*This was well before my time, so I am probably getting some aspect of the history wrong or oversimplifying it in some gross way. Crudely speaking, if you randomly plop down spheres of this size, some will be found to contain the cosmic average number of galaxies, some twice that, some half that. That the modern value of σ8 is close to unity means that Peebles got it basically right with the data that were available back then and that galaxy light very nearly traces mass, which is not guaranteed in a universe dominated by dark matter.
+Not all of us are limited to working on the simplest systems. In this case, it doesn’t matter. The LCDM prediction here is that galaxies should be complicated because they are nonlinear. But the observation is that they are simple – so simple that they obey a single effective force law. That’s the contradiction right there, regardless of what flavor of complicated might come out of some high resolution simulation.
++At one KITP conference I attended, a particle-cosmologist said during a discussion session, in all seriousness and with a straight face, “We should stop talking about rotation curves.” Because scientific truth is best revealed by ignoring the inconvenient bits. David Merritt remarked on this in his book A Philosophical Approach to MOND. He surveyed the available cosmology textbooks, and found that not a single one of them mentioned the acceleration scale in the data. I guess that would go some way to explaining why statements of basic observational facts are often met with stunned silence. What’s obvious and well-established to me is a wellspring of fresh if incredible news to them. I’d probably give them the stink-eye about the cosmological constant if I hadn’t been paying the slightest attention to cosmology for the past thirty years.
&There is an elegant approach to parameterizing the growth of structure in theories that deviate modestly from GR. In this context, such theories are usually invoked as an alternative to dark energy, because it is socially acceptable to modify GR to explain dark energy but not dark matter. The curious hysteresis of that strange and seemingly self-contradictory attitude aside, this approach cannot be adapted to MOND because it assumes linearity while MOND is inherently nonlinear. My very crude, back-of-the-envelope expectation for MOND is very nearly constant γ ~ 0.4 (depending on the scale probed) out to high redshift. The bend we see in the conventional models around z ~ 0.6 will occur at z > 2 (and probably much higher) because structure forms fast in MOND. It is annoyingly difficult to put a more precise redshift on this prediction because it also depends on the unknown metric. So this is more of a hunch than a quantitative prediction. Still, it will be interesting to see if roughly constant fσ8 persists to higher redshift.
%The inference that non-baryonic dark matter has to exist assumes that gravity is normal in the sense taught to us by Newton and Einstein. If some other theory of gravity applies, then one has to reassess the data in that context. This is one of the first considerations I made of MOND in the cosmological context, finding Ωm ≈ Ωb.
$Fun fact: you can type the name of a galaxy into NED (the NASA Extragalactic Database) and it will give you lots of information, including its recession velocity referenced to a variety of frames of reference and the corresponding distance from the Hubble law V = H0D. Naively, you might think that the obvious choice of reference frame is the CMB. You’d be wrong. If you use this, you will get the wrong distance to the galaxy. Of all the choices available there, it consistently performs the worst as adjudicated by direct distance measurements (e.g., Cepheids).
NED used to provide a menu of choices for the value of H0 to use. It says something about the social-tyranny of precision cosmology that it now defaults to the Planck value. If you use this, you will get the wrong distance to the galaxy. Even if the Planck H0 turns out to be correct in some global sense, it does not work for real galaxies that are relatively near to us. That’s what it means to have all the “local” measurements based on direct distance measurements (e.g., Cepheids) consistently give a larger H0.
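To make the point concrete, here is the arithmetic NED is doing, for a hypothetical galaxy with a CMB-frame recession velocity of 10,000 km/s; the velocity and the two values of H0 are inserted just for illustration:

```python
# Hubble-law distance D = V / H0 for an example galaxy, using the two
# contested values of H0 (hypothetical galaxy, round numbers).
V = 10000.0                                   # km/s

for label, H0 in (("Planck", 67.4), ("local, Cepheid-calibrated", 73.0)):
    print(f"H0 = {H0:4.1f} ({label}):  D = {V / H0:.0f} Mpc")
# -> ~148 Mpc vs ~137 Mpc: the Planck H0 puts the same galaxy ~8% farther away,
#    which is the sense in which local galaxies are closer than they appear.
```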
Galaxies in the local universe are closer than they appear. Photo by P.S. Pratheep, www.pratheep.com
I took the occasion of the NEIU debate to refresh my knowledge of the status of some of the persistent tensions in cosmology. There wasn’t enough time to discuss those, so I thought I’d go through a few of them here. These issues tend to get downplayed or outright ignored when we hype LCDM’s successes.
When I teach cosmology, I like to have the students do a project in which they each track down a measurement of some cosmic parameter, and then report back on it. The idea, when I started doing this back in 1999, was to combine the different lines of evidence to see if we reach a consistent concordance cosmology. Below is an example from the 2002 graduate course at the University of Maryland. Does it all hang together? I ask the students to debate the pros and cons of the various lines of evidence.
The mass density parameter Ωm = ρm/ρcrit and the Hubble parameter h = H0/(100 km/s/Mpc) from various constraints (colored lines) available in 2002. I later added the first (2003) WMAP result (box). The combination of results excludes the grey region; only the white portion is viable: this is the concordance region.
The concordance cosmology is the small portion of this diagram that was not ruled out. This is the way in which LCDM was established. Before we had either the CMB acoustic power spectrum or Type Ia supernovae, LCDM was pretty much a done deal based on a wide array of other astronomical evidence. It was the subsequentα agreement of the Type Ia SN and the CMB that cemented the picture in place.
The implicit assumption in this approach is that we have identified the correct cosmology by process of elimination: whatever is left over must be the right answer. But what if nothing is left over?
I have long worried that we’ve painted ourselves into a corner: maybe the concordance window is merely the least unlikely spot before everything is excluded. Excluding everything would effectively falsify LCDM cosmology, if not the more basic picture of an expanding universe% emerging from a hot big bang. Once one permits oneself to think this way, then it occurs to one that perhaps the reason we have to invoke the twin tooth fairies of dark matter and dark energy is to get FLRW to approximate some deeper, underlying theory.
Most cosmologists do not appear to contemplate this frightening scenario. And indeed, before we believe something so drastic, we have to have thoroughly debunked the standard picture – something rather difficult to do when 95% of it is invisible. It also means believing all the constraints that call the standard picture into question (hence why contradictory results experience considerably more scrutiny* than conforming results). The fact is that some results are more robust than others. The trick is deciding which to trust.^
In the diagram above, the range of Ωm from cluster mass-to-light ratios comes from some particular paper. There are hundreds of papers on this topic, if not thousands. I do not recall which one this particular illustration came from, but most of the estimates I’ve seen from the same method come in somewhat higher. So if we slide those green lines up, the allowed concordance window gets larger.
The practice of modern cosmology has necessarily been an exercise in judgement: which lines of evidence should we most trust? For example, there is a line up there for rotation curves. That was my effort to ask what combination of cosmological parameters led to dark matter halo densities that were tolerable to the rotation curve data of the time. Dense cosmologies give birth to dense dark matter halos, so everything above that line was excluded because those parameters cram too much dark matter into too little space. This was a pretty conservative limit at the time, but it is predicated on the insistence of theorists that dark matter halos had to have the NFW form predicted by dark matter-only simulations. Since that time, simulations including baryons have found any number of ways to alter the initial cusp. This in turn means that the constraint no longer applies as the halo might have been altered from its original, cosmology-predicted initial form. Whether the mechanisms that might cause such alterations are themselves viable becomes a separate question.
If we believed all of the available constraints, then there is no window left and FLRW is already ruled out. But not all of those data are correct, and some contradict each other, even absent the assumption of FLRW. So which do we believe? Finding one’s path in this field is like traipsing through an intellectual mine field full of hardened positions occupied by troops dedicated to this or that combination of parameters.
It is in every way an invitation to confirmation bias. The answer we get depends on how we weigh disparate lines of evidence. We are prone to give greater weight to lines of evidence that conform to our pre-established+ beliefs.
So, with that warning, let’s plunge ahead.
The modern Hubble tension
Gone but not yet forgotten are the Hubble wars between camps Sandage (H0 = 50!) and de Vaucouleurs (H0 = 100!). These were largely resolved early this century thanks to the Hubble Space Telescope Key Project on the distance scale. Obtaining this measurement was the major motivation to launch HST in the first place. Finally, this long-standing argument was resolved: nearly everyone agreed that H0 = 72 km/s/Mpc.
That agreement was long-lived by the standards of cosmology, but did not last forever. Here is an illustration of the time dependence of H0 measurements this century, from Freedman (2021):
There are many illustrations like this; I choose this one because it looks great and seems to have become the go-to for illustrating the situation. Indeed, it seems to inform the attitude of many scientists close to but not directly involved in the H0 debate. They seem to perceive this as a debate between Adam Riess and Wendy Freedman, who have become associated with the Cepheid and TRGB$ calibrations, respectively. This is a gross oversimplification, as they are not the only actors on a very big stage&. Even in this plot, the first Cepheid point is from Freedman’s HST Key Project. But this apparent dichotomy between calibrators and people seems to be how the subject is perceived by scientists who have neither time nor reason for closer scrutiny. Let’s scrutinize.
Fits to the acoustic power spectrum of the CMB agreed with astronomical measurements of H0 for the first decade of the century. Concordance was confirmed. The current tension appeared with the first CMB data from Planck. Suddenly the grey band of the CMB best-fit no longer overlapped with the blue band of astronomical measurements. This came as a shock. Then a new (red) band appears, distinguishing the “local” H0 calibrated by the TRGB from that calibrated by Cepheids.
I think I mentioned that cosmology was an invitation to confirmation bias. If you put a lot of weight on CMB fits, as many cosmologists do, then it makes sense from that perspective that the TRGB measurement is the correct one and the Cepheid H0 must be wrong. This is easy to imagine given the history of systematic errors that plagued the subject throughout the twentieth century. This confirmation bias makes one inclined to give more credence to the new# TRGB calibration, which is only in modest tension with the CMB value. The narrative is then simplified to two astronomical methods that are subject to systematic uncertainty: one that agrees with the right answer and one that does not. Ergo, the Cepheid H0 is in systematic error.
This narrative oversimplifies the matter to the point of being actively misleading, and the plot above abets this by focusing on only two of the many local measurements. There is no perfect way to do this, but I had a go at it last year. In the plot below, I cobbled together all the data I could without going ridiculously far back, but chose to show only one point per independent group, the most recent available from each. The idea is that the same people don’t get new votes every time they tweak their result, which is basically what is illustrated above. The most recent points from above are labeled Cepheids & TRGB (the date of the TRGB goes to the full Chicago-Carnegie paper, not Freedman’s summary paper where the above plot can be found). See McGaugh (2024) for the references.
When I first made this plot, I discovered that many measurements of the Hubble constant are not all that precise: the plot was an indecipherable forest of error bars. So I chose to make a cut at a statistical uncertainty of 3 km/s/Mpc: worse than that, the data are shown as open symbols sans error bars; better than that, the datum gets explicit illustration of both its statistical and systematic uncertainty. One could make other choices, but the point is that this choice paints a different picture from the choice made above. One of these local measurements is not like the others, inviting a different version of confirmation bias: the TRGB point is the outlier, so perhaps it is the one that is wrong.
Recent measurements of the Hubble constant (left) and the calibration of the baryonic Tully-Fisher relation (right) underpinning one of those measurements.
I highlight the measurement our group made not so much to note that we’ve done this too as to point out an underappreciated aspect of the apparent tension between Cepheid and TRGB calibrations. There are 50 galaxies that calibrate the baryonic Tully-Fisher relation, split nearly evenly between galaxies whose distance is known through Cepheids (blue points) and TRGB (red points). They give the same answer. There is no tension between Cepheids and the TRGB here.
Chasing this up, it appears to me that what happened was that Freedman’s group reanalyzed the data that calibrate the TRGB, and wound up with a slightly different answer. This difference does not appear to be in the calibration equation (the absolute magnitude of the tip of the red giant branch didn’t change that much), but in something to do with how the tip magnitude is extracted. Maybe, I guess? I couldn’t follow it all the way, and I got bad vibes reminding me of when I tried to sort through Sandage’s many corrections in the early ’90s. That doesn’t make it wrong, but the point is that the discrepancy is not between Cepheids and TRGB calibrations so much as it is between the TRGB as implemented by Freedman’s group and the TRGB as implemented by others. The depiction of the local Hubble constant debate as being between Cepheid and TRGB calibrations is not just misleading, it is wrong.
Can we get away from Cepheids and the TRGB entirely? Yes. The black points above are for megamasers and gravitational lensing. These are geometric methods that do not require intermediate calibrators like Cepheids at all. It’s straight trigonometry. Both indicate H0 > 70. Which way is our confirmation bias leaning now?
The way these things are presented has an impact on scientific consensus. A fascinating experiment on this has been done in a recent conference report. Sometimes people poll conference attendees in an attempt to gauge consensus; this report surveys conference attendees “to take a snapshot of the attitudes of physicists working on some of the most pressing questions in modern physics.” One of the topics queried is the Hubble tension. Survey says:
Table XII from arXiv:2503.15776 in which scientists at the 2024 conference Black Holes Inside and Out vote on their opinion about the most likely solution of the Hubble tension.
First, a shout out to the 1/4 of scientists who expressed no opinion. That’s the proper thing to do when you’re not close enough to a subject to make a well-informed judgement. Whether one knows enough to do this is itself a judgement call, and we often let our arrogance override our reluctance to over-share ill-informed opinions.
Second, a shout out to the folks who did the poll for including a line for systematics in the CMB. That is a logical possibility, even if only 3 of the 72 participants took it seriously. This corroborates the impression I have that most physicists seem to think the CMB is perfect like some kind of holy scripture written in fire on the primordial sky, so must be correct and cannot be questioned, amen. That’s silly; systematics are always a possibility in any observation of the sky. In the case of the CMB, I suspect it is not some instrumental systematic but the underlying assumption of LCDM FLRW that is the issue; once one assumes that, then indeed, the best fit to the Planck data as published is H0 = 67.4, with H0 > 68 being right out. (I’ve checked.)
A red flag that the CMB is where the problem lies is the systematic variation of the best-fit parameters along the trench of minimum χ2:
The time evolution of best-fit CMB cosmology parameters. These have steadily drifted away from the LCDM concordance window while the astronomical measurements that established it have not.
I’ve shown this plot and variations for other choices of H0 before, yet it never fails to come as a surprise when I show it to people who work closely on the subject. I’m gonna guess that extends to most of the people who participated in the survey above. Some red flags prove to be false alarms, some don’t, but one should at least be aware of them and take them into consideration when making a judgement like this.
The plurality (35%) of those polled selected “systematic error in supernova data” as the most likely cause of the Hubble tension. It is indeed a common attitude, as I mentioned above, that the Hubble tension is somehow a problem of systematic errors in astronomical data like back in the bad old days** of Sandage & de Vaucouleurs.
Let’s unpack this a bit. First, the framing: systematic error in supernova data is not the issue. There may, of course, be systematic uncertainties in supernova data, but that’s not a contender for what is causing the apparent Hubble tension. The debate over the local value of H0 is in the calibrators of supernovae. This is often expressed as a tension between Cepheid and TRGB calibrators, but as we’ve seen, even that is misleading. So posing the question this way is all kinds of revealing, including of some implicit confirmation bias. It’s like putting the right answer of a multiple choice question first and then making up some random alternatives.
So what do we learn from this poll for consensus? There is no overwhelming consensus, and the most popular choice appears to be ill-informed. This could be a meme. Tell me you’re not an expert on a subject by expressing an opinion as if you were.
The kicker here is that this was a conference on black hole physics. There seems to have been some fundamental gravitational and quantum physics discussed, which is all very interesting, but this is a community that is pretty far removed from the nitty-gritty of astronomical observations. There are many other polls reported in this conference report, many of them about esoteric aspects of black holes that I find interesting but would not myself venture an opinion on: it’s not my field. It appears that a plurality of participants at this particular conference might want to consider adopting that policy for fields beyond their own expertise.
I don’t want to be too harsh, but it seems like we are repeating the same mistakes we made in the 1980s. As I’ve related before, I came to astronomy from physics with the utter assurance that H0 had to be 50. It was Known. Then I met astronomers who were actually involved in measuring H0 and they were like, “Maybe it is ~80?” This hurt my brain. It could not be so! And yet they turned out to be correct within the uncertainties of the time. Today, similar strong opinions are being expressed by the same community (and sometimes by the same people) who were wrong then, so it wouldn’t surprise me if they are wrong now. Putting how they think things should be ahead of how they are is how they roll.
There are other tensions besides the Hubble tension, but I’ll get to them in future posts. This is enough for now.
αAs I’ve related before, I date the genesis of concordance LCDM to the work of Ostriker & Steinhardt (1995), though there were many other contributions leading to it (e.g., Efstathiou et al. 1990). Certainly many of us anticipated that the Type Ia SN experiments would confirm or deny this picture. Since the issue of confirmation bias is ever-present in cosmic considerations, it is important to understand this context: the acceleration of the expansion rate that is often depicted as a novel discovery in 1998 was an expected result. So much so that at a conference in 1997 in Aspen I recall watching Michael Turner badger the SN presenters to Proclaim Lambda already. One of the representatives from the SN teams was Richard Ellis, who wasn’t having it: the SN data weren’t there yet even if the attitude was. Amusingly, I later heard Turner claim to have been completely surprised by the 1998 discovery, as if he hadn’t been pushing for it just the year before. Aspen is a good venue for discussion; I commented at the time that the need to rehabilitate the cosmological constant was a big stop sign in the sky. He glared at me, and I’ve been on his shit list ever since.
%I will not be entertaining assertions that the universe is not expanding in the comments: that’s beyond the scope of this post.
*Every time a paper corroborating a prediction of MOND is published, the usual suspects get on social media to complain that the referee(s) who reviewed the paper must be incompetent. This is a classic case of admitting you don’t understand how the process works by disparaging what happened in a process to which you weren’t privy. Anyone familiar with the practice of refereeing will appreciate that the opposite is true: claims that seem extraordinary are consistently held to a higher standard.
^Note that it is impossible to exclude the act of judgement. There are approaches to minimizing this in particular experiments, e.g., by doing a blind analysis of large scale structure data. But you’ve still assumed a paradigm in which to analyze those data; that’s a judgement call. It is also a judgement call to decide to believe only large scale data and ignore evidence below some scale.
+I felt this hard when MOND first cropped up in my data for low surface brightness galaxies. I remember thinking How can this stupid theory get any predictions right when there is so much evidence for dark matter? It took a while for me to realize that dark matter really meant mass discrepancies. The evidence merely indicates a problem, the misnomer presupposes the solution. I had been working so hard to interpret things in terms of dark matter that it came as a surprise that once I allowed myself to try interpreting things in terms of MOND I no longer had to work so hard: lots of observations suddenly made sense.
$TRGB = Tip of the Red Giant Branch. Low metallicity stars reach a consistent maximum luminosity as they evolve up the red giant branch, providing a convenient standard candle.
&Where the heck is Tully? He seldom seems to get acknowledged despite having played a crucial role in breaking the tyranny of H0 = 50 in the 1970s, having published steadily on the topic, and his group continues to provide accurate measurements to this day. Do physics-trained cosmologists even know who he is?
#The TRGB was a well-established method before it suddenly appears on this graph. That it appears this way shortly after the CMB told us what answer we should get is a more worrisome potential example of confirmation bias, reminiscent of the situation with the primordial deuterium abundance.
**Aside from the tension between the TRGB as implemented by Freedman’s group and the TRGB as implemented by others, I’m not aware of any serious hint of systematics in the calibration of the distance scale. Can it still happen? Sure! But people are well aware of the dangers and watch closely for them. At this juncture, there is ample evidence that we may indeed have gotten past this.
Ha! I knew the Riess reference off the top of my head, but lots of people have worked on this so I typed “hubble calibration not a systematic error” into Google to search for other papers only to have its AI overview confidently assert
The statement that Hubble calibration is not a systematic error is incorrect
Google AI
That gave me a good laugh. It’s bad enough when overconfident underachievers shout about this from the wrong peak of the Dunning-Kruger curve without AI adding its recycled opinion to the noise, especially since its “opinion” is constructed from the noise.
The best search engine for relevant academic papers is NASA ADS; putting the same text in the abstract box returns many hits that I’m not gonna wade through. (A well-structured ADS search doesn’t read so casually; apparently the same still applies to Google.)
This is a quick post to announce that on Monday, April 7 there will be a virtual panel discussion about dark matter and MOND involving Scott Dodelson and myself. It will be moderated by Orin Harris at Northeastern Illinois University starting at 3pm US Central time*. I asked Orin if I should advertise it more widely, and he said yes – apparently their Zoom setup has a capacity for a thousand attendees.
I’ve been busy, and a bit exhausted, since the long series of posts on structure formation in the early universe. The thing I like about MOND is that it helps me understand – and successfully predict – the dynamics of galaxies. Specific galaxies that are real objects: one can observe this particular galaxy and predict that it should have this rotation speed or velocity dispersion. In contrast, LCDM simulations can only make statistical statements about populations of galaxy-like numerical abstractions; they can never be equated to real-universe objects. Worse, they obfuscate rather than illuminate. In MOND, the observed centripetal acceleration follows directly from that predicted by the observed distribution of stars and gas. In simulations, this fundamental observation is left unaddressed, and we are left grasping at straws trying to comprehend how the observed kinematics follow from an invisible, massive dark matter halo that starts with the NFW form but somehow gets redistributed just so by inadequately modeled feedback processes.
Simply put, I do not understand galaxy dynamics in terms of dark matter, and not for want of trying. There are plenty of people who claim to do so, but they appear to be fooling themselves. Nevertheless, what I don’t like about MOND is the same thing that they don’t like about MOND, which is that I don’t understand the basics of cosmology with it.
Specifically, what I don’t understand about cosmology in modified dynamics is the expansion history and the geometry. That’s a lot, but not everything. The early universe is fine: the expanding universe went through an early hot phase that bequeathed us the relic radiation field and the abundances of the light elements through big bang nucleosynthesis. There’s nothing about MOND that contradicts that, and arguably MOND is in better agreement with BBN than LCDM, there being no tension with the lithium abundance – this tension was not present in the 1990s, and was only imposed by the need to fit the amplitude of the second peak in the CMB.
But we’re still missing some basics that are well understood in the standard cosmology, and which are in good agreement with many (if not all) of the observations that lead us to LCDM. So I understand the reluctance to admit that maybe we don’t know as much about the universe as we think we do. Indeed, it provokes strong emotional reactions.
Screenshot from Dr. Strangelove paraphrasing Major Kong (original quote at top).
So, what might the expansion history be in MOND? I don’t know. There are some obvious things to consider, but I don’t find them satisfactory.
The Age of the Universe
Before I address the expansion history, I want to highlight some observations that pertain to the age of the universe. These provide some context that informs my thinking on the subject, and why I think LCDM hits pretty close to the mark in some important respects, like the time-redshift relation. That’s not to say I think we need to slavishly obey every detail of the LCDM expansion history when constructing other theories, but it does get some things right that need to be respected in any such effort.
One big thing I think we should respect is the set of constraints on the age of the universe. The universe can’t be younger than the objects in it. It could of course be older, but it doesn’t appear to be much older, as there are multiple, independent lines of evidence that all point to pretty much the same age.
Expansion Age: The first basic is that if the universe is expanding, it has a finite age. You can imagine running the expansion in reverse, looking back in time to when the universe was progressively smaller, until you reach an incomprehensibly dense initial phase. A very long time, to be sure, but not infinite.
To put an exact number on the age of the universe, we need to know its detailed expansion history. That is something LCDM provides that MOND does not pretend to do. Setting aside theory, a good ballpark age is the Hubble time, which is the inverse of the Hubble constant. This is how long it takes for a linearly expanding, “coasting” universe to get where it is today. For the measured H0 = 73 km/s/Mpc, the Hubble time is 13.4 Gyr. Keep that number in mind for later. This expansion age is the metric against which to compare the ages of measured objects, as discussed below.
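The unit conversion is trivial; here is a one-liner sketch (the constants are just kilometers-per-Megaparsec and seconds-per-Gyr):

```python
# Hubble time 1/H0 in Gyr for H0 in km/s/Mpc.
KM_PER_MPC = 3.086e19
S_PER_GYR = 3.156e16

H0 = 73.0                             # km/s/Mpc
print(KM_PER_MPC / H0 / S_PER_GYR)    # ~13.4 Gyr
```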
Globular Clusters: The most famous of age constraints is provided by the ancient stars in globular clusters. One of the great accomplishments of 20th century astrophysics is a masterful understanding of the physics of stars as giant nuclear fusion reactors. This allows us to understand how stars of different mass and composition evolve. That, in turn, allows us to put an age on the stars in clusters. Globulars are the oldest of clusters, with a mean age of 13.5 Gyr (Valcin et al. 2021). Other estimates are similar, though I note that the age determinations depend on the distance scale, so keeping them rigorously separate from Hubble constant determinations has historically been a challenge. The covariance of age and distance renders the meaning of error bars rather suspect, but to give a flavor, the globular cluster M92 is estimated to have an age of 13.80±0.75 Gyr (Jiaqi et al. 2023).
Though globular clusters are the most famous in this regard, there are other constraints on the age of the contents of the universe.
White dwarfs: White dwarfs are the remnants of dead stars that were never massive enough to have exploded as supernovae. The over/under line for that is about 8 solar masses; the oldest white dwarfs will be the remnants of the first stars that formed just below this threshold. Such stars don’t take long to evolve, around 100 Myr. That’s small compared to the age of the universe, so the first white dwarfs have just been cooling off ever since their progenitors burned out.
As the remnants of the incredibly hot cores of former stars, white dwarfs start off hot but cool quickly by radiating into space. The timescale to cool off can be crudely estimated from first principles just from the Stefan-Boltzmann law. As with so many situations in astrophysics, some detailed radiative transfer calculations are necessary to get the answer right in detail. But the ballpark of the back-of-the-envelope answer is not much different from the detailed calculation, giving some confidence in the procedure: we have a good idea of how long it takes white dwarfs to cool.
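For those who want to see the envelope, here is a minimal sketch of that crude estimate. Every number plugged in below is my own illustrative assumption (a 0.6 solar mass carbon white dwarf of roughly Earth size, a cool interior, and a surface temperature near the observed faint edge), not a value taken from any particular paper:

```python
# Crude white dwarf cooling time: ion thermal energy / blackbody luminosity.
import numpy as np

k_B, sigma, m_u = 1.381e-23, 5.670e-8, 1.661e-27   # SI units
M_sun, R_sun = 1.989e30, 6.957e8

M = 0.6 * M_sun      # assumed white dwarf mass
R = 0.01 * R_sun     # assumed radius (roughly Earth-sized)
A = 12               # assumed carbon composition
T_core = 5e6         # assumed interior temperature of an old, cool white dwarf [K]
T_eff = 4500         # assumed surface temperature near the faint edge [K]

E_thermal = 1.5 * (M / (A * m_u)) * k_B * T_core   # ~(3/2) N k T for the ions
L = 4 * np.pi * R**2 * sigma * T_eff**4            # Stefan-Boltzmann luminosity

print(E_thermal / L / 3.156e16)   # cooling time in Gyr: of order 10 Gyr
```

The detailed cooling models do much better than this, but the point is that the order of magnitude falls out of freshman physics.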
Since white dwarfs are not generating new energy but simply radiating into space, their luminosity fades over time as their surface temperature declines. This predicts that there will be a sharp drop in the numbers of white dwarfs corresponding to the oldest such objects: there simply hasn’t been enough time to cool further. The observational challenge then becomes finding the faint edge of the luminosity function for these intrinsically faint sources.
Despite the obvious challenges, people have done it, and after great effort, have found the expected edge. Translating that into an age, we get 12.5+1.4/-3.5 Gyr (Munn et al. 2017). This seems to hold up well now that we have Gaia data, which finds J1312-4728 to be the oldest known white dwarf at 12.41±0.22 Gyr (Torres et al. 2021). To get to the age of the universe, one does have to account for the time it takes to make a white dwarf in the first place, which is of order a Gyr or less, depending on the progenitor and when it formed in the early universe. This is pretty consistent with the ages of globular clusters, but comes from different physics: radiative cooling is the dominant effect rather than the hydrogen fusion budget of main sequence stars.
Radiochronometers: Some elements decay radioactively, so measuring their isotopic abundances provides a clock. Carbon-14 is a famous example: with a half-life of 5,730 years, its decay provides a great way to date the remains of prehistoric camp sites and bones. That’s great over some tens of thousands of years, but we need something with a half-life of order the age of the universe to constrain that. One such isotope is 232Th (thorium), with a half-life of 14.05 Gyr.
Making this measurement requires that we first find stars that are both ancient and metal poor but with detectable Thorium and Europium (the latter providing a stable reference). Then one has to obtain a high quality spectrum with which to do an abundance analysis. This is all hard work, but there are some examples known.
Sneden’s star, CS 22892-052, fits the bill. Long story short, the measured Th/Eu ratio gives an age of 12.8±3 Gyr (Sneden et al. 2003). A similar result of ~13 Gyr (Frebel & Kratz 2009) is obtained from 238U (this “stable” isotope of uranium has a half-life of 4.5 Gyr, as opposed to the kind that can be provoked into exploding, 235U, which has a half-life of 700 Myr). While the search for the first stars and the secrets they may reveal is ongoing, the ages for individual stars estimated from radioactive decay are consistent with the ages of the oldest globular clusters indicated by stellar evolution.
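The arithmetic behind these chronometers is just exponential decay. A minimal sketch (the production and observed ratios below are illustrative stand-ins, not the values from Sneden et al.):

```python
# Radioactive decay clock: age from how far Th/Eu has decayed from its initial value.
import numpy as np

t_half = 14.05                     # Gyr, half-life of 232Th (from the text)
tau = t_half / np.log(2)           # mean lifetime, ~20.3 Gyr

ratio_initial = 0.45               # assumed Th/Eu production ratio (illustrative)
ratio_observed = 0.24              # assumed measured Th/Eu ratio (illustrative)

print(tau * np.log(ratio_initial / ratio_observed))   # ~12.7 Gyr for these inputs
```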
Interstellar dust grains: The age of the solar system (4.56 Gyr) is well known from the analysis of isotopic abundances in meteorites. In addition to tracing the oldest material in the solar system, sometimes it is possible to identify dust grains of interstellar origin. One can do the same sort of analysis, and do the sum: how long did it take the star that made those elements to evolve, return them to the interstellar medium, get mixed in with the solar nebula, and lurk about in space until plunging to the ground as a meteorite that gets picked up by some scientifically-inclined human. This exercise has been done by Nittler et al. (2008), who estimate a total age of 13.7±1.3 Gyr.
Taken in sum, all these different age indicators point to a similar, consistent age between 13 and 14 billion years. It might be 12, but not lower, nor is there reason to think it would be much higher: 15 is right out. I say that flippantly because I couldn’t resist the Monty Python reference, but the point is serious: you could in principle have a much older universe, but then why are all the oldest things pretty much the same age? Why would the universe sit around doing nothing for billions of years then suddenly decide to make lots of stars all at once? The more obvious interpretation is that the age of the universe is indeed in the ballpark of 13.something Gyr.
Expansion history
The expansion history in the standard FLRW universe is governed by the Friedmann equation, which we can write* as
$$H^2(z) = H_0^2\left[\Omega_m(1+z)^3 + \Omega_k(1+z)^2 + \Omega_\Lambda\right]$$
where z is the redshift, H(z) is the Hubble parameter, H0 is its current value, and the various Ω are the mass-energy density of stuff relative to the critical density: the mass density Ωm, the geometry Ωk, and the cosmological constant ΩΛ. I’ve neglected radiation for clarity. One can make up other stuff X and add a term for it as ΩX which will have an associated (1+z) term that depends on the equation of state of X. For our purposes, both normal matter and non-baryonic cold dark matter (CDM) share the same equation of state (cold meaning non-relativistic motions, i.e., rest-mass density but negligible pressure), so both contribute to the mass density Ωm = Ωb+ΩCDM.
Note that since H(z=0)=H0, the various Ω’s have to sum to unity. Thus a cosmology is geometrically flat with the curvature term Ωk = 0 if Ωm+ΩΛ = 1. Vanilla LCDM has Ωm = 0.3 and ΩΛ = 0.7. As a community, we’ve become very sure of this, but that the Friedmann equation is sufficient to describe the expansion history of the universe is an assumption based on (1) General Relativity providing a complete description, and (2) the cosmological principle (homogeneity and isotropy) holding. These seem like incredibly reasonable assumptions, but let’s bear in mind that we only know directly about 5% of the sum of Ω’s, the baryons. ΩCDM = 0.25 and ΩΛ = 0.7 are effectively fudge factors we need to make things work out given the stated assumptions. LCDM is viable if and only if cold dark matter actually exists.
Gravity is an attractive force, so the mass term Ωm acts to retard the expansion. Early on, we expected this to be the dominant term due to the (1+z)3 dependence. In the long-presumed+ absence of a cosmological constant, cosmology was the search for two numbers: once H0 and Ωm are specified, the entire expansion history is known. Such a universe can only decelerate, so only the region below the straight line in the graph below is accessible; an expansion history like the red one representing LCDM should be impossible. That lots of different data seemed to want this is what led us kicking and screaming to rehabilitate the cosmological constant, which acts as a form of anti-gravity to accelerate an expansion that ought to be decelerating.
The expansion factor maps how the universe has grown over time; it corresponds to 1/(1+z) in redshift so that z → ∞ as t → 0. The “coasting” limit of an empty universe (H0 = 73, Ωm = ΩΛ = 0) that expands linearly is shown as the straight line. The red line is the expansion history of vanilla LCDM (H0 = 70, Ωm = 0.3, ΩΛ = 0.7).
The over/under between acceleration/deceleration of the cosmic expansion rate is the coasting universe. This is the conceptually useful limit of a completely empty universe with Ωm = ΩΛ = 0. It expands at a steady rate that neither accelerates nor decelerates. The Hubble time is exactly equal to the age of such a universe, i.e., 13.4 Gyr for H0 = 73.
LCDM has a more complicated expansion history. The mass density dominates early on, so there is an early phase of deceleration – the red curve bends to the right. At late times, the cosmological constant begins to dominate, reversing the deceleration and transforming it into an acceleration. The inflection point when it switches from decelerating to accelerating is not too far in the past, which is a curious coincidence given that the entire future of such a universe will be spent accelerating towards the exponential expansion of the de Sitter limit. Why do we live anywhen close to this special time?
Lots of ink has been spilled on this subject, and the answer seems to boil down to the anthropic principle. I find this lame and won’t entertain it further. I do, however, want to point out a related strange coincidence: the current age of vanilla LCDM (13.5 Gyr) is the same as that of a coasting universe with the locally measured Hubble constant (13.4 Gyr). Why should these very different models be so close in age? LCDM decelerates, then accelerates; there’s only one moment in the expansion history of LCDM when the age is equal to the Hubble time, and we happen to be living just then.
This coincidence problem holds for any viable set of LCDM parameters, as they all have nearly the same age. Planck LCDM has an age of 13.7 Gyr, still basically the same as the Hubble time for the locally measured Hubble constant. The lower Planck Hubble value is balanced by a larger amount of early-time deceleration. The universe reaches its current point after 13.something Gyr in all of these models. That’s in good agreement with the ages of the oldest observed stars, which is encouraging, but it does nothing to help us resolve the Hubble tension, much less constrain alternative cosmologies.
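If you want to check these ages for yourself, the arithmetic is a one-integral affair. Here is a minimal sketch (my own illustration, not anyone’s production code) that integrates the Friedmann equation above to get the current age of a model universe:

```python
# Age of an FLRW model universe: t0 = (1/H0) * integral_0^1 da / [a E(a)],
# where E(a) = sqrt(Om/a^3 + Ok/a^2 + OL). Radiation is neglected, as in the text.
import numpy as np
from scipy.integrate import quad

INV_H0_GYR = 977.8   # 977.8/H0 gives the Hubble time in Gyr for H0 in km/s/Mpc

def age_gyr(H0, Om, OL):
    Ok = 1.0 - Om - OL   # curvature fixed by requiring the Omegas to sum to unity
    E = lambda a: np.sqrt(Om / a**3 + Ok / a**2 + OL)
    integral, _ = quad(lambda a: 1.0 / (a * E(a)), 0.0, 1.0)
    return INV_H0_GYR / H0 * integral

print(age_gyr(73, 0.0, 0.0))   # coasting universe: ~13.4 Gyr (the Hubble time)
print(age_gyr(70, 0.3, 0.7))   # vanilla LCDM: ~13.5 Gyr
```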
Cosmic expansion in MOND
There is no equivalent to the Friedmann equation in MOND. This is not satisfactory. As an extension of Newtonian theory, MOND doesn’t claim to encompass cosmic phenomena$ – hence the search for a deeper underlying theory. Lacking this, what can we try?
Felten (1984) tried to derive an equivalent to the Friedmann equation using the same trick that can be used with Newtonian theory to recover the expansion dynamics in the absence of a cosmological constant. This did not work. The result was unsatisfactory& for application to the whole universe because the presence of a0 in the equations makes the result scale-dependent. So how big the universe is matters in a way that it does not in the standard cosmology; there’s no way to generalize it to describe the whole enchilada.
In retrospect, what Felten had really obtained was a solution for the evolution of a top-hat over-density: the dynamics of a spherical region embedded in an expanding universe. This result is the basis for the successful prediction of early structure formation in MOND. But once again it only tells us about the dynamics of an object within the universe, not the universe itself.
In the absence of a complete theory, one makes an ansatz to proceed. If there is a grander theory that encompasses both General Relativity and MOND, then it must approach both in the appropriate limit, so an obvious ansatz to make is that the entire universe obeys the conventional Friedmann equation while the dynamics of smaller regions in the low acceleration regime obey MOND. Both Bob Sanders and I independently adopted this approach, and explicitly showed that it was consistent with the constraints that were known at the time. The first obvious guess for the mass density of such a cosmology is Ωm = Ωb = 0.04. (This was the high end of BBN estimates at the time, so back then we also considered lower values.) The expansion history of this low density, baryon-only universe is shown as the blue line below:
As above, but with the addition of a low density, baryon-dominated, no-CDM universe (H0 = 73, Ωm = Ωb = 0.04, ΩΛ = 0; blue line).
As before, there is not much to choose between these models in terms of age. The small but non-zero mass density does cause some early deceleration before the model approaches the coasting limit, so the current age is a bit lower: 12.6 Gyr. This is on the small side, but not problematically so, or even particularly concerning given the history of the subject. (I’m old enough to remember when we were pretty sure that globular clusters were 18 Gyr old.)
The time-redshift relation for the no-CDM, baryon-only universe is somewhat different from that of LCDM. If we adopt it, then we find that MOND-driven structure forms at somewhat higher redshift than with the LCDM time-redshift relation. The benchmark time of 500 Myr for L* galaxy formation is reached at z = 15 rather than z = 9.5 as in LCDM. This isn’t a huge difference, but it does mean that an L* galaxy could in principle appear even earlier than so far seen. I’ve stuck with LCDM as the more conservative estimate of the time-redshift relation, but the plain fact is we don’t really know what the universe is doing at those early times, or if the ansatz we’ve made holds well enough to do this. Surely it must fail at some point, and it seems likely that we’re past that point.
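The same integral gives the time-redshift relation, so the z = 15 versus z = 9.5 statement is easy to check. A self-contained sketch (the parameters follow the models plotted above; the 500 Myr benchmark is the one quoted in the text):

```python
# When does each model universe reach an age of 500 Myr?
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def age_gyr(z, H0, Om, OL):
    """Age of the universe at redshift z, in Gyr, for H0 in km/s/Mpc."""
    Ok = 1.0 - Om - OL
    E = lambda a: np.sqrt(Om / a**3 + Ok / a**2 + OL)
    integral, _ = quad(lambda a: 1.0 / (a * E(a)), 0.0, 1.0 / (1.0 + z))
    return 977.8 / H0 * integral

for label, pars in [("LCDM", (70, 0.3, 0.7)), ("no-CDM", (73, 0.04, 0.0))]:
    z500 = brentq(lambda z: age_gyr(z, *pars) - 0.5, 1.0, 50.0)
    print(f"{label}: t0 = {age_gyr(0, *pars):.1f} Gyr, reaches 0.5 Gyr at z = {z500:.1f}")
# Expect roughly 13.5 Gyr and z ~ 9.5 for LCDM; 12.6 Gyr and z ~ 15 for no-CDM.
```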
There is a bigger problem with the no-CDM model above. Even if it is close to the right expansion history, it has a very large negative curvature. The geometry is nowhere close to the flat Robertson-Walker metric indicated by the angular diameter distance to the surface of last scattering (the CMB).
Geometry
Much of cosmology is obsessed with geometry, so I will not attempt to do the subject justice. Each set of FLRW parameters has a specific geometry that comes hand in hand with its expansion history. The most sensitive probe we have of the geometry is the CMB. The a priori prediction of LCDM was that its flat geometry required the first acoustic peak to have a maximum near one degree on the sky. That’s exactly what we observe.
Fig. 45 from Famaey & McGaugh (2012): The acoustic power spectrum of the cosmic microwave background as observed by WMAP [229] together with the a priori predictions of ΛCDM (red line) and no-CDM (blue line) as they existed in 1999 [265] prior to observation of the acoustic peaks. ΛCDM correctly predicted the position of the first peak (the geometry is very nearly flat) but over-predicted the amplitude of both the second and third peak. The most favorable a priori case is shown; other plausible ΛCDM parameters [468] predicted an even larger second peak. The most important parameter adjustment necessary to obtain an a posteriori fit is an increase in the baryon density Ωb, above what had previously been expected from BBN. In contrast, the no-CDM model ansatz made as a proxy for MOND successfully predicted the correct amplitude ratio of the first to second peak with no parameter adjustment [268, 269]. The no-CDM model was subsequently shown to under-predict the amplitude of the third peak [442], so no model can explain these data without post-hoc adjustment.
In contrast, no-CDM made the correct prediction for the first-to-second peak amplitude ratio, but it is entirely ambivalent about the geometry. FLRW cosmology and MOND dynamics care about incommensurate things in the CMB data. That said, the naive prediction of the baryon-only model outlined above is that the first peak should occur around where the third peak is observed. That is obviously wrong.
Since the geometry is not a fundamental prediction of MOND, the position of the first peak is easily fit by invoking the same fudge factor used to fit it conventionally: the cosmological constant. We need a larger ΩΛ = 0.96, but so what? This parameter merely encodes our ignorance: we make no pretense to understand it, let alone vesting deep meaning in it. It is one of the things that a deeper theory must explain, and can be considered as a clue in its development.
So instead of a baryon-only universe, our FLRW proxy becomes a Lambda-baryon universe. That fits the geometry, and for an optical depth to the surface of last scattering of τ = 0.17, matches the amplitude of the CMB power spectrum and correctly predicts the cosmic dawn signal that EDGES claimed to detect. Sounds good, right? Well, not entirely. It doesn’t fit the CMB data at L > 600, but I expected only to get so far with the no-CDM ansatz, so it doesn’t bother me that you need a better underlying theory to fit the entire CMB. Worse, to my mind, is that the Lambda-baryon proxy universe is much, much older than everything in it: 22 Gyr instead of 13.something.
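For the record, that enormous age is just what the standard closed-form age of a flat matter-plus-Lambda universe gives when Lambda is cranked up this high (my own quick check, plugging in the proxy parameters above):

$$t_0 = \frac{2}{3 H_0 \sqrt{\Omega_\Lambda}}\,\sinh^{-1}\!\sqrt{\frac{\Omega_\Lambda}{\Omega_m}} \approx \frac{1.6}{H_0} \quad \text{for } \Omega_m = 0.04,\ \Omega_\Lambda = 0.96,$$

i.e., roughly 21 to 22 Gyr for H0 in the low 70s, compared to about 0.96/H0 ≈ 13.5 Gyr for vanilla LCDM.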
As above, but now with the addition of a low density, Lambda-dominated universe (H0 = 73, Ωm = Ωb = 0.04, ΩΛ = 0.96; dashed line).
This just don’t seem right. Or even close to right. Like, not even pointing in a direction that might lead to something that had a hope of being right.
Moreover, we have a weird tension between the baryon-only proxy and the Lambda-baryon proxy cosmology. The baryon-only proxy has a plausible expansion history but an unacceptable geometry. The Lambda-baryon proxy has a plausible geometry but an implausible expansion history. Technically, yes, it is OK for the universe to be much older than all of its contents, but it doesn’t make much sense. Why would the universe do nothing for 8 or 9 Gyr, then burst into a sudden frenzy of activity? It’s as if Genesis read “for the first 6 Gyr, God was a complete slacker and did nothing. In the seventh Gyr, he tried to pull an all-nighter only to discover it took a long time to build cosmic structure. Then He said ‘Screw it’ and fudged Creation with MOND.”
In the beginning the Universe was created. This has made a lot of people very angry and been widely regarded as a bad move.
So we can have a plausible geometry or we can have a plausible expansion history with a proxy FLRW model, but not both. That’s unpleasant, but not tragic: we know this approach has to fail somehow. But I had hoped for FLRW to be a more coherent first approximation to the underlying theory, whatever it may be. If there is such a theory, then both General Relativity and MOND are its limits in their respective regimes. As such, FLRW ought to be a good approximation to the underlying entity up to some point. That we have to invoke both non-baryonic dark matter and a cosmological constant is a hint that we’ve crossed that point. But I would have hoped that we crossed it in a more coherent fashion. Instead, we seem to get a little of this for the expansion history and a little of that for the geometry.
*There are other ways to write the Friedmann equation, but this is a useful form here. For the mathematically keen, the Hubble parameter is the time derivative of the expansion factor normalized by the expansion factor, which in terms of redshift is
$$H(z) = -\frac{dz/dt}{1+z}.$$
This quantity evolves, leading us to expect evolution in Milgrom’s constant if we associate it with the numerical coincidence
$$2\pi a_0 = c H_0$$
If the Hubble parameter evolves, as it appears to do, it would seem to follow that so should a0, with a0(z) ~ H(z) – otherwise the coincidence is just that: a coincidence that applies only now. There is, at present, no persuasive evidence that a0 evolves with redshift.
A similar order-of-magnitude association can be made with the cosmological constant,
$$2\pi a_0 = c^2 \Lambda^{1/2}$$
so conceivably the MOND acceleration scale appears as the result of vacuum effects. It is a matter of judgement whether these numerical coincidences are mere coincidences or profound clues towards a deeper theory. That the proportionality constant is very nearly 2π is certainly intriguing, but the constancy of any of these parameters (including Newton’s G) depends on how they emerge from the deeper theory.
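These coincidences are easy to check numerically. A quick sketch (the relation Λ = 3ΩΛH0²/c² is the standard FLRW conversion and is my assumption for how to put a number on Λ here):

```python
# Numerical check of the a0 coincidences quoted above.
import numpy as np

c = 2.998e8                       # m/s
H0 = 73e3 / 3.086e22              # 73 km/s/Mpc converted to 1/s
Lam = 3 * 0.7 * H0**2 / c**2      # cosmological constant in 1/m^2, assuming Omega_Lambda ~ 0.7

print(c * H0 / (2 * np.pi))               # ~1.1e-10 m/s^2
print(c**2 * np.sqrt(Lam) / (2 * np.pi))  # ~1.6e-10 m/s^2
# Both land within a factor of ~2 of Milgrom's a0 ~ 1.2e-10 m/s^2.
```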
+In January 2019, I was attending a workshop at Princeton when I had a chance encounter with Jim Peebles. He was not attending the workshop, but happened to be walking across campus at the same time I was. We got to talking, and he affirmed my recollection of just how incredibly unpopular the cosmological constant used to be. Unprompted, he went on to make the analogy of how similar that seemed to how unpopular MOND is now.
Peebles was awarded a long-overdue Nobel Prize later that year.
$This is one of the things that makes it tricky to compare LCDM and MOND. MOND is a theory of dynamics in the limit of low acceleration. It makes no pretense to be a cosmological theory. LCDM starts as a cosmological theory, but it also makes predictions about the dynamics of systems within it (or at least the dark matter halos in which visible galaxies are presumed to form). So if one starts by putting on a cosmology hat, there is nothing to talk about: LCDM is the only game in town. But from the perspective of dynamics, it’s the other way around, with LCDM repeatedly failing to satisfactorily explain, much less anticipate, phenomena that MOND predicted correctly in advance.
&An intriguing thing about Felten’s MOND universe is that it eventually recollapses irrespective of the mass density. There is no critical value of Ωm, hence no coincidence problem. MOND is strong enough to eventually reverse the expansion of the universe, it just takes a very long time to do so, depending on the density.
I’m surprised this aspect of the issue was overlooked. The coincidence problem (then mostly called the flatness problem) obsessed people at the time, so much so that its solution by Cosmic Inflation led to its widespread acceptance. That only works if Ωm = 1; LCDM makes the coincidence worse. I guess the timing was off, as Inflation had already captured the community’s imagination by that time, likely making it hard to recognize that MOND was a more natural solution. We’d already accepted the craziness that was Inflation and dark matter; MOND craziness was a bridge too far.
I guess. I’m not quite that old; I was still an undergraduate at the time. I did hear about Inflation then, in glowing terms, but not a thing about MOND.