Dark Matter or Modified Gravity? A virtual panel discussion


This is a quick post to announce that on Monday, April 7 there will be a virtual panel discussion about dark matter and MOND involving Scott Dodelson and myself. It will be moderated by Orin Harris at Northeastern Illinois University starting at 3pm US Central time*. I asked Orin if I should advertise it more widely, and he said yes – apparently their Zoom setup has a capacity of a thousand attendees.

See their website for further details. If you wish to attend, you need to register in advance.


*That’s 4PM EDT to me, which is when I’m usually ready for a nap.

Things I don’t understand in modified dynamics (it’s cosmology)


I’ve been busy, and a bit exhausted, since the long series of posts on structure formation in the early universe. The thing I like about MOND is that it helps me understand – and successfully predict – the dynamics of galaxies. Specific galaxies that are real objects: one can observe this particular galaxy and predict that it should have this rotation speed or velocity dispersion. In contrast, LCDM simulations can only make statistical statements about populations of galaxy-like numerical abstractions; they can never be equated with real-universe objects. Worse, they obfuscate rather than illuminate. In MOND, the observed centripetal acceleration follows directly from that predicted by the observed distribution of stars and gas. In simulations, this fundamental observation is left unaddressed, and we are left grasping at straws trying to comprehend how the observed kinematics follow from an invisible, massive dark matter halo that starts with the NFW form but somehow gets redistributed just so by inadequately modeled feedback processes.

Simply put, I do not understand galaxy dynamics in terms of dark matter, and not for want of trying. There are plenty of people who claim to do so, but they appear to be fooling themselves. Nevertheless, what I don’t like about MOND is the same thing that they don’t like about MOND: I don’t understand the basics of cosmology with it.

Specifically, what I don’t understand about cosmology in modified dynamics is the expansion history and the geometry. That’s a lot, but not everything. The early universe is fine: the expanding universe went through an early hot phase that bequeathed us the relic radiation field and the abundances of the light elements through big bang nucleosynthesis. There’s nothing about MOND that contradicts that, and arguably MOND is in better agreement with BBN than LCDM, there being no tension with the lithium abundance – this tension was not present in the 1990s, and was only imposed by the need to fit the amplitude of the second peak in the CMB.

But we’re still missing some basics that are well understood in the standard cosmology, which is in good agreement with many (if not all) of the observations that led us to LCDM. So I understand the reluctance to admit that maybe we don’t know as much about the universe as we think we do. Indeed, it provokes strong emotional reactions.

Screenshot from Dr. Strangelove paraphrasing Major Kong (original quote at top).

So, what might the expansion history be in MOND? I don’t know. There are some obvious things to consider, but I don’t find them satisfactory.

The Age of the Universe

Before I address the expansion history, I want to highlight some observations that pertain to the age of the universe. These provide some context that informs my thinking on the subject, and why I think LCDM hits pretty close to the mark in some important respects, like the time-redshift relation. That’s not to say I think we need to slavishly obey every detail of the LCDM expansion history when constructing other theories, but it does get some things right that need to be respected in any such effort.

One big thing I think we should respect are constraints on the age of the universe. The universe can’t be younger than the objects in it. It could of course be older, but it doesn’t appear to be much older, as there are multiple, independent lines of evidence that all point to pretty much the same age.

Expansion Age: The first basic is that if the universe is expanding, it has a finite age. You can imagine running the expansion in reverse, looking back in time to when the universe was progressively smaller, until you reach an incomprehensibly dense initial phase. A very long time, to be sure, but not infinite.

To put an exact number on the age of the universe, we need to know its detailed expansion history. That is something LCDM provides that MOND does not pretend to do. Setting aside theory, a good ballpark age is the Hubble time, which is the inverse of the Hubble constant. This is how long it takes for a linearly expanding, “coasting” universe to get where it is today. For the measured H0 = 73 km/s/Mpc, the Hubble time is 13.4 Gyr. Keep that number in mind for later. This expansion age is the metric against which to compare the ages of measured objects, as discussed below.
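For the numerically inclined, the Hubble time is just a unit conversion away from H0; a minimal sketch:

```python
# Hubble time t_H = 1/H0: convert km/s/Mpc to 1/s, then express in Gyr.
KM_PER_MPC = 3.0857e19   # kilometers in one megaparsec
SEC_PER_GYR = 3.156e16   # seconds in one gigayear

def hubble_time_gyr(H0_kms_per_mpc):
    """Age of a coasting (linearly expanding) universe, in Gyr."""
    H0_per_sec = H0_kms_per_mpc / KM_PER_MPC
    return 1.0 / H0_per_sec / SEC_PER_GYR

print(hubble_time_gyr(73.0))  # ~13.4 Gyr
```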

Globular Clusters: The most famous of age constraints is provided by the ancient stars in globular clusters. One of the great accomplishments of 20th century astrophysics is a masterful understanding of the physics of stars as giant nuclear fusion reactors. This allows us to understand how stars of different mass and composition evolve. That, in turn, allows us to put an age on the stars in clusters. Globulars are the oldest of clusters, with a mean age of 13.5 Gyr (Valcin et al. 2021). Other estimates are similar, though I note that the age determinations depend on the distance scale, so keeping them rigorously separate from Hubble constant determinations has historically been a challenge. The covariance of age and distance renders the meaning of error bars rather suspect, but to give a flavor, the globular cluster M92 is estimated to have an age of 13.80±0.75 Gyr (Ying et al. 2023).

Though globular clusters are the most famous in this regard, there are other constraints on the age of the contents of the universe.

White dwarfs: White dwarfs are the remnants of dead stars that were never massive enough to have exploded as supernovae. The over/under line for that is about 8 solar masses; the oldest white dwarfs will be the remnants of the first stars that formed just below this threshold. Such stars don’t take long to evolve, around 100 Myr. That’s small compared to the age of the universe, so the first white dwarfs have just been cooling off ever since their progenitors burned out.

As the remnants of the incredibly hot cores of former stars, white dwarfs start off hot but cool quickly by radiating into space. The timescale to cool off can be crudely estimated from first principles just from the Stefan-Boltzmann law. As with so many situations in astrophysics, some detailed radiative transfer calculations are necessary to get the answer right in detail. But the ballpark of the back-of-the-envelope answer is not much different from the detailed calculation, giving some confidence in the procedure: we have a good idea of how long it takes white dwarfs to cool.
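To give a flavor of that back-of-the-envelope estimate: treat the white dwarf as a reservoir of ion thermal energy radiating as a blackbody. The fiducial mass, radius, and temperatures below are illustrative assumptions, not the detailed radiative transfer calculation:

```python
# Crude white dwarf cooling time: ion thermal energy / blackbody luminosity.
# All cgs; the fiducial values are illustrative assumptions.
import math

SIGMA_SB = 5.67e-5   # Stefan-Boltzmann constant [erg/cm^2/s/K^4]
K_B = 1.38e-16       # Boltzmann constant [erg/K]
M_P = 1.67e-24       # proton mass [g]
M_SUN, R_SUN = 1.99e33, 6.96e10

M = 0.6 * M_SUN           # typical white dwarf mass
R = 0.01 * R_SUN          # typical white dwarf radius
T_core, T_eff = 1e7, 5e3  # assumed interior and surface temperatures [K]

N_ions = M / (12 * M_P)                       # carbon nuclei store the heat
E_thermal = 1.5 * N_ions * K_B * T_core       # (3/2) N k T
L = 4 * math.pi * R**2 * SIGMA_SB * T_eff**4  # Stefan-Boltzmann luminosity

print(E_thermal / L / 3.156e16, "Gyr")  # ~18 Gyr: the right ballpark
```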

Since white dwarfs are not generating new energy but simply radiating into space, their luminosity fades over time as their surface temperature declines. This predicts that there will be a sharp drop in the numbers of white dwarfs corresponding to the oldest such objects: there simply hasn’t been enough time to cool further. The observational challenge then becomes finding the faint edge of the luminosity function for these intrinsically faint sources.

Despite the obvious challenges, people have done it, and after great effort, have found the expected edge. Translating that into an age, we get 12.5+1.4/-3.5 Gyr (Munn et al. 2017). This seems to hold up well now that we have Gaia data, which finds J1312-4728 to be the oldest known white dwarf at 12.41±0.22 Gyr (Torres et al. 2021). To get to the age of the universe, one does have to account for the time it takes to make a white dwarf in the first place, which is of order a Gyr or less, depending on the progenitor and when it formed in the early universe. This is pretty consistent with the ages of globular clusters, but comes from different physics: radiative cooling is the dominant effect rather than the hydrogen fusion budget of main sequence stars.

Radiochronometers: Some elements decay radioactively, so measuring their isotopic abundances provides a clock. Carbon-14 is a famous example: with a half-life of 5,730 years, its decay provides a great way to date the remains of prehistoric camp sites and bones. That’s great over some tens of thousands of years, but we need something with a half-life of order the age of the universe to constrain that. One such isotope is thorium-232, with a half-life of 14.05 Gyr.

Making this measurement requires that we first find stars that are both ancient and metal-poor but with detectable thorium and europium (the latter providing a stable reference). Then one has to obtain a high quality spectrum with which to do an abundance analysis. This is all hard work, but there are some examples known.

Sneden’s star, CS 22892-052, fits the bill. Long story short, the measured Th/Eu ratio gives an age of 12.8±3 Gyr (Sneden et al. 2003). A similar result of ~13 Gyr (Frebel & Kratz 2009) is obtained from 238U (this “stable” isotope of uranium has a half-life of 4.5 Gyr, as opposed to the kind that can be provoked into exploding, 235U, which has a half-life of 700 Myr). While the search for the first stars and the secrets they may reveal is ongoing, the ages for individual stars estimated from radioactive decay are consistent with the ages of the oldest globular clusters indicated by stellar evolution.
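The arithmetic behind such an age is simple decay bookkeeping: compare the observed Th/Eu ratio to the ratio the r-process is calculated to have produced. A sketch; the ratios below are illustrative stand-ins chosen to land near the quoted age, not the actual measured abundances:

```python
import math

def decay_age_gyr(ratio_initial, ratio_observed, half_life_gyr=14.05):
    """Age from the decay of thorium-232 relative to stable europium:
    t = (t_half / ln 2) * ln(initial ratio / observed ratio)."""
    return half_life_gyr / math.log(2) * math.log(ratio_initial / ratio_observed)

# Illustrative numbers: an initial Th/Eu ~1.9x the observed ratio
# corresponds to an age of ~12.8 Gyr, as quoted for CS 22892-052.
print(decay_age_gyr(ratio_initial=0.48, ratio_observed=0.255))
```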

Interstellar dust grains: The age of the solar system (4.56 Gyr) is well known from the analysis of isotopic abundances in meteorites. In addition to tracing the oldest material in the solar system, sometimes it is possible to identify dust grains of interstellar origin. One can do the same sort of analysis, and do the sum: how long did it take the star that made those elements to evolve, return them to the interstellar medium, get mixed in with the solar nebula, and lurk about in space until plunging to the ground as a meteorite that gets picked up by some scientifically-inclined human. This exercise has been done by Nittler et al. (2008), who estimate a total age of 13.7±1.3 Gyr.

Taken in sum, all these different age indicators point to a similar, consistent age between 13 and 14 billion years. It might be 12, but not lower, nor is there reason to think it would be much higher: 15 is right out. I say that flippantly because I couldn’t resist the Monty Python reference, but the point is serious: you could in principle have a much older universe, but then why are all the oldest things pretty much the same age? Why would the universe sit around doing nothing for billions of years then suddenly decide to make lots of stars all at once? The more obvious interpretation is that the age of the universe is indeed in the ballpark of 13.something Gyr.

Expansion history

The expansion history in the standard FLRW universe is governed by the Friedmann equation, which we can write* as

H²(z) = H0²[Ωm(1+z)³ + Ωk(1+z)² + ΩΛ]

where z is the redshift, H(z) is the Hubble parameter, H0 is its current value, and the various Ω are the mass-energy density of stuff relative to the critical density: the mass density Ωm, the geometry Ωk, and the cosmological constant ΩΛ. I’ve neglected radiation for clarity. One can make up other stuff X and add a term for it as ΩX, which will have an associated (1+z) term that depends on the equation of state of X. For our purposes, both normal matter and non-baryonic cold dark matter (CDM) share the same equation of state (cold meaning non-relativistic motions, meaning rest-mass density but negligible pressure), so both contribute to the mass density Ωm = Ωb + ΩCDM.

Note that since H(z=0) = H0, the various Ω’s have to sum to unity. Thus a cosmology is geometrically flat with the curvature term Ωk = 0 if Ωm + ΩΛ = 1. Vanilla LCDM has Ωm = 0.3 and ΩΛ = 0.7. As a community, we’ve become very sure of this, but that the Friedmann equation is sufficient to describe the expansion history of the universe is an assumption based on (1) General Relativity providing a complete description, and (2) the cosmological principle (homogeneity and isotropy) holding. These seem like incredibly reasonable assumptions, but let’s bear in mind that we only know directly about 5% of the sum of Ω’s, the baryons. ΩCDM = 0.25 and ΩΛ = 0.7 are effectively fudge factors we need to make things work out given the stated assumptions. LCDM is viable if and only if cold dark matter actually exists.

Gravity is an attractive force, so the mass term Ωm acts to retard the expansion. Early on, we expected this to be the dominant term due to the (1+z)3 dependence. In the long-presumed+ absence of a cosmological constant, cosmology was the search for two numbers: once H0 and Ωm are specified, the entire expansion history is known. Such a universe can only decelerate, so only the region below the straight line in the graph below is accessible; an expansion history like the red one representing LCDM should be impossible. That lots of different data seemed to want this is what led us kicking and screaming to rehabilitate the cosmological constant, which acts as a form of anti-gravity to accelerate an expansion that ought to be decelerating.

The expansion factor maps how the universe has grown over time; it corresponds to 1/(1+z) in redshift so that z → ∞ as t → 0. The “coasting” limit of an empty universe (H0 = 73, Ωm = ΩΛ = 0) that expands linearly is shown as the straight line. The red line is the expansion history of vanilla LCDM (H0 = 70, Ωm = 0.3, ΩΛ = 0.7).

The over/under between acceleration/deceleration of the cosmic expansion rate is the coasting universe. This is the conceptually useful limit of a completely empty universe with Ωm = ΩΛ = 0. It expands at a steady rate that neither accelerates nor decelerates. The Hubble time is exactly equal to the age of such a universe, i.e., 13.4 Gyr for H0 = 73.

LCDM has a more complicated expansion history. The mass density dominates early on, so there is an early phase of deceleration – the red curve bends to the right. At late times, the cosmological constant begins to dominate, reversing the deceleration and transforming it into an acceleration. The inflection point when it switches from decelerating to accelerating is not too far in the past, which is a curious coincidence given that the entire future of such a universe will be spent accelerating towards the exponential expansion of the de Sitter limit. Why do we live anywhen close to this special time?

Lots of ink has been spilled on this subject, and the answer seems to boil down to the anthropic principle. I find this lame and won’t entertain it further. I do, however, want to point out a related strange coincidence: the current age of vanilla LCDM (13.5 Gyr) is the same as that of a coasting universe with the locally measured Hubble constant (13.4 Gyr). Why should these very different models be so close in age? LCDM decelerates, then accelerates; there’s only one moment in the expansion history of LCDM when the age is equal to the Hubble time, and we happen to be living just then.

This coincidence problem holds for any viable set of LCDM parameters, as they all have nearly the same age. Planck LCDM has an age of 13.7 Gyr, still basically the same as the Hubble time for the locally measured Hubble constant. The lower Planck Hubble value is balanced by a larger amount of early-time deceleration. The universe reaches its current point after 13.something Gyr in all of these models. That’s in good agreement with the ages of the oldest observed stars, which is encouraging, but it does nothing to help us resolve the Hubble tension, much less constrain alternative cosmologies.
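These ages all follow from a one-line integral of the Friedmann equation, t0 = ∫ dz/[(1+z)H(z)]. A minimal numerical sketch with the parameters quoted above:

```python
import numpy as np
from scipy.integrate import quad

def age_gyr(H0, Om, OL):
    """Age of an FLRW universe: t0 = integral of dz / [(1+z) H(z)],
    with curvature Ok = 1 - Om - OL and radiation neglected."""
    Ok = 1.0 - Om - OL
    E = lambda z: np.sqrt(Om*(1+z)**3 + Ok*(1+z)**2 + OL)
    integral, _ = quad(lambda z: 1.0 / ((1+z) * E(z)), 0, np.inf)
    return (977.8 / H0) * integral   # 977.8/H0 is 1/H0 in Gyr for km/s/Mpc

print(age_gyr(73, 0.0, 0.0))       # coasting: 13.4 Gyr
print(age_gyr(70, 0.3, 0.7))       # vanilla LCDM: 13.5 Gyr
print(age_gyr(67.4, 0.315, 0.685)) # Planck-like LCDM: 13.7-13.8 Gyr
```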

Cosmic expansion in MOND

There is no equivalent to the Friedmann equation in MOND. This is not satisfactory. As an extension of Newtonian theory, MOND doesn’t claim to encompass cosmic phenomena$ – hence the search for a deeper underlying theory. Lacking this, what can we try?

Felten (1984) tried to derive an equivalent to the Friedmann equation using the same trick that can be used with Newtonian theory to recover the expansion dynamics in the absence of a cosmological constant. This did not work. The result was unsatisfactory& for application to the whole universe because the presence of a0 in the equations makes the result scale-dependent. So how big the universe is matters in a way that it does not in the standard cosmology; there’s no way to generalize it to describe the whole enchilada.

In retrospect, what Felten had really obtained was a solution for the evolution of a top-hat over-density: the dynamics of a spherical region embedded in an expanding universe. This result is the basis for the successful prediction of early structure formation in MOND. But once again it only tells us about the dynamics of an object within the universe, not the universe itself.

In the absence of a complete theory, one makes an ansatz to proceed. If there is a grander theory that encompasses both General Relativity and MOND, then it must approach both in the appropriate limit, so an obvious ansatz to make is that the entire universe obeys the conventional Friedmann equation while the dynamics of smaller regions in the low acceleration regime obey MOND. Both Bob Sanders and I independently adopted this approach, and explicitly showed that it was consistent with the constraints that were known at the time. The first obvious guess for the mass density of such a cosmology is Ωm = Ωb = 0.04. (This was the high end of BBN estimates at the time, so back then we also considered lower values.) The expansion history of this low density, baryon-only universe is shown as the blue line below:

As above, but with the addition of a low density, baryon-dominated, no-CDM universe (H0 = 73, Ωm = Ωb = 0.04, ΩΛ = 0; blue line).

As before, there is not much to choose between these models in terms of age. The small but non-zero mass density does cause some early deceleration before the model approaches the coasting limit, so the current age is a bit lower: 12.6 Gyr. This is on the small side, but not problematically so, or even particularly concerning given the history of the subject. (I’m old enough to remember when we were pretty sure that globular clusters were 18 Gyr old.)

The time-redshift relation for the no-CDM, baryon-only universe is somewhat different from that of LCDM. If we adopt it, then we find that MOND-driven structure forms at somewhat higher redshift than with the LCDM time-redshift relation. The benchmark time of 500 Myr for L* galaxy formation is reached at z = 15 rather than z = 9.5 as in LCDM. This isn’t a huge difference, but it does mean that an L* galaxy could in principle appear even earlier than so far seen. I’ve stuck with LCDM as the more conservative estimate of the time-redshift relation, but the plain fact is we don’t really know what the universe is doing at those early times, or if the ansatz we’ve made holds well enough to do this. Surely it must fail at some point, and it seems likely that we’re past that point.
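The shift from z = 9.5 to z = 15 follows from inverting the time-redshift relation of each model; a self-contained sketch, using the same Friedmann integral as before:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def age_at_z_gyr(z, H0, Om, OL):
    """Cosmic time elapsed by redshift z, in Gyr (radiation neglected)."""
    Ok = 1.0 - Om - OL
    E = lambda zz: np.sqrt(Om*(1+zz)**3 + Ok*(1+zz)**2 + OL)
    integral, _ = quad(lambda zz: 1.0 / ((1+zz) * E(zz)), z, np.inf)
    return (977.8 / H0) * integral

# Redshift at which each universe is 0.5 Gyr old:
for label, pars in [("LCDM  ", (70, 0.3, 0.7)), ("no-CDM", (73, 0.04, 0.0))]:
    z500 = brentq(lambda z: age_at_z_gyr(z, *pars) - 0.5, 1.0, 50.0)
    print(label, round(z500, 1))  # ~9.5 and ~15, respectively
```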

There is a bigger problem with the no-CDM model above. Even if it is close to the right expansion history, it has a very large negative curvature. The geometry is nowhere close to the flat Robertson-Walker metric indicated by the angular diameter distance to the surface of last scattering (the CMB).

Geometry

Much of cosmology is obsessed with geometry, so I will not attempt to do the subject justice. Each set of FLRW parameters has a specific geometry that comes hand in hand with its expansion history. The most sensitive probe we have of the geometry is the CMB. The a priori prediction of LCDM was that its flat geometry required the first acoustic peak to have a maximum near one degree on the sky. That’s exactly what we observe.

Fig. 45 from Famaey & McGaugh (2012): The acoustic power spectrum of the cosmic microwave background as observed by WMAP [229] together with the a priori predictions of ΛCDM (red line) and no-CDM (blue line) as they existed in 1999 [265] prior to observation of the acoustic peaks. ΛCDM correctly predicted the position of the first peak (the geometry is very nearly flat) but over-predicted the amplitude of both the second and third peak. The most favorable a priori case is shown; other plausible ΛCDM parameters [468] predicted an even larger second peak. The most important parameter adjustment necessary to obtain an a posteriori fit is an increase in the baryon density Ωb, above what had previously been expected from BBN. In contrast, the no-CDM model ansatz made as a proxy for MOND successfully predicted the correct amplitude ratio of the first to second peak with no parameter adjustment [268, 269]. The no-CDM model was subsequently shown to under-predict the amplitude of the third peak [442], so no model can explain these data without post-hoc adjustment.

In contrast, no-CDM made the correct prediction for the first-to-second peak amplitude ratio, but it is entirely ambivalent about the geometry. FLRW cosmology and MOND dynamics care about incommensurate things in the CMB data. That said, the naive prediction of the baryon-only model outlined above is that the first peak should occur around where the third peak is observed. That is obviously wrong.

Since the geometry is not a fundamental prediction of MOND, the position of the first peak is easily fit by invoking the same fudge factor used to fit it conventionally: the cosmological constant. We need a larger ΩΛ = 0.96, but so what? This parameter merely encodes our ignorance: we make no pretense to understand it, let alone vesting deep meaning in it. It is one of the things that a deeper theory must explain, and can be considered as a clue in its development.

So instead of a baryon-only universe, our FLRW proxy becomes a Lambda-baryon universe. That fits the geometry and, for an optical depth to the surface of last scattering of τ = 0.17, matches the amplitude of the CMB power spectrum and correctly predicts the cosmic dawn signal that EDGES claimed to detect. Sounds good, right? Well, not entirely. It doesn’t fit the CMB data at L > 600, but I only expected to get so far with the no-CDM ansatz, so it doesn’t bother me that you need a better underlying theory to fit the entire CMB. Worse, to my mind, is that the Lambda-baryon proxy universe is much, much older than everything in it: 22 Gyr instead of 13.something.

As above, but now with the addition of a low density, Lambda-dominated universe (H0 = 73, Ωm = Ωb = 0.04, ΩΛ = 0.96; dashed line).

This just don’t seem right. Or even close to right. Like, not even pointing in a direction that might lead to something that had a hope of being right.

Moreover, we have a weird tension between the baryon-only proxy and the Lambda-baryon proxy cosmology. The baryon-only proxy has a plausible expansion history but an unacceptable geometry. The Lambda-baryon proxy has a plausible geometry but an implausible expansion history. Technically, yes, it is OK for the universe to be much older than all of its contents, but it doesn’t make much sense. Why would the universe do nothing for 8 or 9 Gyr, then burst into a sudden frenzy of activity? It’s as if Genesis read “for the first 6 Gyr, God was a complete slacker and did nothing. In the seventh Gyr, he tried to pull an all-nighter only to discover it took a long time to build cosmic structure. Then He said ‘Screw it’ and fudged Creation with MOND.”

In the beginning the Universe was created.
This has made a lot of people very angry and been widely regarded as a bad move.

Douglas Adams, The Restaurant at the End of the Universe

So we can have a plausible geometry or we can have a plausible expansion history with a proxy FLRW model, but not both. That’s unpleasant, but not tragic: we know this approach has to fail somehow. But I had hoped for FLRW to be a more coherent first approximation to the underlying theory, whatever it may be. If there is such a theory, then both General Relativity and MOND are its limits in their respective regimes. As such, FLRW ought to be a good approximation to the underlying entity up to some point. That we have to invoke both non-baryonic dark matter and a cosmological constant is a hint that we’ve crossed that point. But I would have hoped that we crossed it in a more coherent fashion. Instead, we seem to get a little of this for the expansion history and a little of that for the geometry.

I really don’t know what the solution is here, or even if there is one. At least I’m not fooling myself into presuming it must work out.


*There are other ways to write the Friedmann equation, but this is a useful form here. For the mathematically keen, the Hubble parameter is the time derivative of the expansion factor normalized by the expansion factor, which in terms of redshift is

H(z) = −(dz/dt)/(1+z).

This quantity evolves, leading us to expect evolution in Milgrom’s constant if we associate it with the numerical coincidence

2π a0 = cH0

If the Hubble parameter evolves, as it appears to do, it would seem to follow that so should a0(z) ~ H(z) – otherwise the coincidence is just that: a coincidence that applies only now. There is, at present, no persuasive evidence that a0 evolves with redshift.

A similar order-of-magnitude association can be made with the cosmological constant,

2π a0 = c²√Λ

so conceivably the MOND acceleration scale appears as the result of vacuum effects. It is a matter of judgement whether these numerical coincidences are mere coincidences or profound clues towards a deeper theory. That the proportionality constant is very nearly 2π is certainly intriguing, but the constancy of any of these parameters (including Newton’s G) depends on how they emerge from the deeper theory.
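Both coincidences are easy to check numerically; a quick sketch, taking ΩΛ = 0.7 to set Λ (both come out within a factor of ~2 of Milgrom’s a0 ≈ 1.2 × 10⁻⁸ cm/s²):

```python
import math

C = 2.998e10                  # speed of light [cm/s]
H0 = 73 * 1e5 / 3.0857e24     # Hubble constant in 1/s
LAM = 3 * 0.7 * H0**2 / C**2  # cosmological constant [1/cm^2] for OL = 0.7

print(C * H0 / (2 * math.pi))                 # ~1.1e-8 cm/s^2
print(C**2 * math.sqrt(LAM) / (2 * math.pi))  # ~1.6e-8 cm/s^2
print(1.2e-8)                                 # Milgrom's a0, for comparison
```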


+In January 2019, I was attending a workshop at Princeton when I had a chance encounter with Jim Peebles. He was not attending the workshop, but happened to be walking across campus at the same time I was. We got to talking, and he affirmed my recollection of just how incredibly unpopular the cosmological constant used to be. Unprompted, he went on to make the analogy of how similar that seemed to how unpopular MOND is now.

Peebles was awarded a long-overdue Nobel Prize later that year.


$This is one of the things that makes it tricky to compare LCDM and MOND. MOND is a theory of dynamics in the limit of low acceleration. It makes no pretense to be a cosmological theory. LCDM starts as a cosmological theory, but it also makes predictions about the dynamics of systems within it (or at least the dark matter halos in which visible galaxies are presumed to form). So if one starts by putting on a cosmology hat, there is nothing to talk about: LCDM is the only game in town. But from the perspective of dynamics, it’s the other way around, with LCDM repeatedly failing to satisfactorily explain, much less anticipate, phenomena that MOND predicted correctly in advance.


&An intriguing thing about Felten’s MOND universe is that it eventually recollapses irrespective of the mass density. There is no critical value of Ωm, hence no coincidence problem. MOND is strong enough to eventually reverse the expansion of the universe; it just takes a very long time to do so, depending on the density.

I’m surprised this aspect of the issue was overlooked. The coincidence problem (then mostly called the flatness problem) obsessed people at the time, so much so that its solution by Cosmic Inflation led to its widespread acceptance. That only works if Ωm = 1; LCDM makes the coincidence worse. I guess the timing was off, as Inflation had already captured the community’s imagination by that time, likely making it hard to recognize that MOND was a more natural solution. We’d already accepted the craziness that was Inflation and dark matter; MOND craziness was a bridge too far.

I guess. I’m not quite that old; I was still an undergraduate at the time. I did hear about Inflation then, in glowing terms, but not a thing about MOND.

Kinematics suggest large masses for high redshift galaxies


This is what I hope will be the final installment in a series of posts describing the results published in McGaugh et al. (2024). I started by discussing the timescale for galaxy formation in LCDM and MOND which leads to different and distinct predictions. I then discussed the observations that constrain the growth of stellar mass over cosmic time and the related observation of stellar populations that are mature for the age of the universe. I then put on an LCDM hat to try to figure out ways to wriggle out of the obvious conclusion that galaxies grew too massive too fast. Exploring all the arguments that will be made is the hardest part, not because they are difficult to anticipate, but because there are so many* options to consider. This leads to many pages of minutiae that no one ever seems to read+, so one of the options I’ve discussed (e.g., super-efficient star formation) will likely emerge as the standard picture even if it comes pre-debunked.

The emphasis so far has been on the evolution of the stellar masses of galaxies because that is observationally most accessible. That gives us the opportunity to wriggle, because what we really want to measure to test LCDM is the growth of [dark] mass. This is well-predicted but invisible, so we can always play games to relate light to mass.

Mass assembly in LCDM from the IllustrisTNG50 simulation. The dark matter mass assembles hierarchically in the merger tree depicted at left; the size of the circles illustrates the dark matter halo mass. The corresponding stellar mass of the largest progenitor is shown at right as the red band. This does not keep pace with the apparent assembly of stellar mass (data points), but what is the underlying mass really doing?

Galaxy Kinematics

What we really want to know is the underlying mass. It is reasonable to expect that the light traces this mass, but is there another way to assess it? Yes: kinematics. The orbital speeds of objects in galaxies trace the total potential, including the dark matter. So, how massive were early galaxies? How does that evolve with redshift?

The rotation curve of NGC 6946 traced by stars at small radii and gas farther out. This is a typical flat rotation curve (data points) that exceeds what can be explained by the observed baryonic mass (red line deduced from the stars and gas pictured at right), leading to the inference of dark matter.

The rotation curve for NGC 6946 shows a number of well-established characteristics for nearby galaxies, including the dominance of baryons at small radii in high surface brightness galaxies and the famous flat outer portion of the rotation curve. Even when stars contribute as much mass as allowed by the inner rotation curve (“maximum disk“), there is a need for something extra further out (i.e., dark matter or MOND). In the case of dark matter, the amplitude of flat rotation is typically interpreted as being indicative& of halo mass.

So far, the rotation curves of high redshift galaxies look very much like those of low redshift galaxies. There are some fast rotators at high redshift as well. Here is an example observed by Neeleman et al. (2020), who measure a flat rotation speed of 272 km/s for DLA0817g at z = 4.26. That’s more massive than either the Milky Way (~200 km/s) or Andromeda (~230 km/s), if not quite as big as local heavyweight champion UGC 2885 (300 km/s). DLA0817g looks to be a disk galaxy that formed early and is sedately rotating only 1.4 Gyr after the Big Bang. It is already massive at this time: not at all the little nuggets we expect from the CDM merger tree above.

Fig. 1 from Neeleman et al. (2020): the velocity field (left) and position-velocity diagram (right) of DLA0817g. The velocity field looks like that of a rotating disk, while the raw position-velocity diagram shows motions of ~200 km/s on either side of the center. When corrected for inclination, the flat rotation speed is 272 km/s, corresponding to a massive galaxy near the top of the Tully-Fisher relation.

This is anecdotal, of course, but there are a good number of similar cases that are already known. For example, the kinematics of ALESS 073.1 at z ≈ 5 indicate the presence of a massive stellar bulge as well as a rapidly rotating disk (Lelli et al. 2021). A similar case has been observed at z ≈ 6 (Tripodi et al. 2023). These kinematic observations indicate the presence of mature, massive disk galaxies well before they were expected to be in place (Pillepich et al. 2019; Wardlow 2021). The high rotation speeds observed in early disk galaxies sometimes exceed 250 km/s (Neeleman et al. 2020) or even 300 km/s (Nestor Shachar et al. 2023; Wang et al. 2024), comparable to the most massive local spirals (Noordermeer et al. 2007; Di Teodoro et al. 2021, 2023). That such rapidly rotating galaxies exist at high redshift indicates that there is a lot of mass present, not just light. We can’t just tweak the mass-to-light ratio of the stars to explain the photometry and also explain the kinematics.
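To put a number on “a lot of mass”: in MOND, a flat rotation speed translates directly into a baryonic mass via V⁴ = G·Mb·a0 (the Tully-Fisher normalization discussed below). A quick sketch applied to DLA0817g’s 272 km/s:

```python
G = 6.674e-8    # gravitational constant [cgs]
A0 = 1.2e-8     # Milgrom's acceleration constant [cm/s^2]
M_SUN = 1.99e33

def btf_mass_msun(v_flat_kms):
    """Baryonic mass implied by a flat rotation speed: M = V^4 / (G a0)."""
    v = v_flat_kms * 1e5  # convert km/s to cm/s
    return v**4 / (G * A0) / M_SUN

print(f"{btf_mass_msun(272):.1e}")  # ~3e11 Msun: a very massive disk
```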

In a seminal galaxy formation paper, Mo, Mao, & White (1998) predicted that “present-day disks were assembled recently (at z ≤ 1).” Today, we see that spiral galaxies are ubiquitous in JWST images up to z ∼ 6 (Ferreira et al. 2022, 2023; Kuhn et al. 2024). The early appearance of massive, dynamically cold (Di Teodoro et al. 2016; Lelli et al. 2018, 2023; Rizzo et al. 2023) disks in the first few billion years after the Big Bang contradicts the natural prediction of ΛCDM. Early disks are expected to be small and dynamically hot (Dekel & Burkert 2014; Zolotov et al. 2015; Krumholz et al. 2018; Pillepich et al. 2019), but they are observed to be massive and dynamically cold. (Hot or cold in this context means a high or low amplitude of the velocity dispersion relative to the rotation speed; the modern Milky Way is cold with σ ~ 20 km/s and Vc ~ 200 km/s.) Understanding the stability and longevity of dynamically cold spiral disks is foundational to the problem.

Kinematic Scaling Relations

Beyond anecdotal cases, we can check on kinematic scaling relations like Tully–Fisher. These are expected to emerge late and evolve significantly with redshift in LCDM (e.g., Glowacki et al. 2021). In MOND, the normalization of the baryonic Tully–Fisher relation is set by a0, so is immutable for all time if a0 is constant. Let’s see what the data say:

Figure 9 from McGaugh et al. (2024): The baryonic Tully–Fisher (left) and dark matter fraction–surface brightness (right) relations. Local galaxy data (circles) are from Lelli et al. (2019; left) and Lelli et al. (2016; right). Higher-redshift data (squares) are from Nestor Shachar et al. (2023) in bins with equal numbers of galaxies color coded by redshift: 0.6 < z < 1.22 (blue), 1.22 < z < 2.14 (green), and 2.14 < z < 2.53 (red). Open squares with error bars illustrate the typical uncertainties. The relations known at low redshift also appear at higher redshift with no clear indication of evolution over a lookback time up to 11 Gyr.

Not much to see: the data from Nestor Shachar et al. (2023) show no clear indication of evolution. The same can be said for the dark matter fraction-surface brightness relation. (Glad to see that being plotted after I pointed it out.) The local relations are coincident with those at higher redshift for both relations within any sober assessment of the uncertainties – exactly what we measure and how matters at this level, and I’m not going to attempt to disentangle all that here. Neither am I about to attempt to assess the consistency (or lack thereof) with either LCDM or MOND; the data simply aren’t good enough for that yet. It is also not clear to me that everyone agrees on what LCDM predicts.

What I can do is check empirically how much evolution there is within the 100-galaxy data set of Nestor Shachar et al. (2023). To do that, I fit a line to their data (the left panel above) and measure the residuals: for a given rotation speed, how far is each galaxy from the expected mass? To compare this with the stellar masses discussed previously, I normalize those residuals to the same M* = 9 × 10¹⁰ M☉. If there is no evolution, the data will scatter around a constant value as a function of redshift:

This figure reproduces the stellar mass-redshift data for L* galaxies (black points) and the monolithic (purple line) and LCDM (red and green lines) models discussed previously. The blue squares illustrate deviations of the data of Nestor Shachar et al. (2023) from the baryonic Tully-Fisher relation (dashed line, normalized to the same mass as the monolithic model). There is no indication of evolution in the baryonic Tully-Fisher relation, which was apparently established within the first few billion years after the Big Bang (z = 2.5 corresponds to a cosmic age of about 2.6 Gyr). The data are consistent with a monolithic galaxy formation model in which all the mass had been assembled into a single object early on.

The data scatter around a constant value as a function of redshift: there is no perceptible evolution.
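The bookkeeping behind that check is straightforward; a schematic sketch (the arrays are placeholders, not the actual Nestor Shachar et al. measurements):

```python
import numpy as np

# Placeholder arrays standing in for the published (z, Vf, Mb) measurements.
z    = np.array([0.8, 1.1, 1.5, 2.0, 2.4])
logV = np.array([2.30, 2.35, 2.28, 2.40, 2.33])  # log10 rotation speed [km/s]
logM = np.array([10.8, 11.0, 10.7, 11.2, 10.9])  # log10 baryonic mass [Msun]

# Fit a line logM = a*logV + b, then take residuals at fixed velocity.
a, b = np.polyfit(logV, logM, 1)
residual = logM - (a * logV + b)

# Renormalize the residuals to the fiducial M* = 9e10 Msun used in the text,
# so they can be plotted on the same mass-versus-redshift axes.
logM_equiv = np.log10(9e10) + residual
print(np.column_stack([z, np.round(logM_equiv, 2)]))
# No trend with z would mean no evolution in the Tully-Fisher normalization.
```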

The kinematic data for rotating galaxies tell much the same story as the photometric data for galaxies in clusters. They are both consistent with a monolithic model that gathered together the bulk of the baryonic mass early on, and evolved as an island universe for most of the history of the cosmos. There is no hint of the decline in mass with redshift predicted by the LCDM simulations. Moreover, the kinematics trace mass, not just light. So while I am careful to consider the options for LCDM, I don’t know how we’re gonna get out of this one.

Empirically, it is an important observation that there is no apparent evolution in the baryonic Tully-Fisher relation out to z ~ 2.5. That’s a lookback time of ~11 Gyr, so most of cosmic history. That means that whatever physics sets the relation did so early. If the physics is MOND, this absence of evolution implies that a0 is constant. There is some wiggle room in that given all the uncertainties, but this already excludes the picture in which a0 evolves with the expansion rate through the coincidence a0 ~ cH0. That much evolution would be readily perceptible if H(z) evolves as it appears to do. In contrast, the coincidence a0 ~ c²√Λ remains interesting since the cosmological constant is constant. Perhaps this is just a coincidence, or perhaps it is a hint that the anomalous acceleration of the expansion of the universe is somehow connected with the anomalous acceleration in galaxy dynamics.

Though I see no clear evidence for evolution in Tully-Fisher to date, it remains early days. For example, a very recent paper by Amvrosiadis et al. (2025) does show a hint of evolution in the sense of an offset in the normalization of the baryonic Tully-Fisher relation. This isn’t very significant, being different by less than 2σ, and again we find ourselves in a situation where we need to take a hard look at all the assumptions and population modeling and velocity measurements just to see if we’re talking about the same quantities before we even begin to assess consistency or the lack thereof. Nevertheless, it is an intriguing result. There is also another interesting anecdotal case: one of their highest redshift objects, ALESS 071.1 at z = 3.7, is also the most massive in the sample, with an estimated stellar mass of 2 × 10¹² M☉. That is a crazy large number, comparable to or maybe larger than the entire dark matter halo of the Milky Way. It falls off the top of any of the graphs of stellar mass we discussed before. If correct, this one galaxy is an enormous problem for LCDM regardless of any other consideration. It is of course possible that this case will turn out to be wrong for some reason, so it remains early days for kinematics at high redshift.

Cluster Kinematics

It is even earlier days for cluster kinematics. First we have to find them, which was the focus of Jay Franck’s thesis. Once identified, we have to estimate their masses with the available data, which may or may not be up to the task. And of course we have to figure out what theory predicts.

LCDM makes a clear prediction for the growth of cluster mass. This works out OK at low redshift, in the sense that the cluster X-ray mass function is in good agreement with LCDM. Where the theory struggles is in the proclivity for the most massive clusters to appear sooner in cosmic history than anticipated. Like individual galaxies, they appear too big too soon. This trend persisted in Jay’s analysis, which identified candidate protoclusters at higher redshifts than expected. It also measured velocity dispersions that were consistently higher than found in simulations. That is, when Jay applied the search algorithm he used on the data to mock data from the Millennium simulation, the structures identified there had velocity dispersions on average a factor of two lower than seen in the data. That’s a big difference in terms of mass.

Figure 11 from McGaugh et al. (2024): Measured velocity dispersions of protocluster candidates (Franck & McGaugh 2016a, 2016b) as a function of redshift. Point size grows with the assessed probability that the identified overdensities correspond to a real structure: all objects are shown as small points, candidates with P > 50% are shown as light blue midsize points, and the large dark blue points meet this criterion and additionally have at least 10 spectroscopically confirmed members. The MOND mass for an equilibrium system in the low-acceleration regime is noted at right; these are comparable to cluster masses at low redshift.

At this juncture, there is no way to know if the protocluster candidates Jay identified are or will become bound structures. We made some probability estimates that can be summed up as “some are probably real, but some probably are not.” The relative probability is illustrated by the size of the points in the plot above; the big blue points are the most likely to be real clusters, having at least ten galaxies at the same place on the sky at the same redshift, all with spectroscopically measured redshifts. Here the spectra are critical; photometric redshifts typically are not accurate enough to indicate that galaxies that happen to be nearby to each other on the sky are also that close in redshift space.

The net upshot is that there are at least some good candidate clusters at high redshift, and these have higher velocity dispersions than expected in LCDM. I did the exercise of working out what the equivalent mass in MOND would be, and it is about the same as what we find for clusters at low redshift. This estimate assumes dynamical equilibrium, which is very far from guaranteed. But the time at which these structures appear is consistent with the timescale for cluster formation in MOND (a couple Gyr; z ~ 3), so maybe? Certainly there shouldn’t be lots of massive clusters in LCDM at z ~ 3.
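For reference, the “MOND mass for an equilibrium system” noted in the figure comes from the deep-MOND relation for an isolated, isothermal system with isotropic orbits, M = (81/4)·σ⁴/(G·a0) for the line-of-sight dispersion. A sketch, with equilibrium assumed (which, as noted, is far from guaranteed):

```python
G = 6.674e-8   # gravitational constant [cgs]
A0 = 1.2e-8    # Milgrom's acceleration constant [cm/s^2]
M_SUN = 1.99e33

def mond_equilibrium_mass_msun(sigma_los_kms):
    """Deep-MOND mass of an isolated isothermal system with isotropic orbits:
    M = (81/4) * sigma_los^4 / (G * a0). Dynamical equilibrium is assumed."""
    s = sigma_los_kms * 1e5  # convert km/s to cm/s
    return (81.0 / 4.0) * s**4 / (G * A0) / M_SUN

for sigma in (500, 700, 1000):  # km/s
    print(sigma, f"{mond_equilibrium_mass_msun(sigma):.1e}")
# ~8e13 to ~1e15 Msun: comparable to cluster masses at low redshift
```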

Kinematic Takeaways

While it remains early days for kinematic observations at high redshift, so far these data do nothing to contradict the obvious interpretation of the photometric data. There are mature, dynamically cold, fast rotating spiral galaxies in the early universe that were predicted not to be there by LCDM. Moreover, kinematics traces mass, not just light, so all the wriggling we might try to explain the latter doesn’t help with the former. The most obvious interpretation of the kinematic data to date is the same as that for the photometric data: galaxies formed early and grew massive quickly, as predicted a priori by MOND.


*The papers I write that cover both theories always seem to wind up lopsided in favor of LCDM in terms of the bulk of their content. That happens because it takes many pages to discuss all the ins and outs. In contrast, MOND just gets it right the first time, so that section is short: there’s not much more to say than “Yep, that’s what it predicted.”

+I’ve not yet heard any criticisms of our paper directly. The criticisms that I’ve heard second or third hand so far almost all fall in the category of things we explicitly discussed. That’s a pretty clear tell that the person leveling the critique hasn’t bothered to read it. I don’t expect everyone to agree with our take on this or that, but a competent critic would at least evince awareness that we had addressed their concern, even if not to their satisfaction. We rarely seem to reach that level: it is much easier to libel and slander than engage with the issues.

The one complaint I’ve heard so far that doesn’t fall in the category of things-we-already-discussed is that we didn’t do hydrodynamic simulations of star formation in molecular gas. That is a red herring. To predict the growth of stellar mass, all we need is a prescription for assembling mass and converting baryons into stars; this is essentially a bookkeeping exercise that can be done analytically. If this were a serious concern, it should be noted that most cosmological hydro-simulations also fail to meet this standard: they don’t resolve star formation, so they typically adopt some semi-empirical (i.e., data-informed) bookkeeping prescription for this “subgrid physics.”

Though I have not myself attempted to numerically simulate galaxy formation in MOND, Sanders (2008) did. More recently, Eappen et al. (2022) have done so, including molecular gas and feedback$ and everything. They find a star formation history compatible with the analytic models we discuss in our paper.

$Related detail: Eappen et al. find that different feedback schemes make little difference to the end result. The deus ex machina invoked to solve all problems in LCDM is largely irrelevant in MOND. There’s a good physical reason for this: gravity in MOND is sourced by what you see; how it came to have its observed distribution is irrelevant. If 90% of the baryons are swept entirely out of the galaxy by some intense galactic wind, then they’re gone BYE BYE and don’t matter any more. In contrast, that is one of the scenarios sometimes invoked to form cores in dark matter halos that are initially cuspy: the departure of all those baryons perturbs the orbits of the dark matter particles and rearranges the structure of the halo. While that might work to alter halo structure, how it results in MOND-like phenomenology has never been satisfactorily explained. Mostly that is not seen as even necessary; converting cusp to core is close enough!


&Though we typically associate the observed outer velocity with halo mass, an important caveat is that the radius also matters: M ~ RV², and most data for high redshift galaxies do not extend very far out in radius. Nevertheless, it takes a lot of mass to make rotation speeds of order 200 km/s within a few kpc, so it hardly matters if this is or is not representative of the dark matter halo: if it is all stars, then the kinematics directly corroborate the interpretation of the photometric data that the stellar mass is large. If it is representative of the dark matter halo, then we expect the halo radius to scale with the halo velocity (R200 ~ V200) so M200 ~ V200³ and again it appears that there is too much mass in place too early.

On the timescale for galaxy formation


I’ve been wanting to expand on the previous post ever since I wrote it, which is over a month ago now. It has been a busy end to the semester. Plus, there’s a lot to say – nothing that hasn’t been said before, somewhere, somehow, yet still a lot to cobble together into a coherent story – if that’s even possible. This will be a long post, and there will be more to follow narrating the story of our big paper in the ApJ. My sole ambition here is to express the predictions of galaxy formation theory in LCDM and MOND in the broadest strokes.

A theory is only as good as its prior. We can always fudge things after the fact, so what matters most is what we predict in advance. What do we expect for the timescale of galaxy formation? To tell you what I’m going to tell you, it takes a long time to build a massive galaxy in LCDM, but it happens much faster in MOND.

Basic Considerations

What does it take to make a galaxy? A typical giant elliptical galaxy has a stellar mass of 9 × 10¹⁰ M☉. That’s a bit more than our own Milky Way, which has a stellar mass of 5 or 6 × 10¹⁰ M☉ (depending who you ask) with another 10¹⁰ M☉ or so in gas. So, in classic astronomy/cosmology style, let’s round off and say a big galaxy is about 10¹¹ M☉. That’s a hundred billion stars, give or take.

An elliptical galaxy (NGC 3379, left) and two spiral galaxies (NGC 628 and NGC 891, right).

How much of the universe does it take to make one big galaxy? The critical density of the universe is the over/under point for whether an expanding universe expands forever, or has enough self-gravity to halt the expansion and ultimately recollapse. Numerically, this quantity is ρcrit = 3H0²/(8πG), which for H0 = 73 km/s/Mpc works out to 10⁻²⁹ g/cm³ or 1.5 × 10⁻⁷ M☉/pc³. This is a very small number, but provides the benchmark against which we measure densities in cosmology. The density of any substance X is ΩX = ρX/ρcrit. The stars and gas in galaxies are made of baryons, and we know the baryon density pretty well from Big Bang Nucleosynthesis: Ωb = 0.04. That means the average density of normal matter is very low, only about 4 × 10⁻³¹ g/cm³. That’s less than one hydrogen atom per cubic meter – most of space is an excellent vacuum!

This being the case, we need to scoop up a large volume to make a big galaxy. Going through the math, to gather up enough mass to make a 10¹¹ M☉ galaxy, we need a sphere with a radius of 1.6 Mpc. That’s in today’s universe; in the past the universe was denser by (1+z)³, so at z = 10 that’s “only” 140 kpc. Still, modern galaxies are much smaller than that; the effective edge of the disk of the Milky Way is at a radius of about 20 kpc, and most of the baryonic mass is concentrated well inside that: the typical half-light radius of a 10¹¹ M☉ galaxy is around 6 kpc. That’s a long way to collapse.
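“Going through the math” here is a couple of lines; a sketch reproducing the numbers just quoted:

```python
import math

G = 6.674e-8                    # gravitational constant [cgs]
H0 = 73 * 1e5 / 3.0857e24       # Hubble constant in 1/s
M_SUN, PC = 1.99e33, 3.0857e18  # solar mass [g], parsec [cm]

rho_crit = 3 * H0**2 / (8 * math.pi * G)  # ~1e-29 g/cm^3
rho_b = 0.04 * rho_crit                   # baryon density for Omega_b = 0.04

# Radius of the sphere holding 1e11 Msun of baryons: M = (4/3) pi R^3 rho
M = 1e11 * M_SUN
R = (3 * M / (4 * math.pi * rho_b)) ** (1.0 / 3.0)
print(R / PC / 1e6, "Mpc today")               # ~1.6 Mpc
print(R / PC / 1e3 / (1 + 10), "kpc at z=10")  # ~140 kpc: density is (1+z)^3 higher
```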

Monolithic Galaxy Formation

Given this much information, an early concept was monolithic galaxy formation. We have a big ball of gas in the early universe that collapses to form a galaxy. Why and how this got started was fuzzy. But we knew how much mass we needed and the volume it had to come from, so we can consider what happens as the gas collapses to create a galaxy.

Here we hit a big astrophysical reality check. Just how does the gas collapse? It has to dissipate energy to do so, and cool to form stars. Once stars form, they may feed energy back into the surrounding gas, reheating it and potentially preventing the formation of more stars. These processes are nontrivial to compute ab initio, and attempting to do so obsesses much of the community. We don’t agree on how these things work, so they are the knobs theorists can turn to change an answer they don’t like.

Even if we don’t understand star formation in detail, we do observe that stars have formed, and can estimate how many. Moreover, we do understand pretty well how stars evolve once formed. Hence a common approach is to build stellar population models with some prescribed star formation history and see what works. Spiral galaxies like the Milky Way formed a lot of stars in the past, and continue to do so today. To make 5 × 10¹⁰ M☉ of stars in 13 Gyr requires an average star formation rate of 4 M☉/yr. The current measured star formation rate of the Milky Way is estimated to be 2 ± 0.7 M☉/yr, so the star formation rate has been nearly constant (averaging over stochastic variations) over time, perhaps with a gradual decline. Giant elliptical galaxies, in contrast, are “red and dead”: they have no current star formation and appear to have made most of their stars long ago. Rather than a roughly constant rate of star formation, they peaked early and declined rapidly. The cessation of star formation is also called quenching.

A common way to formulate the star formation rate in galaxies as a whole is the exponential star formation rate, SFR(t) = SFR0 e^(−t/τ). A spiral galaxy has a low baseline star formation rate SFR0 and a long burn time τ ~ 10 Gyr while an elliptical galaxy has a high initial star formation rate and a short e-folding time like τ ~ 1 Gyr. Many variations on this theme are possible, and are of great interest astronomically, but this basic distinction suffices for our discussion here. From the perspective of the observed mass and stellar populations of local galaxies, the standard picture for a giant elliptical was a large, monolithic island universe that formed the vast majority of its stars early on then quenched with a short e-folding timescale.
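Integrating that rate gives the stellar mass formed by time t: M*(t) = SFR0·τ·(1 − e^(−t/τ)). A sketch contrasting the two fiducial cases; the SFR0 normalizations are illustrative choices that land near the local masses quoted above, and gas recycling is ignored:

```python
import numpy as np

def stellar_mass(t_gyr, sfr0, tau_gyr):
    """Mass formed by time t for SFR(t) = SFR0 * exp(-t/tau), in Msun
    (SFR0 in Msun/yr, times in Gyr; gas recycling ignored)."""
    return sfr0 * 1e9 * tau_gyr * (1.0 - np.exp(-t_gyr / tau_gyr))

t = 13.0  # Gyr of star formation
print(f"spiral    : {stellar_mass(t, sfr0=5.5, tau_gyr=10):.1e} Msun")
print(f"elliptical: {stellar_mass(t, sfr0=90., tau_gyr=1):.1e} Msun")
# Both reach several times 1e10 Msun, but the elliptical makes nearly
# all of its stars in the first couple Gyr and is "red and dead" after.
```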

Galaxies as Island Universes

The density parameter Ω provides another useful way to think about galaxy formation. As cosmologists, we obsess about the global value of Ω because it determines the expansion history and ultimate fate of the universe. Here it has a more modest application. We can think of the region in the early universe that will ultimately become a galaxy as its own little closed universe. With a density parameter Ω > 1, it is destined to recollapse.

A fun and funny fact of the Friedmann equation is that the matter density parameter Ωm → 1 at early times, so the early universe when galaxies form is matter dominated. It is also very uniform (more on that below). So any subset that is a bit more dense than average will have Ω > 1 just because the average is very close to Ω = 1. We can then treat this region as its own little universe (a “top-hat overdensity”) and use the Friedmann equation to solve for its evolution, as in this sketch:

The expansion of the early universe a(t) (blue line). A locally overdense region may behave as a closed universe, recollapsing in a finite time (red line) to potentially form a galaxy.

That’s great, right? We have a simple, analytic solution derived from first principles that explains how a galaxy forms. We can plug in the numbers to find how long it takes to form our basic, big 1011 M galaxy and… immediately encounter a problem. We need to know how overdense our protogalaxy starts out. Is its effective initial Ωm = 2? 10? What value, at what time? The higher it is, the faster the evolution from initially expanding along with the rest of the universe to decoupling from the Hubble flow to collapsing. We know the math but we still need to know the initial condition.
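For the closed “little universe,” the parametric (cycloid) solution of the Friedmann equation gives a collapse time t_coll = π·Ωi/[Hi·(Ωi − 1)^(3/2)] in terms of the initial density parameter Ωi and expansion rate Hi. A sketch of just how sensitive this is to the initial overdensity:

```python
import math

def collapse_time(Omega_i, H_i=1.0):
    """Time for a top-hat overdensity (a closed mini-universe) to recollapse,
    from the cycloid solution: t = pi * Omega_i / (H_i * (Omega_i - 1)^1.5).
    With H_i = 1, the answer is in units of the initial Hubble time."""
    return math.pi * Omega_i / (H_i * (Omega_i - 1) ** 1.5)

for Omega_i in (10.0, 2.0, 1.1, 1.00001):
    print(Omega_i, f"{collapse_time(Omega_i):.3g}")
# Omega_i = 2 recollapses within ~2 pi initial Hubble times;
# Omega_i = 1.00001 takes ~1e8 -- effectively forever, as discussed below.
```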

Annoying Initial Conditions

The initial condition for galaxy formation is observed in the cosmic microwave background (CMB) at z = 1090. Where today’s universe is remarkably lumpy, the early universe is incredibly uniform. It is so smooth that it is homogeneous and isotropic to one part in a hundred thousand. This is annoyingly smooth, in fact. It would help to have some lumps – primordial seeds with Ω > 1 – from which structure can grow. The observed seeds are too tiny; the typical initial amplitude is 10⁻⁵ so Ωm = 1.00001. That takes forever to decouple and recollapse; it hasn’t yet had time to happen.

The cosmic microwave background as observed by ESA’s Planck satellite. This is an all-sky picture of the relic radiation field – essentially a snapshot of the universe when it was just a few hundred thousand years old. The variations in color are variations in temperature which correspond to variations in density. These variations are tiny, only about one part in 100,000. The early universe was very uniform; the real picture is a boring blank grayscale. We have to crank the contrast way up to see these minute variations.

We would like to know how the big galaxies of today – enormous agglomerations of stars and gas and dust separated by inconceivably vast distances – came to be. How can this happen starting from such homogeneous initial conditions, where all the mass is equally distributed? Gravity is an attractive force that makes the rich get richer, so it will grow the slight initial differences in density, but it is also weak and slow to act. A basic result in gravitational perturbation theory is that overdensities grow at the same rate the universe expands, which is inversely related to redshift. So if we see tiny fluctuations in density with amplitude 10⁻⁵ at z = 1000, they should have only grown by a factor of 1000 and still be small today (10⁻² at z = 0). But we see structures of much higher contrast than that. You can’t get here from there.

The rich large scale structure we see today is impossible starting from the smooth observed initial conditions. Yet here we are, so we have to do something to goose the process. This is one of the original motivations for invoking cold dark matter (CDM). If there is a substance that does not interact with photons, it can start to clump up early without leaving too large a mark on the relic radiation field. In effect, the initial fluctuations in mass are larger, just in the invisible substance. (That’s not to say the CDM doesn’t leave a mark on the CMB; it does, but it is subtle and entirely another story.) So the idea is that dark matter forms gravitational structures first, and the baryons fall in later to make galaxies.

An illustration of the linear growth of overdensities. Structure can grow in the dark matter (long dashed lines) with the baryons catching up only after decoupling (short dashed line). In effect, the dark matter gives structure formation a head start, nicely explaining the apparently impossible growth factor. This has been the standard picture for what seems like forever (illustration from Schramm 1992).

With the right amount of CDM – and it has to be just the right amount of a dynamically cold form of non-baryonic dark matter (stuff we still don’t know actually exists) – we can explain how the growth factor is 10^5 since recombination instead of a mere 10^3. The dark matter got a head start over the stuff we can see; it looks like 10^5 because the normal matter lagged behind, being entangled with the radiation field in a way the dark matter was not.
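Turned around, this says how big a head start the dark matter needs; a sketch following from the arithmetic above (the amplitudes are illustrative, not a fit):

# Head start required of the dark matter at recombination:
growth_since_recombination = 1e3   # linear growth factor, delta ∝ a
delta_needed_today = 1.0           # onset of nonlinear collapse
delta_cdm_initial = delta_needed_today / growth_since_recombination
print(delta_cdm_initial)           # 1e-3: a hundred times the 1e-5
                                   # amplitude seen in the photons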

This has been the imperative need in structure formation theory for so long that it has become undisputed lore; an element of the belief system so deeply embedded that it is practically impossible to question. I risk getting ahead of the story, but it is important to point out that, like the interpretation of so much of the relevant astrophysical data, this belief assumes that gravity is normal. This assumption dictates the growth rate of structure, which in turn dictates the need to invoke CDM to allow structure to form in the available time. If we drop this assumption, then we have to work out what happens in each and every alternative that we might consider. That definitely gets ahead of the story, so first let’s understand what we should expect in LCDM.

Hierarchical Galaxy Formation in LCDM

LCDM predicts some things remarkably well but others not so much. The dark matter is well-behaved, responding only to gravity. Baryons, on the other hand, are messy – one has to worry about hydrodynamics in the gas, star formation, feedback, dust, and probably even magnetic fields. In a nutshell, LCDM simulations are very good at predicting the assembly of dark mass, but converting that into observational predictions relies on our incomplete knowledge of messy astrophysics. We know what the mass should be doing, but we don’t know so well how that translates to what we see. Mass good, light bad.

Starting with the assembly of mass, the first thing we learn is that the story of monolithic galaxy formation outlined above has to be wrong. Early density fluctuations start out tiny, even in dark matter. God didn’t plunk down island universes of galaxy mass then say “let there be galaxies!” The annoying initial conditions mean that little dark matter halos form first. These subsequently merge hierarchically to make ever bigger halos. Rather than top-down monolithic galaxy formation, we have the bottom-up hierarchical formation of dark matter halos.

The hierarchical agglomeration of dark matter halos into ever larger objects is often depicted as a merger tree. Here are four examples from the high resolution Illustris TNG50 simulation (Pillepich et al. 2019; Nelson et al. 2019).

Examples of merger trees from the TNG50-1 simulation (Pillepich et al. 2019; Nelson et al. 2019). Objects have been selected to have very nearly the same stellar mass at z=0. Mass is built up through a series of mergers. One large dark matter halo today (at top) has many antecedents (small halos at bottom). These merge hierarchically as illustrated by the connecting lines. The size of the symbol is proportional to the halo mass. I have added redshift and the corresponding age of the universe for vanilla LCDM in a more legible font. The color bar illustrates the specific star formation rate: the top row has objects that are still actively star forming like spirals; those in the bottom row are “red and dead” – things that have stopped forming stars, like giant elliptical galaxies. In all cases, there is a lot of merging and a modest rate of growth, with the typical object taking about half a Hubble time (~7 Gyr) to assemble half of its final stellar mass.

The hierarchical assembly of mass is generic in CDM. Indeed, it is one of its most robust predictions. Dark matter halos start small, and grow larger by a succession of many mergers. This gradual agglomeration is slow: note how tiny the dark matter halos at z = 10 are.

Strictly speaking, it isn’t even meaningful to talk about a single galaxy over the span of a Hubble time. It is hard to avoid this mental trap: surely the Milky Way has always been the Milky Way? So one imagines its evolution over time. This is monolithic thinking. Hierarchically, “the galaxy” refers at best to the largest progenitor, the object that traces the left edge of the merger trees above. But the other protogalactic chunks that eventually merge together are as much part of the final galaxy as the progenitor that happens to be largest.

This complicated picture is complicated further by the fact that what we see is stars, not mass. The luminosity we observe forms through a combination of in situ growth (star formation in the largest progenitor) and ex situ growth through merging. There is no reason for some preferred set of protogalaxies to form stars faster than the others (though of course there is some scatter about the mean), so presumably the light traces the mass of stars formed, which traces the underlying dark mass. Presumably.

That we should see lots of little protogalaxies at high redshift is nicely illustrated by this lookback cone from Yung et al. (2022). Here the color and size of each point correspond to the stellar mass. Massive objects are common at low redshift but become progressively rare at high redshift, petering out at z > 4 and basically absent at z = 10. This realization of the observable stellar mass tracks the assembly of dark mass seen in merger trees.

Fig. 2 from Yung et al. (2022) illustrating what an observer would see looking back through their simulation to high redshift.

This is what we expect to see in LCDM: lots of small protogalaxies at high redshift; the building blocks of later galaxies that had not yet merged. The observation of galaxies much brighter than this at high redshift by JWST poses a fundamental challenge to the paradigm: mass appears not to be subdivided as expected. So it is entirely justifiable that people have been freaking out that what we see are bright galaxies that are apparently already massive. That shouldn’t happen; it wasn’t predicted to happen; how can this be happening?

That’s all background that is assumed knowledge for our ApJ paper, so we’re only now getting to its Figure 1. This combines one of the merger trees above with its stellar mass evolution. The left panel shows the assembly of dark mass; the right panel shows the growth of stellar mass in the largest progenitor. This is what we expect to see in observations.


Fig. 1 from McGaugh et al. (2024): A merger tree for a model galaxy from the TNG50-1 simulation (Pillepich et al. 2019; Nelson et al. 2019, left panel) selected to have M* ≈ 9 × 10^10 M☉ at z = 0; i.e., the stellar mass of a local L* giant elliptical galaxy (Driver et al. 2022). Mass assembles hierarchically, starting from small halos at high redshift (bottom edge) with the largest progenitor traced along the left edge of the merger tree. The growth of stellar mass of the largest progenitor is shown in the right panel. This example (jagged line) is close to the median (dashed line) of comparable mass objects (Rodriguez-Gomez et al. 2016), and within the range of the scatter (the shaded band shows the 16th – 84th percentiles). A monolithic model that forms at z_f = 10 and evolves with an exponentially declining star formation rate with τ = 1 Gyr (purple line) is shown for comparison. The latter model forms most of its stars earlier than occurs in the simulation.

For comparison, we also show the stellar mass growth of a monolithic model for a giant elliptical galaxy. This is the classic picture we had for such galaxies before we realized that galaxy formation had to be hierarchical. This particular monolithic model forms at z_f = 10 and follows an exponentially declining star formation rate with τ = 1 Gyr. It is one of the models published by Franck & McGaugh (2017). It is, in fact, the first model I asked Jay to construct when he started the project. Not because we expected it to best describe the data, as it turns out to do, but because the simple exponential model is a touchstone of stellar population modeling. It was a starter model: do this basic thing first to make sure you’re doing it right. We chose τ = 1 Gyr because that was the typical number bandied about for elliptical galaxies, and z_f = 10 because that seemed ridiculously early for a massive galaxy to form. From an LCDM perspective at the time we built the model, it was ludicrously early. A formation redshift z_f = 10 was, less than a decade ago, practically indistinguishable from the beginning of time, so we expected it to provide a limit that the data would not possibly approach.
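For concreteness, a minimal sketch of this monolithic model (τ = 1 Gyr and z_f = 10 are the values from the text; the age of the universe at z_f, ~0.48 Gyr, assumes vanilla LCDM; stellar mass loss is ignored):

import numpy as np

# Monolithic model: star formation starts at z_f = 10 and declines
# exponentially, SFR ∝ exp(-(t - t_f)/tau), so the stellar mass formed
# by cosmic time t is M*(t) = M_final * (1 - exp(-(t - t_f)/tau)).
t_f, tau = 0.48, 1.0    # Gyr; t_f is the age of the universe at z_f = 10

def mass_fraction(t):
    """Fraction of the final stellar mass in place at cosmic time t (Gyr)."""
    return 1.0 - np.exp(-(np.maximum(t, t_f) - t_f) / tau)

for t in (1.0, 1.5, 2.0, 13.8):
    print(f"t = {t:4.1f} Gyr: {100 * mass_fraction(t):5.1f}% of final M*")

# Half the stars are in place by t_f + tau*ln(2) ~ 1.2 Gyr (z ~ 5),
# versus ~7 Gyr for the median simulated galaxy in the merger trees above.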

In a remarkably short period, JWST has transformed z = 10 from inconceivable to run of the mill. I’m not going to go into the data yet – this all-theory post is already a lot – but to offer one spoiler: the data are consistent with this monolithic model. If we want to “fix” LCDM, we have to make the red line into the purple line for enough objects to explain the data. That proves to be challenging. But that’s moving the goalposts; the prediction was that we should see little protogalaxies at high redshift, not massive, monolith-style objects. Just look at the merger trees at z = 10!

Accelerated Structure Formation in MOND

In order to address these issues in MOND, we have to go back to the beginning. What is the evolution of a spherical region (a top-hat overdensity) that might collapse to form a galaxy? How does a spherical region under the influence of MOND evolve within an expanding universe?

The solution to this problem was first found by Felten (1984), who was trying to play the Newtonian cosmology trick in MOND. In conventional dynamics, one can solve the equation of motion for a point on the surface of a uniform sphere that is initially expanding and recover the essence of the Friedmann equation. It was reasonable to check if cosmology might be that simple in MOND. It was not. The appearance of a_0 as a physical scale makes the solution scale-dependent: there is no general solution that one can imagine applies to the universe as a whole.

Felten reasonably saw this as a failure. There were, however, some appealing aspects of his solution. For one, there was no such thing as a critical density. All MOND universes would eventually recollapse irrespective of their density (in the absence of the repulsion provided by a cosmological constant). It could take a very long time, which depended on the density, but the ultimate fate was always the same. There was no special value of Ω, and hence no flatness problem. The latter obsessed people at the time, so I’m somewhat surprised that no one seems to have made this connection. Too soon*, I guess.

There it sat for many years, an obscure solution for an obscure theory to which no one gave credence. When I became interested in the problem a decade later, I started methodically checking all the classic results. I was surprised to find how many things we needed dark matter to explain were just as well (or better) explained by MOND. My exact quote was “surprised the bejeepers out of us.” So, what about galaxy formation?

I started with the top-hat overdensity, and had the epiphany that Felten had already obtained the solution. He had been trying to solve all of cosmology, which didn’t work. But he had solved the evolution of a spherical region that starts out expanding with the rest of the universe but subsequently collapses under the influence of MOND. The overdensity didn’t need to be large, it just needed to be in the low acceleration regime. Something like the red cycloidal line in the second plot above could happen in a finite time. But how long?

The solution depends on scale and needs to be solved numerically. I am not the greatest programmer, and I had a lot else on my plate at the time. I was in no rush, as I figured I was the only one working on it. This is usually a good assumption with MOND, but not in this case. Bob Sanders had had the same epiphany around the same time, which I discovered when I received his manuscript to referee. So all credit is due to Bob: he said these things first.

First, he noted that galaxy formation in MOND is still hierarchical. Small things form first. Crudely speaking, structure formation is very similar to the conventional case, but now the goose comes from the change in the force law rather than extra dark mass. MOND is nonlinear, so the whole process gets accelerated. To compare with the linear growth of CDM:

A sketch of how structures grow over time under the influence of cold dark matter (left, from Schramm 1992, same as above) and MOND (right, from Sanders & McGaugh 2002; see also this further discussion and previous post). The slow linear growth of CDM (long-dashed line, left panel) is replaced by a rapid, nonlinear growth in MOND (solid lines at right; numbers correspond to different scales). Nonlinear growth moderates after cosmic expansion begins to accelerate (dashed vertical line in right panel).

The net effect is the same. A cosmic web of large scale structure emerges. They look qualitatively similar, but everything happens faster in MOND. This is why observations have persistently revealed structures that are more massive and were in place earlier than expected in contemporaneous LCDM models.

Simulated structure formation in ΛCDM (top) and MOND (bottom) showing the more rapid emergence of similar structures in MOND (note the redshift of each panel). From McGaugh (2015).

In MOND, small objects like globular clusters form first, but galaxies of a range of masses all collapse on a relatively short cosmic timescale. How short? Let’s consider our typical 10^11 M☉ galaxy. Solving Felten’s equation for the evolution of a sphere numerically, peak expansion is reached after 300 Myr and collapse happens in a similar time. The whole galaxy is in place speedy quick, and the initial conditions don’t really matter: a uniform, initially expanding sphere in the low acceleration regime will behave this way. From our distant vantage point thirteen billion years later, the whole process looks almost monolithic (the purple line above) even though it is a chaotic hierarchical mess for the first few hundred million years (z > 14). In particular, it is easy to form half of the stellar mass early on: the mass is already assembled.

The evolution of a 10^11 M☉ sphere that starts out expanding with the universe but decouples and collapses under the influence of MOND (dotted line). It reaches maximum expansion after 300 Myr and recollapses in a similar time, so the entire object is in place after 600 Myr. (A version of this plot with a logarithmic time axis appears as Fig. 2 in our paper.) The inset shows the evolution of smaller shells within such an object (Fig. 2 from Sanders 2008). The inner regions collapse first, followed by outer shells. These oscillate and cross, mixing and ultimately forming a reasonable size galaxy – see Sanders’s Table 1 and also his Fig. 4 for the collapse times for objects of other masses. These early results are corroborated by Eappen et al. (2022), who further demonstrate that the details of feedback are not important in MOND, unlike LCDM.
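The qualitative behavior is easy to reproduce with a toy integrator. This is a sketch, not Felten’s full solution: the initial radius and expansion speed are illustrative choices for a 10^11 M☉ region that has recently decoupled from the Hubble flow, and the “simple” interpolation function is just one common choice.

import numpy as np

G, a0 = 6.674e-11, 1.2e-10             # SI units; a0 is Milgrom's constant
Msun, kpc, Myr = 1.989e30, 3.086e19, 3.156e13
M = 1e11 * Msun                        # baryonic mass of the protogalaxy

def g_mond(r):
    """MOND acceleration of a point mass with the 'simple' interpolation:
    GM/r^2 at high accelerations, sqrt(G*M*a0)/r in the deep-MOND limit."""
    gN = G * M / r**2
    return gN * (0.5 + np.sqrt(0.25 + a0 / gN))

# Illustrative initial conditions: a sphere of radius 10 kpc, still
# expanding at 250 km/s. (Felten and Sanders follow the sphere from much
# earlier times; this just shows the turnaround and recollapse.)
r, v, t, dt = 10 * kpc, 2.5e5, 0.0, 0.05 * Myr
t_turn = None
while r > 1 * kpc:                     # stop well before the singular r -> 0
    v -= g_mond(r) * dt                # semi-implicit Euler step
    r += v * dt
    t += dt
    if t_turn is None and v < 0:
        t_turn = t
print(f"turnaround ~{t_turn / Myr:.0f} Myr, recollapse by ~{t / Myr:.0f} Myr")

# Both times land at a few hundred Myr, the ballpark of the full solution
# described above; the exact numbers shift with the assumed initial
# conditions, which is why the proper calculation from early times matters.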

This is what JWST sees: galaxies that are already massive when the universe is just half a billion years old. I’m sure I should say more but I’m exhausted now and you may be too, so I’m gonna stop here by noting that in 1998, when Bob Sanders predicted that “Objects of galaxy mass are the first virialized objects to form (by z=10),” the contemporaneous prediction of LCDM was that “present-day disc [galaxies] were assembled recently (at z<=1)” and “there is nothing above redshift 7.” One of these predictions has been realized. It is rare in science that such a clear a priori prediction comes true, let alone one that seemed so unreasonable at the time, and which took a quarter century to corroborate.


*I am not quite this old: I was still an undergraduate in 1984. I hadn’t even decided to be an astronomer at that point; I certainly hadn’t started following the literature. The first time I heard of MOND was in a graduate course taught by Doug Richstone in 1988. He only mentioned it in passing while talking about dark matter, writing the equation on the board and saying maybe it could be this. I recall staring at it for a long few seconds, then shaking my head and muttering “no way.” I then completely forgot about it, not thinking about it again until it came up in our data for low surface brightness galaxies. I expect most other professionals have the same initial reaction, which is fair. The test of character comes when it crops up in their data, as it is doing now for the high redshift galaxy community.

What if we never find dark matter?

Some people have asked me to comment on the Scientific American article What if We Never Find Dark Matter? by Slatyer & Tait. For the most part, I find it unobjectionable – from a certain point of view. It is revealing to examine this point of view, starting with the title, which frames the subject in a way that gives us permission to believe in dark matter while never finding it. This framing is profoundly unscientific, as it invites a form of magical thinking that could usher in a thousand years of dark epicycles (feedback being the modern epicycle) on top of the decades it has already sustained.

The article does recognize that a modification of gravity is at least a logical possibility. The mere mention of this is progress, if grudging and slow. They can’t bring themselves to name a specific theory: they never say MOND and only allude obliquely to a single relativistic theory as if saying its name out loud would bring a curse% upon their house.

Of course, they mention modified gravity merely to dismiss it:

A universe without dark matter would require striking modifications to the laws of gravity… [which] seems exceptionally difficult.

Yes it is. But it has also proven exceptionally difficult to detect dark matter. That hasn’t stopped people from making valiant efforts to do so. So the argument is that we should try really hard to accomplish the exceptionally difficult task of detecting dark matter, but we shouldn’t bother trying to modify gravity because doing so would be exceptionally difficult.

This speaks to motivations – is one idea better motivated? In the 1980s, cold dark matter was motivated by both astronomical observations and physical theory. Absent the radical thought of modifying gravity, we had a clear need for unseen mass. Some of that unseen mass could simply have been undetected normal matter, but most of it needed to be some form of non-baryonic dark matter that exceeded the baryon density allowed by Big Bang Nucleosynthesis and did not interact directly with photons. That meant entirely new physics from beyond the Standard Model of particle physics: no particle in the known stable of particles suffices. This new physics was seen as a good thing, because particle physicists already had the feeling that there should be something more than the Standard Model. There was a desire for Grand Unified Theories (GUTs) and supersymmetry (SUSY). SUSY naturally provides a home for particles that could be the dark matter, in particular the Weakly Interacting Massive Particles (WIMPs) that are the prime target for the vast majority of experiments that are working to achieve the exceptionally difficult task of detecting them. So there was a confluence of reasons from very different perspectives to make the search for WIMPs very well motivated.

That was then. Fast forward a few decades, and the search for WIMPs has failed. Repeatedly. Continuing to pursue it is an example of the sunk cost fallacy. We keep doing it because we’ve already done so much of it that surely we should keep going. So I feel the need to comment on this seemingly innocuous remark:

although many versions of supersymmetry predict WIMP dark matter, the converse isn’t true; WIMPs are viable dark matter candidates even in a universe without supersymmetry.

Strictly speaking, this is correct. It is also weak sauce. The neutrino is an example of a weakly interacting particle that has some mass. We know neutrinos exist, and they reside in the Standard Model – no need for supersymmetry. We also know that they cannot be the dark matter, so it would be disingenuous to conflate the two. Beyond that, it is possible to imagine a practically infinite variety of particles that are weakly interacting but not part of supersymmetry. That’s just throwing mud at the wall. SUSY WIMPs were extraordinarily well motivated, with the WIMP miracle being the beautiful argument that launched a thousand experiments. But lacking SUSY – which seems practically dead at this juncture – WIMPs as originally motivated are dead along with it. The motivation for more generic WIMPs is lacking, so the above statement is nothing more than an assertion that runs interference for the fact that we no longer have good reason to expect WIMPs at all.

There is also an element of disciplinary-centric thinking: if you’re a particle physicist, you can build a dark matter detector and maybe make a major discovery or at least get great gobs of grants in the effort to do so. If instead what is going on is really a modification of gravity, then your expertise is irrelevant and there is no reason to keep shoveling money into your field. Worse, a career spent at the bottom of a mine shaft working on dark matter detectors is a waste of effort. I can understand why people don’t want to hear that message, but that just brings us back to the sunk cost fallacy.

Speaking of money, I occasionally get scientists who come up to me Big Mad that grant money gets spent on MOND research, as that would be a waste of taxpayer money. I can assure them that no government dollars have been harmed in the pursuit of MOND research. Certainly not in the U.S., at any rate. But lots and lots of tax dollars have been burned in the search for dark matter, and the article we’re discussing advocates spending a whole lot more to search for dark matter candidates that are nowhere near as well motivated as WIMPs were. That’s why I keep asking: how do we know when to stop? I don’t expect other scientists to agree with my interpretation of the data, but I do expect them to have a criterion whereby they would concede that dark matter is incorrect. If we lack any notion of how we could figure out that we are wrong, then we’ve made the leap from science to religion. So far, such criteria are sadly lacking, and I see precious little evidence of people rising to the challenge. Indeed, I frequently get the opposite, as other scientists have asserted to me that they would only consider MOND as a last resort. OK, when does that happen? There’s always another particle we can think up, so the answer seems to be “never.”

I wrote long ago that “After WIMPs, the next obvious candidate is axions.” Sure enough, this article spills a lot of ink discussing axions. Rather than dwell on this different doomed idea for dark matter, let’s take a gander at the remarkable art made to accompany the article, because we are visual animals and graphical representations are important.

Artwork by Olena Shmahalo that accompanies the article by Slatyer & Tait.

Where to start? Right in the center is a scroll of an old-timey star chart. On top of that are several depictions of what I guess are meant to be galaxies*. Around those is an ethereal dragon representing the unknown dark matter. The depiction of dark matter as an unfathomable monster is at once both spot on and weirdly anthropomorphic. Is this a fabled beast the adventurous hero is supposed to seek out and slay? or befriend? or maybe it is a tale in which he grows during the journey to realize he has been on the wrong path the whole time? I love the dragon as art, but as a representation of a scientific subject it imparts an aura of teleological biology to something that is literally out of this world, residing in a dark sector that is not part of our daily experience and may be entirely inaccessible to our terrestrial experimentation. Off the edge of the map and on into extra dimensions: here there be monsters.

The representations here are fantastic. There is the coffee mug and the candle to represent the hard work of those of us who burn the candle at both ends wrestling with the dark matter problem. There’s a magnifying glass to represent how hard the experimentalists have looked for the dark matter. Scattered around are various totems, like the Polaroid-style picture at right depicting the gravitational lensing around a black hole. This is cool, but has squat to do with the missing mass problem. It’s more a nod to General Relativity and the Faith we have therein, albeit in a regime many orders of magnitude removed from the one that concerns us here. On the left is an old newspaper article about WIMPs, complete with a sketch of a Feynman diagram that depicts how we might detect them. And at the top, peeking out of a book, as it were a thought made long ago now seeking new relevance, a note saying Axions!

I can save everyone a lot of time, effort, and expense. It ain’t WIMPs and it ain’t axions. Nor is the dark matter any of the plethora of other ideas illustrated in the eye-watering depiction of the landscape of particle possibilities in the article. These simply add mass while providing no explanation of the observed MOND phenomenology. This phenomenology is fundamental to the problem, so any approach that ignores it is doomed to failure. I’m happy to consider explanations based on dark matter, but these need to have a direct connection to baryons baked-in to be viable. None of the ideas they discuss meet this minimum criterion.

Of course it could be that MOND – either as modified gravity or modified inertia, an important possibility that usually gets overlooked – is essentially correct and that’s why it keeps having predictions come true. That’s what motivates considering it now: repeated and sustained predictive success, particularly for phenomena that dark matter does not provide a satisfactory explanation for.

Of course, this article advocating dark matter is at pains to dismiss modified gravity as a possibility:

The changes [of modified gravity] would have to mimic the effects of dark matter in astrophysical systems ranging from giant clusters of galaxies to the Milky Way’s smallest satellite galaxies. In other words, they would need to apply across an enormous range of scales in distance and time, without contradicting the host of other precise measurements we’ve gathered about how gravity works. The modifications would also need to explain why, if dark matter is just a modification to gravity—which is universally associated with all matter—not all galaxies and clusters appear to contain dark matter. Moreover, the most sophisticated attempts to formulate self-consistent theories of modified gravity to explain away dark matter end up invoking a type of dark matter anyway, to match the ripples we observe in the cosmic microwave background, leftover light from the big bang.

That’s a lot, so let’s break it down. First, that modified gravity “would have to mimic the effects of dark matter” gets it exactly backwards. It is dark matter that has to mimic the effects of MOND. That’s an easy call: dark matter plus baryons could combine in a large variety of ways that might bear no resemblance to MOND. Indeed, they should do that: the obvious prediction of LCDM-like theories is an exponential disk in an NFW halo. In contrast, there is one and only one thing that can happen in MOND since there is a single effective force law that connects the dynamics to the observed distribution of baryons. Galaxies didn’t have to do that, shouldn’t do that, but remarkably they do. The uniqueness of this relation poses a problem for dark matter that has been known since the previous century:

Reluctant conclusions from McGaugh & de Blok (1998). As we said at the time, “This result surprised the bejeepers out of us, too.”

This basic conclusion has not changed over the years, only gotten stronger. The equation coupling dark to luminous matter I wrote down in all generality in McGaugh (2004) and again in McGaugh et al. (2016). The latter paper is published in Physical Review Letters, arguably the most prominent physics journal, and is in the top percentile of citation rates, so it isn’t some minuscule detail buried in an obscure astronomical journal that might have eluded the attention of particle physicists. It is the implication that conclusion [1] could be correct that bounces off a protective shell of cognitive dissonance so hard that the necessary corollary [2] gets overlooked.

OK, that’s just the first sentence. Let’s carry on with “[the modification] would need to apply across an enormous range of scales in distance and time, without contradicting the host of other precise measurements we’ve gathered about how gravity works.” Well, duh. That’s the first thing I checked. Thoroughly and repeatedly. I’ve written many reviews on the subject. They’re either unaware of some well-established results, or choose to ignore them.

The reason MOND doesn’t contradict the host of other constraints about how gravity works is simple. It happens in the low acceleration regime, where the only test of gravity is provided by the data that evince the mass discrepancy. If we had posed galaxy observations as a test of GR, we would have concluded that it fails at low accelerations. Of course we didn’t do that; we observed galaxies because we were interested in how they worked, then inferred the need for dark matter when gravity as we currently know it failed to explain the data. Other tests, no matter how precise, are irrelevant if they probe accelerations higher than Milgrom’s constant (1.2 × 10^-10 m/s/s).
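It is easy to check where familiar tests of gravity sit relative to this scale; a quick sketch with round numbers:

G, Msun, AU, kpc = 6.674e-11, 1.989e30, 1.496e11, 3.086e19
a0 = 1.2e-10                                    # Milgrom's constant, m/s/s

tests = {
    "Earth's surface":             9.8,
    "Earth's orbit (1 AU)":        G * Msun / AU**2,
    "Neptune's orbit (30 AU)":     G * Msun / (30 * AU)**2,
    "Sun's orbit in the Galaxy":   (2.2e5)**2 / (8.2 * kpc),  # V^2/R
    "outer disk of an LSB galaxy": 1e-11,                     # typical value
}
for name, g in tests.items():
    print(f"{name:28s} g = {g:.1e} m/s/s ({'above' if g > a0 else 'below'} a0)")

# Every precision test of gravity sits orders of magnitude above a0; the
# Sun's galactic orbit is marginal, and only galaxy data probe well below it.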

Continuing on, there is the complaint that “modifications would also need to explain why… not all galaxies and clusters appear to contain dark matter.” Yep, you gotta explain all the data. That starts with the vast majority of the data that do follow the radial acceleration relation, which is not satisfactorily explained by dark matter. They skip+ past that part, preferring to ignore the forest in order to complain about a few outlying trees. There are some interesting cases, to be sure, but this complaint about objects lacking dark matter is misplaced for deeper reasons. It makes no sense in terms of dark matter that there are objects without dark matter. That shouldn’t happen in LCDM any more than in MOND$. One winds up invoking non-equilibrium effects, which we can do in MOND just as we do in dark matter. It is not satisfactory in either case, but it is weird to complain about it for one theory while not for the other. This line of argument is perilously close to the a priori fallacy.

The last line, “the most sophisticated attempts to formulate self-consistent theories of modified gravity to explain away dark matter end up invoking a type of dark matter anyway, to match the ripples we observe in the cosmic microwave background” actually has some merit. The theory they’re talking about is Aether-Scalar-Tensor (AeST) theory, which I guess earns the badge of “most sophisticated” because it fits the power spectrum of the cosmic microwave background (CMB).

I’ve discussed the CMB in detail before, so won’t belabor it here. I will note that the microwave background is only one piece of many lines of evidence, and the conclusion one reaches depends on how one chooses to weigh the various incommensurate evidence. That they choose to emphasize this one thing while entirely eliding the predictive successes of MOND is typical, but does not encourage me to take this as a serious argument, especially when I had more success predicting important aspects of the microwave background than did the entire community that persistently cites the microwave background to the exclusion of all else.

It is also a bit strange to complain that AeST “explain[s] away dark matter [but] end[s] up invoking a type of dark matter.” I think what they mean here is true at the level of quantum field theory where all particles are fields and all fields are particles, but beyond that, they aren’t the same thing at all. It is common for modified gravity theories to invoke scalar fields#, and this is an important degree of freedom that enables AeST to fit the CMB. TeVeS also added a scalar and tensor field, but could not fit the CMB, so this approach isn’t guaranteed to work. But are these a type of dark matter? Or are our ideas of dark matter mimicking a scalar field? It seems like this argument could cut either way, and we’re just granting dark matter priority as a concept because we thought of it first. I don’t think nature cares about the order of our thoughts.

None of this addresses the question of the year. Why does MOND get any predictions right? Just saying “dark matter does it” is not sufficient. Until scientists engage seriously with this question, they’re doomed to chasing phantoms that aren’t there to catch.


%From what I’ve seen, they’re probably right to fear the curses of their colleagues for such blasphemy. Very objective, very scientific.

*Galaxies are nature’s artwork; human imitations never seem adequate. These look more like fried eggs to me. On the whole, this art is exceptionally well informed by science, or at least by particle physics, but not so much by astronomy. And therein lies the greater problem: there is a whole field of physics devoted to dark matter that is entirely motivated by astronomical observations yet its practitioners are, by and large, remarkably ignorant of anything more than the most rudimentary aspects of the data that motivate their field’s existence.

+There seems to be a common misconception that anything we observe is automatically explained by dark matter. That’s only true at the level of inference: any excess gravity is attributable to unseen mass. That’s why a hypothesis is only as good as its prior; a mere inference isn’t science: you have to make a prediction. Once you do that, you find dark matter might do lots of things that are not at all like the MONDian phenomenology that we observe. While I would hope the need for predictions is obvious, many scientists seem to conflate observation with prediction – if we observe it, that’s what dark matter must predict!

$The discrepancy should only appear below the critical acceleration scale in MOND. So strictly speaking, MOND does predict that there should be objects without dark matter: systems that are high acceleration. The central regions of globular clusters and elliptical galaxies are such regions, and MOND fares well there. In contrast, it is rather hard to build a sensible dark matter model that is as baryon dominated as observed. So this is an example of MOND explaining the absence of dark matter better than dark matter theory. This is related to the observation that the apparent need for dark matter only appears at low accelerations, at a scale that dark matter knows nothing about.

#I, personally, am skeptical of this approach, as it seems too generic (let’s add some new freedom!) when it feels like we’re missing something fundamental, perhaps along the lines of Mach’s Principle. However, I also recognize that this is a feeling on my part; it is outside my training to have a meaningful opinion.

Sociology in the hunt for dark matter

Sociology in the hunt for dark matter

Who we give prizes to is more a matter of sociology than science. Good science is a prerequisite, but after that it is a matter of which results we value in the here and now. Results that are guaranteed to get a Nobel prize, like the detection of dark matter, attract many suitors who pursue them vigorously. Results that come as a surprise can be more important than the expected results, but it takes a lot longer to recognize and appreciate them.

When there are expected results with big stakes, sociology kicks into hyperdrive. Let’s examine the attitudes in some recent quotes:

In Science, Hunt for dark matter particles bags nothing—again (24 Aug 2024): Chamkaur Ghag says

If WIMPs were there, we have the sensitivity to have seen them

which is true. WIMP detection experiments have succeeded in failing. They have explored the predicted parameter space. But in the same paragraph, it is said that it is too early to “give up hope of detecting WIMPs.” That is a pretty vague assertion, and is precisely why I’ve been asking other scientists to define a criterion by which we could agree that enough was enough already. How do we know when to stop looking?

The same paragraph ends with

This is our first real foray into discovery territory

which is not true. We’ve explored the region in which WIMPs were predicted to reside over and over and over again. This was already excruciatingly old news when I wrote about it in 2008. The only way to spin this as a factual statement is to admit that the discovery territory is practically infinite, in which case we can assert that every foray is our first “real” foray because we’ll never get anywhere relative to infinity. It sounds bad when put that way, which is the opposite of the positivity the spokespeople for huge experiments are appointed to project.

And that’s where the sociology kicks in. The people who do the experiments want to keep doing the experiments until they discover dark matter and win the Nobel prize. It’s disappointing that this hasn’t happened already, but it is an expected result. It’s what they do, so it’s natural to want to keep at it.

On the one hand, I’d like to see these experiments continue until they reach the neutrino fog, at which point they will provide interesting astrophysical information. Says Michael Murra (in Science News, 25 July 2024)

It’s very cool to see that we can turn this detector into a neutrino observatory

Yes, it is. But that wasn’t the point, was it?

On the other hand, I do not expect these experiments to ever detect dark matter. That’s because I understand that the astronomical data contain self-contradictions to their interpretation in terms of dark matter. Any particle physicist will tell you that astronomical data require dark matter. But they’re not experts on that topic; I am. I’ve talked to enough of them at this point to conclude that the typical physicist working on dark matter has only a cartoonish understanding of the data that motivate their whole field. After all,

It is difficult to get a man to understand something, when his salary depends on his not understanding it.

Upton Sinclair


Decision Trees & Philosophical Blunders

Decision Trees & Philosophical Blunders

Given recent developments in the long-running hunt for dark matter and the difficulty interpreting what this means, it seems like a good juncture to re-up* this:


The history of science is a decision tree. Vertices appear where we must take one or another branching. Sometimes, we take the wrong road for the right reasons.

A good example is the geocentric vs. heliocentric cosmology. The ancient Greeks knew that in many ways it made more sense for the earth to revolve around the sun than vice-versa. Yet they were very clever. Ptolemy and others tested for the signature of the earth’s orbit in the seasonal wobbling in the positions of stars, or parallax. If the earth is moving around the sun, nearby stars should appear to move on the sky as the earth moves from one side of the sun to the other. Try blinking back and forth between your left and right eyes to see this effect, noting how nearby objects appear to move relative to distant ones.

Problem is, Ptolemy did not find the parallax. Quite reasonably, he inferred that the earth stayed put. We know now that this was the wrong branch to choose, but it persisted as the standard world view for many centuries. It turns out that even the nearest stars are so distant that their angular parallax is tiny (the angle of parallax is inversely proportional to distance). Precision sufficient for measuring the parallax was not achieved until the 19th century, by which time astronomers were already convinced it must happen.
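To put numbers on Ptolemy’s predicament, a quick sketch (the stellar distance is the modern value; the precision figure for pre-telescopic astronomy is a generous round number):

# Parallax in arcseconds is 1/distance in parsecs, by definition of the parsec.
d_nearest_pc = 1.34               # alpha Centauri, the nearest stellar system
parallax = 1.0 / d_nearest_pc     # ~0.75 arcseconds
naked_eye = 60.0                  # ~1 arcminute, generous pre-telescopic precision
print(naked_eye / parallax)       # ~80: the signal is nearly two orders of
                                  # magnitude below what Ptolemy could measure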

Ptolemy was probably aware of this possibility, though it must have seemed quite unreasonable to conjecture at that time that the stars could be so very remote. The fact was that parallax was not observed. Either the earth did not move, or the stars were ridiculously distant. Which sounds more reasonable to you?

So, science took the wrong branch. Once this happened, sociology kicked in. Generation after generation of intelligent scholars confirmed the lack of parallax until the opposing branch seemed so unlikely that it became heretical to even discuss. It is very hard to reverse back up the decision tree and re-assess what seems to be such a firm conclusion. It took the Copernican revolution to return to that ancient decision branch and try the other one.

Cosmology today faces a similar need to take a few steps back on the decision tree. The problem now is the issue of the mass discrepancy, typically attributed to dark matter. When it first became apparent that things didn’t add up when one applied the usual Law of Gravity to the observed dynamics of galaxies, there was a choice. Either lots of matter is present which happens to be dark, or the Law of Gravity has to be amended. Which sounds more reasonable to you?

Having traveled down the road dictated by the Dark Matter decision branch, cosmologists find themselves trapped in a web of circular logic entirely analogous to the famous Ptolemaic epicycles. Not many of them realize it yet, much less admit that this is what is going on. But if you take a few steps back up the decision branch, you find a few attempts to alter the equations of gravity. Most of these failed almost immediately, encouraging cosmologists down the dark matter path just as Ptolemy wisely chose a geocentric cosmology. However, one of these theories is not only consistent with the data, it actually predicts many important new results. This theory is known as MOND (MOdified Newtonian Dynamics). It was introduced in 1983 by Moti Milgrom of the Weizmann Institute in Israel.

MOND accurately describes the effective force law in galaxies based only on the observed stars and gas. What this means is unclear, but it clearly means something! It is conceivable that dark and luminous matter somehow interact to mimic the behavior stipulated by MOND. This is not expected, and requires a lot of epicyclic thinking to arrange. The more straightforward interpretation is that MOND is correct, and we took the wrong branch of the decision tree back in the ’70s.

MOND has dire implications for much modern cosmological thought which has developed symbiotically with dark matter. As yet, no one has succeeded in writing down a theory which encompasses both MOND and General Relativity. This leaves open many questions in cosmology that were thought to be solved, such as the expansion history of the universe. There is nothing a scientist hates to do more than unlearn what was thought to be well established. It is this sociological phenomenon that makes it so difficult to climb back up the decision tree to the faulty branching.

Once one returns and takes the correct branch, the way forward is not necessarily obvious. The host of questions which had been assigned seemingly reasonable explanations along the faulty branch must be addressed anew. And there will always be those incapable of surrendering the old world view irrespective of the evidence.

In my opinion, the new successes of MOND cannot occur by accident. They are a strong sign that we are barking up the wrong tree with dark matter. A grander theory encompassing both MOND and General Relativity must exist, even if no one has as yet been clever enough to figure it out (few have tried).

These all combine to make life as a cosmologist interesting. Sometimes it is exciting. Often it is frustrating. Most of the time, “interesting” takes on the meaning implied by the old Chinese curse:

MAY YOU LIVE IN INTERESTING TIMES

Like it or not, we do.


*I wrote this in 2000. I leave it to the reader to decide how much progress has been made since then.

Why’d it have to be MOND?

Why’d it have to be MOND?

I want to take another step back in perspective from the last post to say a few words about what the radial acceleration relation (RAR) means and what it doesn’t mean. Here it is again:

The Radial Acceleration Relation over many decades. The grey region is forbidden – there cannot be less acceleration than caused by the observed baryons. The entire region above the diagonal line (yellow) is accessible to dark matter models as the sum of baryons and however much dark matter the model prescribes. MOND is the blue line.

This information was not available when the dark matter paradigm was developed. We observed excess motion, like flat rotation curves, and inferred the existence of extra mass. That was perfectly reasonable given the information available at the time. It is not now: we need to reassess as we learn more.

There is a clear organization to the data at both high and low acceleration. No objective observer with a well-developed physical intuition would look at this and think “dark matter.” The observed behavior does not follow from one force law plus some arbitrary amount of invisible mass. That could do literally anything in the yellow region above, and beyond the bounds of the plot, both upwards and to the left. Indeed, there is no obvious reason why the data don’t fall all over the place. One of the lingering, niggling concerns is the 5:1 ratio of dark matter:baryons – why is it in the same ballpark, when it could be pretty much anything? Why should the data organize in terms of acceleration? There is no reason for dark matter to do this.
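The organization is captured by a single function of acceleration. Here is a sketch of the fitting function from McGaugh et al. (2016), which describes the data with the one fitted scale g† ≈ 1.2 × 10^-10 m/s/s:

import numpy as np

def g_obs(g_bar, g_dagger=1.2e-10):
    """Radial acceleration relation: g_obs = g_bar / (1 - exp(-sqrt(g_bar/g_dagger)))."""
    return g_bar / (1.0 - np.exp(-np.sqrt(g_bar / g_dagger)))

for g_bar in (1e-8, 1e-10, 1e-12):       # high to low acceleration, m/s/s
    print(f"g_bar = {g_bar:.0e}: g_obs = {g_obs(g_bar):.2e} m/s/s")

# At high accelerations g_obs -> g_bar: pure Newton, no discrepancy.
# At low accelerations g_obs -> sqrt(g_bar * g_dagger): the MOND limit.
# One parameter, one line -- not the two-dimensional freedom available
# to dark matter in the plot above.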

Plausible dark matter models have been predicted to do a variety of things – things other than what we observe. The problem for dark matter is that real objects only occupy a tiny line through the vast region available to them in the plot above. This is a fine-tuning problem: why do the data reside only where they do when they could be all over the place? I recognized this as a problem for dark matter before I became aware$ of MOND. That it turns out that the data follow the line uniquely predicted* by MOND is just chef’s kiss: there is a fine-tuning problem for dark matter because MOND is the effective force law.

The argument against dark matter is that the data could reside anywhere in the yellow region above, but don’t. The argument against MOND is that a small portion of the data fall a little off the blue line. Arguing that such objects, be they clusters of galaxies or particular individual galaxies, falsify MOND while ignoring the fine-tuning problem faced by dark matter is a case of refusing to see the forest for a few outlying trees.%

So to return to the question posed in the title of this post, I don’t know why it had to be MOND. That’s just what we observe. Pretending dark matter does the same thing is a false presumption.


$I’d heard of MOND only vaguely, and, like most other scientists in the field, had paid it no mind until it reared its ugly head in my own data.

*I talk about MOND here because I believe in giving credit where credit is due. MOND predicted this; no other theory did so. Dark matter theories did not predict this. My dark matter-based galaxy formation theory did not predict this. Other dark matter-based galaxy formation theories (including simulations) continue to fail to explain this. Other hypotheses of modified gravity also did not predict what is observed. Who+ ordered this?

Modified Dynamics. Very dangerous. You go first.

Many people in the field hate MOND, often with an irrational intensity that has the texture of religion. It’s not as if I woke up one morning and decided to like MOND – sometimes I wish I had never heard of it – but disliking a theory doesn’t make it wrong, and ignoring it doesn’t make it go away. MOND and only MOND predicted the observed RAR a priori. So far, MOND and only MOND provides a satisfactory explanation thereof. We might not like it, but there it is in the data. We’re not going to progress until we get over our fear of MOND and cope with it. Imagining that it will somehow fall out of simulations with just the right baryonic feedback prescription is a form of magical thinking, not science.

MOND. Why’d it have to be MOND?

+Milgrom. Milgrom ordered this.


%I expect many cosmologists would argue the same in reverse for the cosmic microwave background (CMB) and other cosmological constraints. I have some sympathy for this. The fit to the power spectrum of the CMB seems too good to be an accident, and it points to the same parameters as other constraints. Well, mostly – the Hubble tension might be a clue that things could unravel, as if they haven’t already. The situation is not symmetric – where MOND predicted what we observe a priori with a minimum of assumptions, LCDM is an amalgam of one free parameter after another after another: dark matter and dark energy are, after all, auxiliary hypotheses we invented to save FLRW cosmology. When they don’t suffice, we invent more. Feedback is a single word that represents a whole Pandora’s box of extra degrees of freedom, and we can invent crazier things as needed. The result is a Frankenstein’s monster of a cosmology that we all agree is the same entity, but when we examine it closely the pieces don’t fit, and one cosmologist’s LCDM is not really the same as that of the next. They just seem to agree because they use the same words to mean somewhat different things. Simply agreeing that there has to be non-baryonic dark matter has not helped us conjure up detections of the dark matter particles in the laboratory, or given us the clairvoyance to explain# what MOND predicted a priori. So rather than agree that dark matter must exist because cosmology works so well, I think the appearance of working well is a chimera of many moving parts. Rather, cosmology, as we currently understand it, works if and only if non-baryonic dark matter exists in the right amount. That requires a laboratory detection to confirm.

#I have a disturbing lack of faith that a satisfactory explanation can be found.

The Radial Acceleration Relation starting from high accelerations

The Radial Acceleration Relation starting from high accelerations

In the previous post, we discussed how lensing data extend the Radial Acceleration Relation (RAR) seen in galaxy kinematics to very low accelerations. Let’s zoom out now, and look at things at higher accelerations and from a historical perspective.

This all started with Kepler’s Laws of Planetary Motion, which are explained by Newton’s Universal Gravitation – the inverse square law g_bar = GM/r² is exactly what is needed to explain the observed centripetal acceleration, g_obs = V²/r. It also explains the surface gravity of the Earth. Indeed, it was the famous falling apple that is reputed to have given Newton the epiphany that it was the same force that made the apple fall to the ground that made the Moon circle the Earth that made the planets revolve around the sun.

The inverse square law holds over more than six decades of observed acceleration in the solar system, from the one gee we feel here on the surface of the Earth to the outskirts patrolled by Neptune.

Planetary motion in the radial acceleration plane. The dotted line is Newton’s inverse square law of universal gravity.*

The inverse square force law is what it takes to make the planetary data line up. A different force law would give a line with a different slope in this plot. No force law at all would give chaos, with planets all over the place in this plot, if, say, the solar system were run by a series of deferents and epicycles as envisioned for Ptolemaic cosmologies. In such a system, there is no reason to expect the organization seen above. It would require considerable contrivance to make it so.
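The line-up is easy to verify; a sketch comparing the observed g_obs = V²/r with the Newtonian g_bar = GM☉/r² for each planet (orbital radii and speeds are rounded textbook values):

G, Msun, AU = 6.674e-11, 1.989e30, 1.496e11

# (mean orbital radius in AU, mean orbital speed in km/s)
planets = {
    "Mercury": (0.387, 47.9), "Venus":   (0.723, 35.0),
    "Earth":   (1.000, 29.8), "Mars":    (1.524, 24.1),
    "Jupiter": (5.203, 13.1), "Saturn":  (9.54,   9.7),
    "Uranus":  (19.2,   6.8), "Neptune": (30.1,   5.4),
}
for name, (r_au, v_kms) in planets.items():
    r = r_au * AU
    g_obs = (v_kms * 1e3)**2 / r      # observed centripetal acceleration
    g_bar = G * Msun / r**2           # Newtonian inverse-square expectation
    print(f"{name:8s} g_obs = {g_obs:.2e}  g_bar = {g_bar:.2e}  ratio = {g_obs/g_bar:.2f}")

# The ratio is 1 to within the rounding of the inputs for every planet.
# Together with the one gee at Earth's surface, that's the inverse-square
# law holding over more than six decades of acceleration.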

Newtonian gravity and General Relativity are exquisitely well-tested in the solar system. There are also some very precise tests at higher accelerations that GR passes with flying colors. The story to lower accelerations is another matter. The most remote solar system probes we’ve launched are the Voyager and Pioneer missions. These probe down to ~10^-6 m/s/s; below that is uncharted territory.

The RAR extended from high solar system accelerations to the much lower accelerations typical of galaxies – note the change in scale. Some early rotation curves (of NGC 55, NGC 801, NGC 2403, NGC 2841, & UGC 2885) are shown as lines. These probed an entirely new regime of acceleration. The departure of these lines from the dotted line is the signature of flat rotation curves, indicating the acceleration discrepancy/need for dark matter. This discrepancy was clear by the end of the 1970s, but the amplitude of the discrepancy then was modest.

Galaxies (and extragalactic data in general) probe an acceleration range that is unprecedented from the perspective of solar system tests. General Relativity has passed so many precise tests that the usual presumption is that it applies at all scales. But it is an assumption that it applies to scales where it hasn’t been tested. Galaxies and cosmology pose such a test. That we need to invoke dark matter to save the phenomenon would be interpreted as a failure if we had set out to test the theory rather than assume it applied.

It was clear from flat rotation curves that something extra was needed. However, when we invented the dark matter paradigm, it was not clear that the data were organized in terms of acceleration. As the data continued to improve, it became clear that the vast majority of galaxies adhered to a single, apparently universal+ radial acceleration relation. What had been a hint of systematic behavior in early data became clean and clear. The data did not exhibit the scatter that was expected from a sum of a baryonic disk and a non-baryonic dark matter halo – there is no reason that these two distinct components should sum to the single effective force law that is observed.

The RAR with modern data for both early (red triangles) and late (cyan circles) morphological types. The blue line is the prediction of MOND: there is a transition at an acceleration scale to a force law that is universal but no longer inverse-square.

The observed force-law happened to already have a name: MOND. If it had been something else, then we could have claimed to discover something new. But instead we were obliged to admit that the unexpected thing we had found had in fact been predicted by Milgrom.

This predictive power now extends to much lower accelerations. Again, only MOND got this prediction right in advance.

The RAR as above, extended by weak gravitational lensing observations. These follow the prediction of MOND as far as they are credible.

The data could have done many different things here. It could have continued along the dotted line, in which case we’d have need for no dark matter or modified gravity. It could have scattered all over the place – this is the natural expectation of dark matter theories, as there is no reason to expect the gravitational potential of the dominant dark matter halo to be dictated by the distribution of baryons. One expects that not to happen. Yet the data evince the exceptional degree of organization seen above.

It requires considerable contrivance to explain the RAR with dark matter. No viable explanation yet exists, despite many unconvincing claims to this effect. I have worked more on trying to explain this in terms of dark matter than I have on MOND, and all I can tell you is what doesn’t work. Every explanation I’ve seen so far is a special case of a model I had previously considered and rejected as obviously unworkable. At this point, I don’t see how dark matter can ever plausibly do what the data require.

I worry that dark matter has become an epicycle theory. We’re sure it is right, so whatever we observe, no matter how awkward or unexpected, must be what it does. But what if it is wrong, and it does not exist? How do we ever disabuse ourselves of the notion that there is invisible mass once we’ve convinced ourselves that there has to be?

Of course, MOND has its own problems. Clusters of galaxies are systems$ for which it persistently fails to explain the amplitude of the observed acceleration discrepancy. So let’s add those to the plot as well:

As above, with clusters of galaxies added (x: Sanders 2003; +: Li et al. 2023).

So: do clusters violate the RAR, or follow it? I’d say yes and yes – the offset, though modest in amplitude in this depiction, is statistically significant. But there is also a similar scaling with acceleration; only the amplitude is off. The former makes no sense in MOND; the latter makes no sense in terms of dark matter, which did not predict a RAR at all.

Clusters are the strongest evidence against MOND. Just being evidence against MOND doesn’t automatically make it evidence in favor of dark matter. I often pose myself the question: which theory requires me to disbelieve the least amount of data? When I first came to the problem, I was shocked to find that the answer was clearly MOND. Since then, it has gone back and forth, but rather than a clear answer emerging, what has happened is more a divergence of different lines of evidence: that which favors the standard cosmology is incommensurate with that which favors MOND. This leads to considerable cognitive dissonance.

One way to cope with cognitive dissonance is to engage with a problem from different perspectives. If I put on a MOND hat, I worry about the offset seen above for clusters. If I put on a dark matter hat, I worry about the same kind of offset for every system that is not a rich cluster of galaxies. Most critics of MOND seem unconcerned about this problem for dark matter, so how much should a critic of dark matter worry about it in MOND?


*For the hyper-pedantic: the eccentricity of each orbit causes the exact location of each planet in the first plot to oscillate up and down along the dotted line. The extent of this oscillation is smaller than the size of each symbol with the exception of Mercury, which has a relatively high eccentricity (but nowhere near enough to reach Venus).

+There are a few exceptions, of course – there are always exceptions in astronomy. The issue is whether these are physically meaningful, or the result of systematic uncertainties or non-equilibrium processes. The claimed discrepancies range from dubious to unconvincing to obviously wrong.

$I’ve heard some people criticize MOND because the centroid of the lensing signal does not peak around the gas in the Bullet cluster. This assumes that the gas represents the majority of the baryons. We know this is not the case, and that there is some missing mass in clusters. Whatever it is, it is clearly more centrally concentrated than the gas, so we don’t expect the lensing signal to peak where the gas is. All the Bullet cluster teaches us is that whatever this stuff is, it is collisionless. So this particular complaint is a logical fallacy of the red herring and/or straw man variety, born of not understanding MOND well enough to criticize it accurately. Why bother to do that when you come to the problem already sure that MOND is wrong? I understand this line of thought extraordinarily well, because that’s the attitude I started with, and I’ve seen it repeated by many colleagues. The difference is that I bothered to educate myself.

A personal note – I will be on vacation next week, so won’t be quick to respond to comments.

The Radial Acceleration Relation to very low accelerations


Flat rotation curves and the Baryonic Tully-Fisher relation (BTFR) both follow from the Radial Acceleration Relation (RAR). In Mistele et al. (2024b) we emphasize the exciting aspects of the former; these follow from the RAR established in Mistele et al. (2024a). It is worth understanding the connection.

First, the basic result:


Figure 2 from Mistele et al. (2024a). The RAR from weak lensing data (yellow diamonds) is shown together with the binned kinematic RAR from Lelli et al. (2017, gray circles). The solid line is Newtonian gravity without dark matter (gobs = gbar). The shaded region at gbar < 10−13 m/s2 indicates where the isolation criterion may be less reliable according to the estimate by Brouwer et al. (2021). Our results suggest that late type galaxies (LTGs) may be sufficiently isolated down to gbar ≈ 10−14 m/s2. We shade this region, where LTGs may still be reliable, in a lighter color.

The RAR from weak lensing extends the kinematic RAR to much lower accelerations. How low we can trust it is a question we’ll come back to, but certainly to gbar ≈ 10−13 m/s2 and probably to gbar ≈ 10−14 m/s2. For the mass of the typical galaxy in the KiDS sample, this corresponds to a radius of 300 kpc and 1.1 Mpc, respectively. Hence our claim that the effective gravitational potentials of isolated galaxies are consistent with rotation curves that remain flat indefinitely far out: a million light years at least, and perhaps a million parsecs.
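Those radii follow from inverting the point-mass relation gbar = GMbar/R2. A quick check, with Mbar = 10^11 M☉ as my illustrative round number for a typical lens (not a number from the paper):

```python
# Invert the point-mass relation g_bar = G * Mbar / R^2 to get the radius
# corresponding to a given baryonic acceleration. Mbar = 1e11 M_sun is an
# illustrative round number for a typical KiDS lens, not a fitted value.
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
M_SUN = 1.989e30     # kg
KPC = 3.086e19       # m

Mbar = 1e11 * M_SUN

for g_bar in (1e-13, 1e-14):
    R = math.sqrt(G * Mbar / g_bar)
    print(f"g_bar = {g_bar:.0e} m/s^2  ->  R = {R / KPC:.0f} kpc")
# ~370 kpc and ~1180 kpc: the same ballpark as the 300 kpc and 1.1 Mpc
# quoted above for the typical lens.
```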

Note that the kinematic and lensing data overlap at log(gbar) = -11.5. These independent methods give the same result. Moreover, this region corresponds to the parts of galaxies where atomic gas rather than stars dominates the baryonic mass budget, which minimizes the systematic uncertainty due to stellar population mass estimates. The lensing results still depend on these, but they agree with the gas-dominated portion of the RAR, and merge smoothly into the star-dominated portion of the kinematic data when the same stellar population models are used for both. To wit: the agreement is really good.

A flat rotation curve projects into the log(gobs)-log(gbar) plane as a line with slope 1/2. The data adhere closely to this slope, so I knew as soon as I saw the lensing RAR that the implied rotation curves remained flat indefinitely. How far, in radius, depends on galaxy mass, since for a point mass (a good approximation at radii beyond 100 kpc), gbar = GMbar/R2. We can split the lensing data into different mass bins, for which the RAR looks like
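To spell out why the slope is 1/2, combine the two relations just quoted (a short derivation, nothing more):

```latex
% Flat rotation curve: V = V_f = const, so g_obs = V_f^2 / R.
% Point mass: g_bar = G M_bar / R^2, hence R = (G M_bar / g_bar)^{1/2}.
g_{\mathrm{obs}} = \frac{V_f^2}{R}
  = V_f^2 \left(\frac{g_{\mathrm{bar}}}{G M_{\mathrm{bar}}}\right)^{1/2}
\quad\Longrightarrow\quad
\log g_{\mathrm{obs}} = \tfrac{1}{2}\,\log g_{\mathrm{bar}}
  + \log \frac{V_f^2}{\sqrt{G M_{\mathrm{bar}}}}
```

The intercept depends on Vf and Mbar, which is why splitting the sample into mass bins is the natural next step.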


Figure 5 from Mistele et al. (2024a). The RAR implied by weak lensing for four baryonic mass bins. The dashed line has the slope of a flat rotation curve projected into the acceleration plane. That different masses follow the same RAR implies the Baryonic Tully-Fisher relation.

Most dark matter models that I’ve seen or constructed myself predict a mass-dependent shift in the RAR, if they predict a RAR at all (many do not). We see no such shift. But the math is such that the flat rotation speed implied by the slope-1/2 RAR varies with mass in such a way that galaxies of different mass only fall on the same RAR, as observed, if there is a Baryonic Tully-Fisher relation with slope 4. So I knew from examination of the above figure that the BTFR was sure to follow, but that’s because I’ve been working on these things for a long time. It isn’t necessarily obvious to everyone else, so it was worth showing explicitly.
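For anyone who wants the algebra spelled out, here it is, using the low-acceleration limit gobs = sqrt(gbar × g†) of the relation:

```latex
% Low-acceleration limit of the RAR: g_obs = sqrt(g_bar g_dagger),
% with g_dagger a universal acceleration scale. Using g_obs = V_f^2 / R
% and g_bar = G M_bar / R^2 for a point mass:
\frac{V_f^2}{R} = \frac{\sqrt{G M_{\mathrm{bar}}\, g_{\dagger}}}{R}
\quad\Longrightarrow\quad
V_f^4 = g_{\dagger}\, G\, M_{\mathrm{bar}}
```

The radius drops out, so Vf is the same at every (large) R – a flat rotation curve – and Mbar ~ Vf4 with a common normalization: the BTFR with slope 4.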

Our result differs from the original of Brouwer et al. in two subtle but important ways. The first is that we use stellar population models that are the same as we use for the kinematic data. This self-consistency is important to the continuity of the data. We (especially Jim Schombert) took a deep dive into this, and the models used by Brouwer et al. are consistent with ours for late type (spiral) galaxies (LTGs). However, ours are somewhat heavier^ for early type galaxies (ETGs). That’s part of the reason that they find an offset in the RAR between morphological types and we do not.

Another important difference is the strictness of the isolation criterion. We are trying to ascertain the average gravitational potential of isolated galaxies, those with no big neighbors to compound the lensing signal. Brouwer et al. required that there be no galaxies more than a tenth of the luminosity of the primary within 3 Mpc. That seems reasonable, but we explored lots of variations on both aspects of that limit. It seems to be fine for LTGs, but insufficient for ETGs. That in itself is not surprising, as ETGs are known to be more strongly clustered than LTGs, so it is harder to find isolated examples.

To illustrate this, we show the deviation of the data from the kinematic RAR fit as a function of the isolation criterion:


Figure 4 from Mistele et al. (2024a). Top: the difference between the radial accelerations inferred from weak lensing and the RAR fitting function, measured in sigmas, as a function of how isolated the lenses are, quantified by Risol. We separately show the result for ETGs (red) and LTGs (blue) as well as for small (triangles with dashed lines) and large accelerations (diamonds with solid lines). LTGs are mostly unaffected by making the isolation criterion stricter. In contrast, ETGs do depend on Risol, but tend towards the RAR with increasing Risol. Middle and bottom: the accelerations behind these sigma values for Risol = 3 Mpc/h70 and Risol = 4 Mpc/h70.

The top panel shows that LTGs do not deviate from the RAR as we vary the radius of isolation. In contrast, ETGs deviate a lot for small Risol. This is what Brouwer et al. found, and it would be a problem for MOND if LTGs and ETGs genuinely formed different sequences: it would be as if they were both obeying their own version of a similar but distinct MOND-like force law rather than a single universal force law.

That said, the ETGs converge towards the same RAR as the LTGs as we make the isolation criterion more strict. The distinction between ETGs and LTGs that appears to be clear for the Risol = 3 Mpc/h70 used by Brouwer et al. (middle panel) goes away when Risol = 4 Mpc/h70 (bottom panel). The random errors grow because fewer galaxies+ meet the stricter criterion, but this seems a price well worth paying to be rid of the systematic variation seen in the top panel. This also dictates how far out we can trust the data, which show no clear deviation from the RAR until below the limit gbar = 10−14 m/s2.
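In code, the cut being varied here is simple to state. A sketch (the array names and catalog layout are hypothetical, just to make the logic concrete):

```python
# Sketch of the isolation cut described above: keep a lens only if no
# neighbor brighter than 10% of it lies within R_isol. The array names
# and catalog layout here are hypothetical, purely to illustrate the logic.
import numpy as np

def is_isolated(lens_lum, lens_pos, cat_lum, cat_pos, r_isol_mpc=3.0):
    """True if no galaxy with > 10% of the lens luminosity lies within r_isol."""
    d = np.linalg.norm(cat_pos - lens_pos, axis=1)   # separations in Mpc
    nearby = (d > 0.0) & (d < r_isol_mpc)            # exclude the lens itself
    return not np.any(cat_lum[nearby] > 0.1 * lens_lum)
```

Raising r_isol_mpc from 3 to 4 is exactly the knob turned in the figure above: fewer lenses survive, but the ETG systematic goes away.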

Regardless of the underlying theory, the data paint a consistent picture. This can be summarized by three empirical laws of galactic rotation:

  • Rotation curves become approximately* flat at large radii and remain so indefinitely.
  • The amplitude of the flat rotation speed scales with the baryonic mass as Mbar ~ Vf4 (the BTFR).
  • The observed centripetal acceleration follows from that predicted by the baryons (the RAR).

These are the galactic analogs of Kepler’s Laws for planetary motion. There is no theory in these statements; they’re just a description of what the data do. That’s useful, as they provide an empirical touchstone that has to be satisfactorily explained by any theory for it to be considered viable. No dark matter-based theory currently does that.
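These laws also hang together quantitatively. A toy sketch using the fitting function and acceleration scale from above (the 5 × 10^10 M☉ galaxy is illustrative, not a fit to any real object):

```python
# Toy demonstration that the three laws hang together: feed a baryonic
# mass through the RAR fit (law 3) and the predicted rotation curve goes
# flat at large R (law 1) at the speed the BTFR requires (law 2).
import numpy as np

G = 6.674e-11; M_SUN = 1.989e30; KPC = 3.086e19
G_DAGGER = 1.2e-10   # m/s^2

Mbar = 5e10 * M_SUN                              # illustrative galaxy
R = np.logspace(0.5, 2.5, 5) * KPC               # ~3 to ~300 kpc
g_bar = G * Mbar / R**2                          # point-mass approximation
g_obs = g_bar / (1 - np.exp(-np.sqrt(g_bar / G_DAGGER)))
V = np.sqrt(g_obs * R) / 1e3                     # rotation speed in km/s

print(np.round(V))                               # flattens toward Vf ~ 169 km/s
print((G_DAGGER * G * Mbar) ** 0.25 / 1e3)       # BTFR: Vf = (g_dagger G Mbar)^(1/4)
```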


^The difference is well within the expected variance for stellar population models. We can reproduce their numbers if we treat ETGs as if they were just red LTGs. I don’t know if that’s what they did, but it ain’t right.

+For the record, the isolated fraction of the entire sample is 16%: most galaxies have neighbors. As a function of mass, the isolation criterion leaves 8%, 18%, 30%, and 42% of LTG lenses and 9%, 14%, and 22% of ETG lenses in the respective mass bins. The fraction of isolated LTGs is generally higher than that of ETGs, as expected. There is also a trend for the isolation fraction to increase as mass decreases. In part this is real: more luminous galaxies are more clustered. It may also be that neighbors exceeding 10% of the primary mass (really luminosity) more easily evade detection as the primaries get fainter, since 10% of a fainter primary is that much harder to detect.

*Some people take “flat” way too seriously in this context. While it is often true that rotation curves look pretty darn flat over an extended radial range, I say approximately flat because we never measure, and can never measure, exactly a slope of dV/dR = 0.000. As a practical matter, we have adopted a variation of < 5% from point to point as a working definition. The scatter in Tully-Fisher naturally goes up if one adopts a weaker criterion; what one gets for the scatter is all about data quality.
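For concreteness, that working definition amounts to something like the following sketch:

```python
# The < 5% point-to-point working definition of "flat", as a function.
import numpy as np

def is_flat(v_outer, tol=0.05):
    """True if successive outer rotation-curve points vary by < tol."""
    v = np.asarray(v_outer, dtype=float)
    return bool(np.all(np.abs(np.diff(v)) / v[:-1] < tol))

print(is_flat([150., 152., 149., 151.]))  # True: flat to within 5%
print(is_flat([150., 165., 180.]))        # False: still rising
```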