The evolution of the luminosity density

The results from the high redshift universe keep pouring in from JWST. It is a full-time job, and then some, just to keep track. One intriguing aspect is the luminosity density of the universe at z > 10. I had not thought this to be problematic for LCDM, as it only depends on the overall number density of stars, not whether they’re in big or small galaxies. I checked this a couple of years ago, and it was fine. At that point we were limited to z < 10, so what about higher redshift?

It helps to have in mind the contrasting predictions of distinct hypotheses, so a quick reminder. LCDM predicts a gradual build up of the dark matter halo mass function that should presumably be tracked by the galaxies within these halos. MOND predicts that galaxies of a wide range of masses form abruptly, including the biggest ones. The big distinction I’ve focused on is the formation epoch of the most massive galaxies. These take a long time to build up in LCDM: typically half a Hubble time (~7 billion years; z < 1) for a giant elliptical to assemble half its final stellar mass. Baryonic mass assembly is considerably more rapid in MOND, so this benchmark can be attained much earlier, even within the first billion years after the Big Bang (z > 5).

In both theories, astrophysics plays a role. How does gas condense into galaxies, and then form into stars? Gravity just tells us when we can assemble the mass, not how it becomes luminous. So the critical question is whether the high redshift galaxies JWST sees are indeed massive. They’re much brighter than had been predicted by LCDM, and in line with the simplest evolutionary models one can build in MOND, so the latter is the more natural interpretation. However, it is much harder to predict how many galaxies form in MOND; it is straightforward to show that they should form fast but much harder to figure out how many do so – i.e., how many baryons get incorporated into collapsed objects, and how many get left behind, stranded in the intergalactic medium. Consequently, the luminosity density – the total number of stars, regardless of what size galaxies they’re in – did not seem like a straight-up test the way the masses of individual galaxies are.

It is not difficult to produce lots of stars at high redshift in LCDM. But those stars should be in many protogalactic fragments, not individually massive galaxies. As a reminder, here is the merger tree for a galaxy that becomes a bright elliptical at low redshift:

Merger tree from De Lucia & Blaizot 2007 showing the hierarchical build-up of massive galaxies from many protogalactic fragments.

At large lookback times, i.e., high redshift, galaxies are small protogalactic fragments that have not yet assembled into a large island universe. This happens much faster in MOND, so we expect that for many (not necessarily all) galaxies, this process is basically complete after a mere billion years or so, often less. In both theories, your mileage will vary: each galaxy will have its own unique formation history. Nevertheless, that’s the basic difference: big galaxies form quickly in MOND while they should still be little chunks at high z in LCDM.

The hierarchical formation of structure is a fundamental prediction of LCDM, so this is in principle a place it can break. That is why many people are following the usual script of blaming astrophysics, i.e., how stars form, not how mass assembles. The latter is fundamental while the former is fungible.

Gradual mass assembly is so fundamental that its failure would break LCDM. Indeed, it is so deeply embedded in the mental framework of people working on it that it doesn’t seem to occur to most of them to consider the possibility that it could work any other way. It simply has to work that way; we were taught so in grad school!

Here is a sketch of how structures grow over time under the influence of cold dark matter (left, from Schramm 1992) and MOND (right, from Sanders & McGaugh 2002; see also this further discussion). The slow linear growth of CDM (long-dashed line, left panel) is replaced by a rapid, nonlinear growth in MOND (solid lines at right; numbers correspond to different scales). Nonlinear growth moderates after cosmic expansion begins to accelerate (dashed vertical line in right panel).

A principal result of perturbation theory applied to density fluctuations in an expanding universe governed by General Relativity is that the growth of these proto-objects is proportional to the expansion of the universe – hence the linear long-dashed line in the left diagram. The baryons cannot match the observations by themselves because the universe has “only” expanded by a factor of a thousand since recombination while structure has grown by a factor of a hundred thousand. This was one of the primary motivations for inventing cold dark matter in the first place: it can grow at the theory-specified rate without obliterating the observed isotropy% of the microwave background. The skeletal structure of the cosmic web grows in cold dark matter first; the baryons fall in afterwards (short-dashed line in left panel).
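The arithmetic behind that shortfall fits in a few lines. Here is a back-of-the-envelope sketch in Python, using the round numbers quoted above:

```python
# Back-of-the-envelope: in a matter-dominated universe, linear perturbations
# grow in proportion to the scale factor, so baryons alone have only grown by
# ~(1 + z_recombination) since the CMB was emitted. But the ~1e-5 fluctuations
# seen in the microwave background had to reach order unity to form structure.
z_recombination = 1100
growth_available = 1 + z_recombination   # linear growth factor for baryons alone
growth_needed = 1 / 1e-5                 # amplitude 1e-5 -> ~1, i.e., 1e5

shortfall = growth_needed / growth_available
print(f"baryons alone fall short by a factor of ~{shortfall:.0f}")
```

That missing factor of ~100 is the gap that cold dark matter was invented to fill.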

That’s how it works. Without dark matter, structure cannot form, so we needn’t consider MOND nor speak of it ever again forever and ever, amen.

Except, of course, that isn’t necessarily how structure formation works in MOND. Like every other inference of dark matter, the slow growth of perturbations assumes that gravity is normal. If we consider a different force law, then we have to revisit this basic result. Exactly how structure formation works in MOND is not a settled subject, but the panel at right illustrates how I think it might work. One seemingly unavoidable aspect is that MOND is nonlinear, so the growth rate becomes nonlinear at some point, which is rather early on if Milgrom’s constant a0 does not evolve. Rather than needing dark matter to achieve a growth factor of 10⁵, the boost to the force law enables baryons to do it on their own. That, in a nutshell, is why MOND predicts the early formation of big galaxies.

The same nonlinearity that makes structure grow fast in MOND also makes it very hard to predict the mass function. My nominal expectation is that the present-day galaxy baryonic mass function is established early and galaxies mostly evolve as closed boxes after that. Not exclusively; mergers still occasionally happen, as might continued gas accretion. In addition to the big galaxies that form their stars rapidly and eventually become giant elliptical galaxies, there will also be a population for which gas accretion is gradual^ enough to settle into a preferred plane and evolve into a spiral galaxy. But that is all gas physics and hand waving; for the mass function I simply don’t know how to extract a prediction from a nonlinear version of the Press-Schechter formalism. Somebody smarter than me should try that.

We do know how to do it for LCDM, at least for the dark matter halos, so there is a testable prediction there. The observable test depends on the messy astrophysics of forming stars and the shape of the mass function. The total luminosity density integrates over the shape, so is a rather forgiving test, as it doesn’t distinguish between stars in lots of tiny galaxies or the same number in a few big ones. Consequently, I hadn’t put much stock in it. But it is also a more robustly measured quantity, so perhaps it is more interesting than I gave it credit for, at least once we get to such high redshift that there should be hardly any stars.
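To put numbers on what “integrates over the shape” means, here is a toy Schechter-function calculation. The parameter values (φ*, L*, α) are placeholders I made up for illustration, not fits to any survey:

```python
# Toy illustration: the luminosity density is the luminosity-weighted integral
# of the luminosity function. For a Schechter function,
#     phi(L) dL = phi_star * (L/L_star)^alpha * exp(-L/L_star) d(L/L_star),
# the luminosity density above a limit L_min is an incomplete gamma function:
#     rho = phi_star * L_star * Gamma(alpha + 2, L_min/L_star).
import numpy as np
from scipy.special import gamma, gammaincc
from scipy.integrate import quad

phi_star = 1e-3   # Mpc^-3   (assumed normalization)
L_star = 1e29     # erg/s/Hz (assumed characteristic luminosity)
alpha = -1.7      # assumed faint-end slope (must be > -2 to converge)
x_min = 0.01      # integrate down to L = 0.01 L_star

a = alpha + 2
rho_analytic = phi_star * L_star * gammaincc(a, x_min) * gamma(a)

# Cross-check with a direct numerical integral of x^(a-1) * exp(-x):
rho_numeric = phi_star * L_star * quad(lambda x: x**(a - 1) * np.exp(-x),
                                       x_min, np.inf)[0]
print(rho_analytic, rho_numeric)
```

Note that the total depends on φ* and L* only through their product: many different combinations of “lots of faint galaxies” and “a few bright ones” give the same luminosity density, which is exactly why it is a forgiving test.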

Here is a plot of the ultraviolet (UV) luminosity density from Adams et al. (2023):

Fig. 8 from Adams et al. (2023) showing the integrated UV luminosity density as a function of redshift. UV light is produced by short-lived, massive stars, so makes a good proxy for the star formation rate (right axis).

The lower line is one+ a priori prediction of LCDM. I checked this back when JWST was launched, and saw no issues up to z=10, which remains true. However, the data now available at higher redshift are systematically higher than the prediction. The reason for this is simple, and the same as we’ve discussed before: dark matter halos are just beginning to get big; they don’t have enough baryons in them to make that many stars – at least not for the usual assumptions, or even just from extrapolating what we know quasi-empirically. (I say “quasi” because the extrapolation requires a theory-dependent rate of mass growth.)

The dashed line is what I consider to be a reasonable adjustment of the a priori prediction. Putting on an LCDM hat, it is actually closer to what I would have predicted myself because it has a constant star formation efficiency which is one of the knobs I prefer to fix empirically and then not touch. With that, everything is good up to z=10.5, maybe even to z=12 if we only believe* the data with uncertainties. But the bulk of the high redshift data sit well above the plausible expectation of LCDM, so grasping at the dangling ends of the biggest error bars seems unlikely to save us from a fall.

Ignoring the model lines, the data flatten out at z > 10, which is another way of saying that the UV luminosity function isn’t evolving when it should be. This redshift range does not correspond to much cosmic time, only a few hundred million years, so it makes the empiricist in me uncomfortable to invoke astrophysical causes. We have to imagine that the physical conditions change rapidly in the first sliver of cosmic time at just the right fine-tuned rate to make it look like there is no evolution at all, then settle down into a star formation efficiency that remains constant in perpetuity thereafter.

Harikane et al. (2023) also come to the conclusion that there is too much star formation going on at high redshift (their Fig. 18 is like that of Adams above, but extending all the way to z=0). Like many, they appear to be unaware that the early onset of structure formation had been predicted, so discuss three conventional astrophysical solutions as if these were the only possibilities. Translating from their section 6, the astrophysical options are:

  • Star formation was more efficient early on
  • Active Galactic Nuclei (AGN)
  • A top heavy IMF

This is a pretty broad view of the things that are being considered currently, though I’m sure people will add to this list as time goes forward and entropy increases.

Taking these in reverse order, the idea of a top heavy IMF is that preferentially more massive stars form early on. These produce more light per unit mass, so one gets brighter galaxies than predicted with a normal IMF. This is an idea that recurs every so often; see, e.g., section 3.1.1 of McGaugh (2004) where I discuss it in the related context of trying to get LCDM models to reionize the universe early enough. Supermassive Population III stars were all the rage back then. Changing the mass spectrum& with which stars form is one of those uber-free parameters that good modelers refrain from twiddling because it gives too much freedom. It is not a single knob so much as a Pandora’s box full of knobs that invoke a thousand Salpeter’s demons to do nearly anything at the price of understanding nothing.
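For a sense of the bookkeeping involved, here is a toy calculation assuming a Salpeter mass spectrum (dN/dM ∝ M⁻²·³⁵ between 0.1 and 100 solar masses) and the rough main-sequence scaling L ∝ M³·⁵ – both standard textbook approximations, used here purely for illustration:

```python
# Toy IMF bookkeeping: where the mass lives vs. where the light comes from,
# assuming a Salpeter slope and L ~ M^3.5 (illustrative approximations only).
from scipy.integrate import quad

def imf(m):
    return m**-2.35          # Salpeter mass spectrum, unnormalized

mass_weighted = lambda m: m * imf(m)
light_weighted = lambda m: m**3.5 * imf(m)   # assumed L ~ M^3.5 scaling

m_lo, m_hi = 0.1, 100.0      # Msun
total_mass = quad(mass_weighted, m_lo, m_hi)[0]
total_light = quad(light_weighted, m_lo, m_hi)[0]

frac_mass_low = quad(mass_weighted, m_lo, 1.0)[0] / total_mass       # M < 1 Msun
frac_light_high = quad(light_weighted, 10.0, m_hi)[0] / total_light  # M > 10 Msun

print(f"{frac_mass_low:.0%} of the stellar mass is in stars below 1 Msun")
print(f"{frac_light_high:.0%} of the light comes from stars above 10 Msun")
```

With these assumptions, roughly 60% of the mass sits in stars fainter than the sun while ~99% of the light comes from stars above ten solar masses – which is why tilting the IMF toward high masses buys so much luminosity per unit mass, and why it is such a dangerously powerful knob.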

As it happens, the option of a grossly variable IMF is already disfavored by the existence of quenched galaxies at z~3 that formed a normal stellar population at much higher redshift (z~11). These galaxies are composed of stars that have the spectral signatures appropriate for a population that formed with a normal IMF and evolved as stars do. This is exactly what we expect for galaxies that form early and evolve passively. Adjusting the IMF to explain the obvious makes a mockery of Occam’s razor.

AGN is a catchall term for objects like quasars that are powered by supermassive black holes at the centers of galaxies. This is a light source that is non-stellar, so we’ll overestimate the stellar mass if we mistake some light from AGN# as being from stars. In addition, we know that AGN were more prolific in the early universe. That in itself is also a problem: just as forming galaxies early is hard, so too is it hard to form enough supermassive black holes that early. So this just becomes the same problem in a different guise. Besides, the resolution of JWST is good enough to see where the light is coming from, and it ain’t all from unresolved AGN. Harikane et al. estimate that the AGN contribution is only ~10%.

That leaves the star formation efficiency, which is certainly another knob to twiddle. On the one hand, this is a reasonable thing to do, since we don’t really know what the star formation efficiency in the early universe was. On the other, we expected the opposite: star formation should, if anything, be less efficient at high redshift when the metallicity was low so there were few ways for gas to cool, which is widely considered to be a prerequisite for initiating star formation. Indeed, inefficient cooling was an argument in favor of a top-heavy IMF (perhaps stars need to be more massive to overcome higher temperatures in the gas from which they form), so these two possibilities contradict one another: we can have one but not both.

To me, the star formation efficiency is the most obvious knob to twiddle, but it has to be rather fine-tuned. There isn’t much cosmic time over which the variation must occur, and yet it has to change rapidly and in such a way as to precisely balance the non-evolving UV luminosity function against a rapidly evolving dark matter halo mass function. Once again, we’re in the position of having to invoke astrophysics that we don’t understand to make up for a manifest deficit in the behavior of dark matter. Funny how those messy baryons always cover up for that clean, pure, simple dark matter.

I could go on about these possibilities at great length (and did in the 2004 paper cited above). I decline to do so any further: we keep digging this hole just to fill it again. These ideas only seem reasonable as knobs to turn if one doesn’t see any other way out, which is what happens if one has absolute faith in structure formation theory and is blissfully unaware of the predictions of MOND. So I can already see the community tromping down the familiar path of persuading ourselves that the unreasonable is reasonable, that what was not predicted is what we should have expected all along, that everything is fine with cosmology when it is anything but. We’ve done it so many times before.


Initially I had the cat stuffed back in the bag image here, but that was really for a theoretical paper that I didn’t quite make it to in this post. You’ll see it again soon. The observations discussed here are by observers doing their best in the context they know, so it doesn’t seem appropriate to use it here.


%We were convinced of the need for non-baryonic dark matter before any fluctuations in the microwave background were detected; their absence at the level of one part in a thousand sufficed.

^The assembly of baryonic mass can and in most cases should be rapid. It is the settling of gas into a rotationally supported structure that takes time – this is influenced by gas physics, not just gravity. Regardless of gravity theory, gas needs to settle gently into a rotating disk in order for spiral galaxies to exist.

+There are other predictions that differ in detail, but this is a reasonable representative of the basic expectation.

*This is not necessarily unreasonable, as there is some proclivity to underestimate the uncertainties. That’s a general statement about the field; I have made no attempt to assess how reasonable these particular error bars are.

&Top-heavy refers to there being more than the usual complement of bright but short-lived (tens of millions of years) stars. These stars are individually high mass (bigger than the sun), while long-lived stars are low mass. Though individually low in mass, these faint stars are very numerous. When one integrates over the population, one finds that most of the total stellar mass resides in the faint, low mass stars while much of the light is produced by the high mass stars. So a top heavy IMF explains high redshift galaxies by making them out of the brightest stars that require little mass to build. However, these stars will explode and go away on a short time scale, leaving little behind. If we don’t outright truncate the mass function (so many knobs here!), there could be some longer-lived stars leftover, but they must be few enough for the whole galaxy to fade to invisibility or we haven’t gained anything. So it is surprising, from this perspective, to see massive galaxies that appear to have evolved normally without any of these knobs getting twiddled.

#Excess AGN were one possibility Jay Franck considered in his thesis as the explanation for what we then considered to be hyperluminous galaxies, but the known luminosity function of AGN up to z = 4 couldn’t explain the entire excess. With the clarity of hindsight, we were just seeing the same sorts of bright, early galaxies that JWST has brought into sharper focus.

Clusters of galaxies ruin everything

A common refrain I hear is that MOND works well in galaxies, but not in clusters of galaxies. The oft-unspoken but absolutely intended implication is that we can therefore dismiss MOND and never speak of it again. That’s silly.

Even if MOND is wrong, that it works as well as it does is surely telling us something. I would like to know why that is. Perhaps it has something to do with the nature of dark matter, but we need to engage with it to make sense of it. We will never make progress if we ignore it.

Like the seventeenth century cleric Paul Gerhardt, I’m a stickler for intellectual honesty:

“When a man lies, he murders some part of the world.”

Paul Gerhardt

I would extend this to ignoring facts. One should not only be truthful, but also as complete as possible. It does not suffice to be truthful about things that support a particular position while eliding unpleasant or unpopular facts* that point in another direction. By ignoring the successes of MOND, we murder a part of the world.

Clusters of galaxies are problematic in different ways for different paradigms. Here I’ll recap three ways in which they point in different directions.

1. Cluster baryon fractions

An unpleasant fact for MOND is that it does not suffice to explain the mass discrepancy in clusters of galaxies. When we apply Milgrom’s formula to galaxies, it explains the discrepancy that is conventionally attributed to dark matter. When we apply MOND to clusters, it comes up short. This has been known for a long time; here is a figure from the review Sanders & McGaugh (2002):

Figure 10 from Sanders & McGaugh (2002): (Left) the Newtonian dynamical mass of clusters of galaxies within an observed cutoff radius (rout) vs. the total observable mass in 93 X-ray-emitting clusters of galaxies (White et al. 1997). The solid line corresponds to Mdyn = Mobs (no discrepancy). (Right) the MOND dynamical mass within rout vs. the total observable mass for the same X-ray-emitting clusters. From Sanders (1999).

The Newtonian dynamical mass exceeds what is seen in baryons (left). There is a missing mass problem in clusters. The inference is that the difference is made up by dark matter – presumably the same non-baryonic cold dark matter that we need in cosmology.

When we apply MOND, the data do not fall on the line of equality as they should (right panel). There is still excess mass. MOND suffers a missing baryon problem in clusters.

The common line of reasoning is that MOND still needs dark matter in clusters, so why consider it further? The whole point of MOND is to do away with the need of dark matter, so it is terrible if we need both! Why not just have dark matter?

This attitude was reinforced by the discovery of the Bullet Cluster. You can “see” the dark matter.

An artistic rendition of data for the Bullet Cluster. Pink represents hot X-ray emitting gas, blue the mass concentration inferred through gravitational lensing, and the optical image shows many galaxies. There are two clumps of galaxies that collided and passed through one another, getting ahead of the gas which shocked on impact and lags behind as a result. The gas of the smaller “bullet” subcluster shows a distinctive shock wave.

Of course, we can’t really see the dark matter. What we see is that the mass required by gravitational lensing observations exceeds what we see in normal matter: this is the same discrepancy that Zwicky first noticed in the 1930s. The important thing about the Bullet Cluster is that the mass is associated with the location of the galaxies, not with the gas.

The baryons that we know about in clusters are mostly in the gas, which outweighs the stars by roughly an order of magnitude. So we might expect, in a modified gravity theory like MOND, that the lensing signal would peak up on the gas, not the stars. That would be true, if the gas we see were indeed the majority of the baryons. We already knew from the first plot above that this is not the case.

I use the term missing baryons above intentionally. If one already believes in dark matter, then it is perfectly reasonable to infer that the unseen mass in clusters is the non-baryonic cold dark matter. But there is nothing about the data for clusters that requires this. There is also no reason to expect every baryon to be detected. So the unseen mass in clusters could just be ordinary matter that does not happen to be in a form we can readily detect.

I do not like the missing baryon hypothesis for clusters in MOND. I struggle to imagine how we could hide the required amount of baryonic mass, which is comparable to or exceeds the gas mass. But we know from the first figure that such a component is indicated. Indeed, the Bullet Cluster falls at the top end of the plots above, being one of the most massive objects known. From that perspective, it is perfectly ordinary: it shows the same discrepancy every other cluster shows. So the discovery of the Bullet was neither here nor there to me; it was just another example of the same problem. Indeed, it would have been weird if it hadn’t shown the same discrepancy that every other cluster showed. That it does so in a nifty visual is, well, nifty, but so what? I’m more concerned that the entire population of clusters shows a discrepancy than that this one nifty case does so.

The one new thing that the Bullet Cluster did teach us is that whatever the missing mass is, it is collisionless. The gas shocked when it collided, and lags behind the galaxies. Whatever the unseen mass is, it passed through unscathed, just like the galaxies. Anything with mass separated by lots of space will do that: stars, galaxies, cold dark matter particles, hard-to-see baryonic objects like brown dwarfs or black holes, or even massive [potentially sterile] neutrinos. All of those are logical possibilities, though none of them make a heck of a lot of sense.

As much as I dislike the possibility of unseen baryons, it is important to keep the history of the subject in mind. When Zwicky discovered the need for dark matter in clusters, the discrepancy was huge: a factor of a thousand. Some of that was due to having the distance scale wrong, but most of it was due to seeing only stars. It wasn’t until 40 some years later that we started to recognize that there was intracluster gas, and that it outweighed the stars. So for a long time, the mass ratio of dark to luminous mass was around 70:1 (using a modern distance scale), and we didn’t worry much about the absurd size of this number; mostly we just cited it as evidence that there had to be something massive and non-baryonic out there.

Really there were two missing mass problems in clusters: a baryonic missing mass problem, and a dynamical missing mass problem. Most of the baryons turned out to be in the form of intracluster gas, not stars. So the 70:1 ratio changed to 7:1. That’s a big change! It brings the ratio down from a silly number to something that is temptingly close to the universal baryon fraction of cosmology. Consequently, it becomes reasonable to believe that clusters are fair samples of the universe. All the baryons have been detected, and the remaining discrepancy is entirely due to non-baryonic cold dark matter.
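The bookkeeping behind those numbers is one line of arithmetic (round numbers from the text above):

```python
# Two missing mass problems: the dynamical discrepancy relative to stars alone
# vs. relative to all baryons once the intracluster gas is counted.
M_dyn_over_stars = 70   # dynamical mass ~70x the stellar mass (modern distances)
gas_over_stars = 9      # gas outweighs stars by roughly an order of magnitude

M_dyn_over_baryons = M_dyn_over_stars / (1 + gas_over_stars)
print(f"dark:luminous drops from {M_dyn_over_stars}:1 to {M_dyn_over_baryons:.0f}:1")
```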

That’s a relatively recent realization. For decades, we didn’t recognize that most of the normal matter in clusters was in an as-yet unseen form. There had been two distinct missing mass problems. Could it happen again? Have we really detected all the baryons, or are there still more lurking there to be discovered? I think it unlikely, but fifty years ago I would also have thought it unlikely that there would have been more mass in intracluster gas than in stars in galaxies. I was ten years old then, but it is clear from the literature that no one else was seriously worried about this at the time. Heck, when I first read Milgrom’s original paper on clusters, I thought he was engaging in wishful thinking to invoke the X-ray gas as possibly containing a lot of the mass. Turns out he was right; it just isn’t quite enough.

All that said, I nevertheless think the residual missing baryon problem MOND suffers in clusters is a serious one. I do not see a reasonable solution. Unfortunately, as I’ve discussed before, LCDM suffers an analogous missing baryon problem in galaxies, so pick your poison.

It is reasonable to imagine in LCDM that some of the missing baryons on galaxy scales are present in the form of warm/hot circum-galactic gas. We’ve been looking for that for a while, and have had some success – at least for bright galaxies where the discrepancy is modest. But the problem gets progressively worse for lower mass galaxies, so it is a bold presumption that the check-sum will work out. There is no indication (beyond faith) that it will, and the fact that it gets progressively worse for lower masses is a direct consequence of the data for galaxies looking like MOND rather than LCDM.

Consequently, both paradigms suffer a residual missing baryon problem. One is seen as fatal while the other is barely seen.

2. Cluster collision speeds

A novel thing the Bullet Cluster provides is a way to estimate the speed at which its subclusters collided. You can see the shock front in the X-ray gas in the picture above. The morphology of this feature is sensitive to the speed and other details of the collision. In order to reproduce it, the two subclusters had to collide head-on, in the plane of the sky (practically all the motion is transverse), and fast. I mean, really fast: nominally 4700 km/s. That is more than the virial speed of either cluster, and more than you would expect from dropping one object onto the other. How likely is this to happen?

There is now an enormous literature on this subject, which I won’t attempt to review. It was recognized early on that the high apparent collision speed was unlikely in LCDM. The chances of observing the Bullet Cluster even once in an LCDM universe range from merely unlikely (~10%) to completely absurd (< 3 × 10⁻⁹). Answers this varied follow from what aspects of both observation and theory are considered, and the annoying fact that the distribution of collision speed probabilities plummets like a stone, so that slightly different estimates of the “true” collision speed make a big difference to the inferred probability. The “true” gravitationally induced collision speed is itself somewhat uncertain because the hydrodynamics of the gas plays a role in shaping the shock morphology. There is a long debate about this which bores me; it boils down to it being easy to explain a few hundred extra km/s but hard to get up to the extra 1000 km/s that is needed.

At its simplest, we can imagine the two subclusters forming in the early universe, initially expanding apart along with the Hubble flow like everything else. At some point, their mutual attraction overcomes the expansion, and the two start to fall together. How fast can they get going in the time allotted?
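As a cartoon of this timing question, one can drop two Newtonian point masses toward each other from rest at turnaround and see what comes out. The masses and separations below are made-up illustrative values, not a fit to the Bullet Cluster, and the calculation ignores the cosmological context entirely:

```python
# Toy Newtonian infall: two point masses falling together from rest.
# All numbers are illustrative assumptions, not measurements.
import numpy as np

G = 4.301e-9                 # Newton's constant in Mpc (km/s)^2 / Msun
M_total = 1.5e15 + 1.5e14    # assumed main cluster + bullet subcluster (Msun)
r0 = 5.0                     # assumed separation at turnaround (Mpc)
r = 0.5                      # separation at "collision" (Mpc)

# Energy conservation for radial two-body infall from rest at r0:
v_rel = np.sqrt(2 * G * M_total * (1 / r - 1 / r0))       # km/s

# Time to fall from rest at r0 to r = 0 (radial Kepler problem):
t_ff = (np.pi / 2) * r0**1.5 / np.sqrt(2 * G * M_total)   # in Mpc s / km
t_gyr = t_ff * 3.086e19 / 3.156e16                        # Mpc -> km, s -> Gyr

print(f"collision speed ~{v_rel:.0f} km/s after ~{t_gyr:.1f} Gyr of infall")
```

With these generous assumptions the toy answer lands in the right ballpark, but that is the easy part; the hard part, and the crux of the debate, is whether realistic initial conditions – subclusters that start out expanding apart in the Hubble flow, with the masses and the time actually available – permit it.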

The Bullet Cluster is one of the most massive systems in the universe, so there is lots of dark mass to accelerate the subclusters towards each other. The object is less massive in MOND, even spotting it some unseen baryons, but the long-range force is stronger. Which effect wins?

Gary Angus wrote a code to address this simple question both conventionally and in MOND. Turns out, the longer range force wins this race. MOND is good at making things go fast. While the collision speed of the Bullet Cluster is problematic for LCDM, it is rather natural in MOND. Here is a comparison:

A reasonable answer falls out of MOND with no fuss and no muss. There is room for some hydrodynamical+ high jinks, but it isn’t needed, and the amount that is reasonable makes an already reasonable result more reasonable, boosting the collision speed from the edge of the observed band to pretty much smack in the middle. This is the sort of thing that keeps me puzzled: much as I’d like to go with the flow and just accept that it has to be dark matter that’s correct, it seems like every time there is a big surprise in LCDM, MOND just does it. Why? This must be telling us something.

3. Cluster formation times

Structure is predicted to form earlier in MOND than in LCDM. This is true for both galaxies and clusters of galaxies. In his thesis, Jay Franck found lots of candidate clusters at redshifts higher than expected. Even groups of clusters:

Figure 7 from Franck & McGaugh (2016). A group of four protocluster candidates at z = 3.5 that are proximate in space. The left panel is the sky association of the candidates, while the right panel shows their galaxy distribution along the LOS. The ellipses/boxes show the search volume boundaries (Rsearch = 20 cMpc, Δz ± 20 cMpc). Three of these (CCPC-z34-005, CCPC-z34-006, CCPC-z35-003) exist in a chain along the LOS stretching ≤120 cMpc. This may become a supercluster-sized structure at z = 0.

The cluster candidates at high redshift that Jay found are more common in the real universe than seen with mock observations made using the same techniques within the Millennium simulation. Their velocity dispersions are also larger than comparable simulated objects. This implies that the amount of mass that has assembled is larger than expected at that time in LCDM, or that speeds are boosted by something like MOND, or nothing has settled into anything like equilibrium yet. The last option seems most likely to me, but that doesn’t reconcile matters with LCDM, as we don’t see the same effect in the simulation.

MOND also predicts the early emergence of the cosmic web, which would explain the early appearance of very extended structures like the “big ring.” While some of these very large scale structures are probably not real, too many such things are being noted for all of them to be an illusion. The knee-jerk denial of all such structures reminds me of the shock cosmologists expressed at seeing quasars at redshifts as high as 4 (even 4.9! how can it be so?), or clusters at redshift 2, or the original CfA stickman, which surprised the bejeepers out of everybody in 1987. So many times I’ve been told that a thing can’t be true because it violates theoreticians’ preconceptions, only for it to prove true, ultimately becoming something the theorists expected all along.

Well, which is it?

So, as the title says, clusters ruin everything. The residual missing baryon problem that MOND suffers in clusters is both pernicious and persistent. It isn’t the outright falsification that many people presume it to be, but it sure doesn’t sit right. On the other hand, both the collision speeds of clusters (there are more examples now than just the Bullet Cluster) and the early appearance of clusters at high redshift are considerably more natural in MOND than in LCDM. So the data for clusters cut both ways. Taking the most obvious interpretation of the Bullet Cluster data, this one object falsifies both LCDM and MOND.

As always, the conclusion one draws depends on how one weighs the different lines of evidence. This is always an invitation to the bane of cognitive dissonance, accepting that which supports our pre-existing world view and rejecting the validity of evidence that calls it into question. That’s why we have the scientific method. It was application of the scientific method that caused me to change my mind: maybe I was wrong to be so sure of the existence of cold dark matter? Maybe I’m wrong now to take MOND seriously? That’s why I’ve set criteria by which I would change my mind. What are yours?


*In the discussion associated with a debate held at KITP in 2018, one particle physicist said “We should just stop talking about rotation curves.” Straight-up said it out loud! No notes, no irony, no recognition that the dark matter paradigm faces problems beyond rotation curves.

+There are now multiple examples of colliding cluster systems known. They’re a mess (Abell 520 is also called “the train wreck cluster”), so I won’t attempt to describe them all. In Angus & McGaugh (2008) we did note that MOND predicted that high collision speeds would be more frequent than in LCDM, and I have seen nothing to make me doubt that. Indeed, Xavier Hernandez pointed out to me that supersonic shocks like that of the Bullet Cluster are often observed, but basically never occur in cosmological simulations.