This clickbait title is inspired by the clickbait title of a recent story about high redshift galaxies observed by JWST. To speak in the same vernacular:
This story is one variation on the work of Labbe et al. that has been making the rounds since it appeared in Nature in late February. The concern is that these high redshift galaxies are big and bright. They got too big too soon.
Six high redshift galaxies from the JWST CEERS survey, as reported by Labbe et al. (2023). Not much to look at, but bear in mind that these objects are pushing the edge of the observable universe. By that standard, they are both bright and disarmingly obvious.
Stellar masses and redshifts of galaxies from Labbe et al. The pink squares are the initial estimates that appeared in their first preprint in July 2022. The black squares with error bars are from the version published in February 2023. The shaded regions represent where galaxies are too massive too early for LCDM. The lighter region is where very few galaxies were expected to exist; the darker region is a hard no.
The results here are mixed. On the one hand, we were right to be concerned about the initial analysis. This was based in part on a ground-based calibration of the telescope before it was launched. That’s not the same as performance on the sky, which is usually a bit worse than in the lab. JWST breaks that mold, as it is actually performing better than expected. That means the bright-looking galaxies aren’t quite as intrinsically bright as was initially thought.
The correct calibration reduces both the masses and the redshifts of these galaxies. The change isn’t subtle: galaxies are less massive (the mass scale is logarithmic!) and at lower redshift than initially thought. Amusingly, only one galaxy is above redshift 9 when the early talking point was big galaxies at z = 10. (There are other credible candidates for that.) Nevertheless, the objects are clearly there, and bright (i.e., massive). They are also early. We like to obsess about redshift, but there is an inverse relation between redshift and time, so there is not much difference in clock time between z = 7 and 10. Redshift 10 is just under 500 million years after the big bang; redshift 7 just under 750 million years. Those are both in the first billion years out of a current age of over thirteen billion years. The universe was still in its infancy for both.
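For concreteness, here is a minimal sketch of the redshift-time relation using the Planck 2018 parameters bundled with astropy; the choice of cosmology is mine for illustration, but any standard flat LCDM gives similar numbers.

```python
# Clock time at a given redshift in a standard flat LCDM cosmology.
from astropy.cosmology import Planck18 as cosmo

for z in (10, 7, 0):
    print(f"z = {z:2d}: age = {cosmo.age(z):.2f}")
# z = 10: age = 0.47 Gyr
# z =  7: age = 0.75 Gyr
# z =  0: age = 13.79 Gyr
```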
Regardless of your perspective on cosmic time scales, the observed galaxies remain well into LCDM’s danger zone, even with the revised calibration. They are no longer fully in the no-go zone, so I’m sure we’ll see lots of papers explaining how the danger zone isn’t so dangerous after all, and that we should have expected it all along. That’s why it matters more what we predict before an observation than after the answer is known.
*I emphasize science here because one of the reactions I get when I point out that this was predicted is some variation on “That doesn’t count! [because I don’t understand the way it was done.]” And yet, the predictions made and published in advance of the observations keep coming true. It’s almost as if there might be something to this so-called scientific method.
On the one hand, I understand the visceral negative reaction. It is the same reaction I had when MOND first reared its ugly head in my own data for low surface brightness galaxies. This is apparently a psychological phase through which we must pass. On the other hand, the community seems stuck in this rut: it is high time to get past it. I’ve been trying to educate a reluctant audience for over a quarter century now. I know how it pains them because I shared that pain. I got over it. If you’re a scientist still struggling to do so, that’s on you.
There are some things we have to figure out for ourselves. If you don’t believe me, fine, but then get on with doing it yourself instead of burying your head in the sand. The first thing you have to do is give MOND a chance. When I allowed that possibility, I suddenly found myself working less hard than when I was desperately trying to save dark matter. If you come to the problem sure MOND is wrong+, you’ll always get the answer you want.
+I’ve been meaning to write a post (again) about the very real problems MOND suffers in clusters of galaxies. This is an important concern. It is also just one of hundreds of things to consider in the balance. We seem willing to give LCDM infinite mulligans while any problem MOND encounters is immediately seen as fatal. If we hold them to the same standard, both are falsified. If all we care about is explanatory power, LCDM always has that covered. If we care more about successful a priori predictions, MOND is less falsified than LCDM.
There is an important debate to be had on these issues, but we’re not having it. Instead, I frequently encounter people whose first response to any mention of MOND is to cite the bullet cluster in order to shut down discussion. They are unwilling to accept that there is a debate to be had, and are inevitably surprised to learn that LCDM has trouble explaining the bullet cluster too, let alone other clusters. It’s almost as if they are just looking for an excuse to not have to engage in serious thought that might challenge their belief system.
Cosmology is challenged at present by two apparently unrelated problems: the apparent formation of large galaxies at unexpectedly high redshift observed by JWST, and the tension between the value of the Hubble constant obtained by traditional methods and that found in multi-parameter fits to the acoustic power spectrum of the cosmic microwave background (CMB).
Early results in precision cosmology from WMAP obtained estimates of the Hubble constant h = 0.73 ± 0.03 [I adopt the convention h = H0/(100 km s⁻¹ Mpc⁻¹) so as not to have to write the units every time.] This was in good agreement with contemporaneous local estimates from the Hubble Space Telescope Key Project to Measure the Hubble Constant: h = 0.72 ± 0.08. This is what Hubble was built to do. It did it, and the vast majority of us were satisfied* at the time that it had succeeded in doing so.
Since that time, a tension has emerged as accuracy has improved. Precise local measures** give h = 0.73 ± 0.01 while fits to the Planck CMB data give h = 0.6736 ± 0.0054. This is around the 5 sigma threshold for believing there is a real difference. Our own results exclude h < 0.705 at 95% confidence. A value as low as 67 is right out.
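That significance claim is easy to check with the numbers just quoted; a quick sketch, treating both errors as Gaussian:

```python
# Back-of-envelope significance of the Hubble tension.
local, sig_local = 0.73, 0.01
planck, sig_planck = 0.6736, 0.0054

tension = (local - planck) / (sig_local**2 + sig_planck**2) ** 0.5
print(f"{tension:.1f} sigma")  # ~5.0 sigma
```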
Given the history of the distance scale, it is tempting to suppose that local measures are at fault. This seems to be the prevailing presumption, and it is just a matter of figuring out what went wrong this time. Of course, things can go wrong with the CMB too, so this way of thinking raises the ever-present danger of confirmation bias, ever a scourge in cosmology. Looking at the history of H0 determinations, it is not local estimates of H0 but rather those from CMB fits that have diverged from the concordance region.
The cosmic mass density parameter and Hubble constant. These covary in CMB fits along the line Ωmh³ = 0.09633 ± 0.00029 (red). Also shown are best-fit values from CMB experiments over time, as labeled (WMAP3 is the earliest shown; Planck2018 the most recent). These all fall along the line of constant Ωmh³, but have diverged over time from concordance with local data. There are many examples of local constraints; for illustration I show examples from Cole et al. (2005), Mohayaee & Tully (2005), Tully et al. (2016), and Riess et al. (2001). The divergence has occurred as finer angular scales have been observed in the CMB power spectrum and correspondingly higher multipoles ℓ have been incorporated into fits.
The divergence between local and CMB-determined H0 has occurred as finer angular scales have been observed in the CMB power spectrum and correspondingly higher multipoles ℓ have been incorporated into fits. That suggests that the issue resides in the high-ℓ part of the CMB data*** rather than in some systematic in the local determinations. Indeed, if one restricts the analysis of the Planck (“TT”) data to ℓ < 801, one obtains h = 0.70 ± 0.02 (see their Fig. 22), consistent with earlier CMB estimates as well as with local ones.
Photons must traverse the entire universe to reach us from the surface of last scattering. Along the way, they are subject to 21 cm absorption by neutral hydrogen, Thomson scattering by free electrons after reionization, blue and redshifting from traversing gravitational potentials in an expanding universe (the late ISW effect, aka the Rees-Sciama effect), and deflection by gravitational lensing. Lensing is a subtle effect that blurs the surface of last scattering and adds a source of fluctuations not intrinsic to it. The amount of lensing can be calculated from the growth rate of structure; anomalously fast galaxy formation would induce extra power at high ℓ.
Early Galaxy Formation
JWST observations evince the early emergence of massive galaxies at z ≈ 10. This came as a great surprise theoretically, but the empirical result extends previous observations that galaxies grew too big too fast. Taking the data at face value, more structure appears to exist in the early universe than anticipated in the standard calculation. This would cause excess lensing and an anomalous source of power on fine scales. This would be a real, physical anomaly (new physics), not some mistake in the processing of CMB data (which may of course happen, just as with any other sort of data). Here are the Planck data:
Unbinned Planck data with the best-fit power spectrum (red line) and a model (blue line) with h = 0.73 and Ωm adjusted to maintain constant Ωmh³. The ratio of the models is shown at bottom: that with h = 0.67 divided by that with h = 0.73. The difference is real; h = 0.67 gives the better fit****. The ratio illustrates the subtle need for slightly greater power with increasing ℓ than provided by the model with h = 0.73. Perhaps this high-ℓ power has a contribution from anomalous gravitational lensing that skews the fit and drives the Hubble tension.
If excess lensing by early massive galaxies occurs but goes unrecognized, fits to the CMB data would be subtly skewed. There would be more power at high ℓ than there should be. Fitting this extra power would drive up Ωm and other relevant parameters*****. In response, it would be necessary to reduce h to maintain a constant Ωmh³. This would explain the temporal evolution of the best fit values, so I posit that this effect may be driving the Hubble tension.
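The arithmetic of that degeneracy is simple enough to sketch; the Ωm values below are illustrative, chosen to bracket the WMAP-era and Planck fits:

```python
# CMB fits slide along Ωm·h³ ≈ const, so pushing Ωm up to absorb
# extra high-ℓ power forces h down.
OMH3 = 0.09633  # the degeneracy line quoted above

def h_of_Om(Om):
    return (OMH3 / Om) ** (1.0 / 3.0)

for Om in (0.27, 0.30, 0.315):
    print(f"Om = {Om:.3f} -> h = {h_of_Om(Om):.3f}")
# Om = 0.270 -> h = 0.709  (near the WMAP-era concordance)
# Om = 0.300 -> h = 0.685
# Om = 0.315 -> h = 0.674  (near Planck 2018)
```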
The early formation of massive galaxies would represent a real, physical anomaly. This is unexpected in ΛCDM but not unanticipated. Sanders (1998) explicitly predicted the formation of massive galaxies by z = 10. Excess gravitational lensing by these early galaxies is a natural consequence of his prediction. Other things follow as well: early reionization, an enhanced ISW/Rees-Sciama effect, and high redshift 21 cm absorption. In short, everything that is puzzling about the early universe from the ΛCDM perspective was anticipated and often explicitly predicted in advance.
The new physics driving the prediction of Sanders (1998) is MOND. This is the same driver of anomalies in galaxy dynamics, and perhaps now also of the Hubble tension. These predictive successes must be telling us something, and highlight the need for a deeper theory. Whether this finally breaks ΛCDM or we find yet another unsatisfactory out is up to others to decide.
*Indeed, the ± 0.08 rather undersells the accuracy of the result. I quote that because the Key Project team gave it as their bottom line. However, if you read the paper, you see statements like h = 0.71 ± 0.02 (random) ± 0.06 (systematic). The first is the statistical error of the experiment, while the latter is an estimate of how badly it might go wrong (e.g., susceptibility to a recalibration of the Cepheid scale). With the benefit of hindsight, we can say now that the Cepheid calibration has not changed that much: they did indeed get it right to something more like ± 0.02 than ± 0.08.
***I recall being at a conference when the Planck data were fresh where people were visibly puzzled at the divergence of their fit from the local concordance region. It was obvious to everyone that this had come about when the high ℓ data were incorporated. We had no idea why, and people were reluctant to contradict the Authority of the CMB fit, but it didn’t sit right. Since that time, the Planck result has been normalized to the point where I hear its specific determination of cosmic parameters used interchangeably with ΛCDM. And indeed, the best fit is best for good reason; determinations that are in conflict with Planck are either wrong or indicate new physics.
****The sharp eye will also notice a slight offset in the absolute scale. This is fungible with the optical depth due to reionization, which acts as a light fog covering the whole sky: higher optical depth τ depresses the observed amplitude of the CMB. The need to fit the absolute scale as well as the tilt in the shape of the power spectrum would explain another temporal evolution in the best-fit CMB parameters, that of declining optical depth from WMAP and early (2013) Planck (τ = 0.09) to 2018 Planck (τ = 0.0544).
*****The amplitude of the power spectrum σ8 would also be affected. Perhaps unsurprisingly, there is also a tension between local and CMB determinations of this parameter. All parameters must be fit simultaneously, so how it comes out in the wash depends on the details of the history of the nonlinear growth of structure. Such a calculation is beyond the scope of this note. Indeed, I hope someone else takes up the challenge, as I tire of solving all the problems only to have them ignored. Better if everyone else comes to grips with this for themselves.
I noted last time that, in the rush to analyze the first of the JWST data, “some of these candidate high redshift galaxies will fall by the wayside.” As Maurice Aabe notes in the comments there, this has already happened.
I was concerned because of previous work with Jay Franck in which we found that photometric redshifts were simply not adequately precise to identify the clusters and protoclusters we were looking for. Consequently, we made it a selection criterion when constructing the CCPC to require spectroscopic redshifts. The issue then was that it wasn’t good enough to have a rough idea of the redshift, as the photometric method often provides (what exactly it provides depends in a complicated way on the redshift range, the stellar population modeling, and the wavelength range covered by the observational data that is available). To identify a candidate protocluster, you want to know that all the potential member galaxies are really at the same redshift.
This requirement is somewhat relaxed for the field population, in which a common approach is to ask broader questions of the data like “how many galaxies are at z ~ 6? z ~ 7?” etc. Photometric redshifts, when done properly, ought to suffice for this. However, I had noticed in Jay’s work that there were times when apparently reasonable photometric redshift estimates went badly wrong. So it made the ganglia twitch when I noticed that in early JWST work – specifically Table 2 of the first version of a paper by Adams et al. – there were seven objects with candidate photometric redshifts, of which three already had a preexisting spectroscopic redshift. The photometric redshifts were mostly around z ~ 9.7, but the three spectroscopic redshifts were all smaller: two at z ~ 7.6, one at z ~ 8.5.
Three objects are not enough to infer a systematic bias, so I made a mental note and moved on. But given our previous experience, it did not inspire confidence that all the available cases disagreed, and that all the spectroscopic redshifts were lower than the photometric estimates. These things combined to give this observer a serious case of “the heebie-jeebies.”
Adams et al. have now posted a revised analysis in which many (not all) redshifts change, and change by a lot. Here is their new Table 4:
There are some cases here that appear to confirm and improve the initial estimate of a high redshift. For example, SMACS-z11e had a very uncertain initial redshift estimate. In the revised analysis, it is still at z~11, but with much higher confidence.
That said, it is hard to put a positive spin on these numbers. 23 of 31 redshifts change, and many change drastically. Those that change all become smaller. The highest surviving redshift estimate is z ~ 15 for SMACS-z16b. Among the objects with very high candidate redshifts, some are practically local (e.g., SMACS-z12a, F150DB-075, F150DA-058).
So… I had expected that this could go wrong, but I didn’t think it would go this wrong. I was concerned about the photometric redshift method – how well we can model stellar populations, especially at young ages dominated by short-lived stars that in the early universe are presumably lower metallicity than well-studied nearby examples; the degeneracies between galaxies at very different redshifts but presenting similar colors over a finite range of observed passbands; dust (the eternal scourge of observational astronomy, expected to be an especially severe affliction in the ultraviolet that gets redshifted into the near-IR for high-z objects, both because dust is very efficient at scattering UV photons and because this efficiency varies a lot with metallicity and the exact grain size distribution of the dust); when is a dropout really a dropout indicating the location of the Lyman break and when is it just a lousy upper limit on a shabby detection; etc. – I could go on, but I think I already have. It will take time to sort these things out, even in the best of worlds.
We do not live in the best of worlds.
It appears that a big part of the current uncertainty is a calibration error. There is a pipeline for handling JWST data that has an in-built calibration for how many counts in a JWST image correspond to what astronomical magnitude. The JWST instrument team warned us that the initial estimate of this calibration would “improve as we go deeper into Cycle 1” – see slide 13 of Jane Rigby’s AAS presentation.
I was not previously aware of this caveat, though I’m certainly not surprised by it. This is how these things work – one makes an initial estimate based on the available data, and one improves it as more data become available. Apparently, JWST is outperforming its specs, so it is seeing as much as 0.3 magnitudes deeper than anticipated. This means that people were inferring objects to be that much too bright, hence the appearance of lots of galaxies that seem to be brighter than expected, and an apparent systematic bias to high z for photometric redshift estimators.
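As a sanity check on what 0.3 magnitudes means in flux, a one-liner using the standard magnitude definition (nothing JWST-specific here):

```python
# Δm = -2.5 log10(f1/f2), so a 0.3 mag zero-point error corresponds to
flux_ratio = 10 ** (0.3 / 2.5)
print(f"{flux_ratio:.2f}")  # ≈ 1.32: inferred fluxes were ~30% too bright
```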
I was not at the AAS meeting, let alone Dr. Rigby’s presentation there. Even if I had been, I’m not sure I would have appreciated the potential impact of that last bullet point on nearly the last slide. So I’m not the least bit surprised that this error has propagated into the literature. This is unfortunate, but at least this time it didn’t lead to something as bad as the Challenger space shuttle disaster in which the relevant warning from the engineers was reputed to have been buried in an obscure bullet point list.
So now we need to take a deep breath and do things right. I understand the urgency to get the first exciting results out, and they are still exciting. There are still some interesting high z candidate galaxies, and lots of empirical evidence predating JWST indicating that galaxies may have become too big too soon. However, we can only begin to argue about the interpretation of this once we agree to what the facts are. At this juncture, it is more important to get the numbers right than to post early, potentially ill-advised takes on arXiv.
That said, I’d like to go back to writing my own ill-advised take to post on arXiv now.
There has been a veritable feeding frenzy going on with the first JWST data. This is to be expected. Also to be expected is that some of these early results will ultimately prove to have been premature. So – caveat emptor! That said, I want to highlight one important aspect of these early results, there being too many to do them all justice.
The basic theme is that people are finding very faint yet surprisingly bright galaxies that are consistent with being at redshift 9 and above. The universe has expanded by a factor of ten since then, when it was barely half a billion years old. That’s a long time to you and me, and even to a geologist, but it is a relatively short time for a universe that is now over 13 billion years old, and it isn’t a lot of time for objects as large as galaxies to form.
In the standard LCDM cosmogony, we expect large galaxies to build up from the merger of many smaller galaxies. These smaller galaxies form first, and many of the stars that end up in big galaxies may have formed in these smaller galaxies prior to merging. So when we look to high redshift, we expect to catch this formation-by-merging process in action. We should see lots of small, actively star forming protogalactic fragments (Searle-Zinn fragments in Old School speak) before they’ve had time to assemble into the large galaxies we see relatively nearby to us at low redshift.
So what are we seeing? Here is one example from Labbe et al.:
JWST images of a candidate galaxy at z ~ 10 in different filters, ordered by increasing wavelength from optical light (left) to the mid-infrared (right). Image credit: Labbe et al.
Not much to look at, is it? But really it is pretty awesome for light that has been traveling 13 billion years to get to us and had its wavelength stretched by a factor of ten. Measuring the brightness in these various passbands enables us to estimate both its redshift and stellar mass:
The JWST data plotted as a spectrum (points) with template stellar population models (lines) that indicate a mass of nearly 85 billion suns at z = 9.92. Image credit: Labbe et al.
Eighty-five billion solar masses is a lot of stars. It’s a bit bigger than the Milky Way, which has had the full 13+ billion years to make its complement of roughly 60 billion solar masses of stars. Object 19424 is a big galaxy, and it grew up fast.
In LCDM, it is not particularly hard to build a model that forms a lot of stars early on. What is challenging is assembling this many into a single object. We should see lots of much smaller fragments (and may yet still) but we shouldn’t see many really big objects like this already in place. How many there are is a critical question.
Labbe et al. make an estimate of the stellar mass density in massive high redshift galaxies, and find it to be rather a lot. This is a fraught exercise in the best of circumstances when one has excellent data for thousands of galaxies. Here we have only a handful. We must also assume that the small region surveyed is typical, which it may not be. Moreover, the photometric redshift method illustrated above is fraught. It looks convincing. It is convincing. It also gives me the heebie-jeebies. Many times I have seen photometric redshifts turn out to be wrong when good spectroscopic data are obtained. But usually the method works, and it’s what we got so far, so let’s see where this ride takes us.
A short paper that nicely illustrates the prime issue is provided by Prof. Boylan-Kolchin. His key figure:
The integrated mass density of stars as a function of the stellar mass of individual galaxies, or equivalently, the baryons available to form stars in their dark matter halos. The data of Labbe et al. reside in the forbidden region (shaded) where there are more stars than there is normal matter from which to make them. Image credit: Boylan-Kolchin.
The basic issue is that there are too many stars in these big galaxies. There are many astrophysical uncertainties about how stars form: how fast, how efficiently, with what mass distribution, etc., etc. – much of the literature is obsessed with these issues. In contrast, once the parameters of cosmology are known, as we think them to be, it is relatively straightforward to calculate the number density of dark matter halos as a function of mass at a given redshift. This is the dark skeleton on which large scale structure depends; getting this right is absolutely fundamental to the cold dark matter picture.
Every dark matter halo should host a universal fraction of normal matter. The baryon fraction (fb) is known to be very close to 16% in LCDM. Prof. Boylan-Kolchin points out that this sets an important upper limit on how many stars could possibly form. The shaded region in the figure above is excluded: there simply isn’t enough normal matter to make that many stars. The data of Labbe et al. fall in this region, which should be impossible.
The data only fall a little way into the excluded region, so maybe it doesn’t look that bad, but the real situation is more dire. Star formation is very inefficient, but the shaded region assumes that all the available material has been converted into stars. A more realistic expectation is closer to the gray line (ε = 0.1), not the hard limit where all the available material has been magically turned into stars with a cosmic snap of the fingers.
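To put rough numbers on that budget, here is a minimal sketch using the ~85 billion solar mass object discussed above; the implied halo masses are my own illustrative arithmetic, not values from the paper:

```python
# Stellar mass is capped at Mstar <= eps * fb * Mhalo.
fb = 0.16        # cosmic baryon fraction in LCDM
Mstar = 8.5e10   # solar masses, the big galaxy discussed above

for eps in (1.0, 0.1):
    Mhalo_min = Mstar / (eps * fb)
    print(f"eps = {eps}: requires Mhalo >= {Mhalo_min:.1e} Msun")
# eps = 1.0 (the hard limit): Mhalo >= 5.3e11 Msun
# eps = 0.1 (realistic):      Mhalo >= 5.3e12 Msun
```

Halos that massive should be exceedingly rare within the first half billion years, which is the crux of the problem.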
Indeed, I would argue that the real efficiency ε is likely lower than 0.1 as it is locally. This runs into problems with precursors of the JWST result, so we’ve already been under pressure to tweak this free parameter upwards. Turning it up to eleven is just the inevitable consequence of needing to get more stars to form in the first big halos to appear sooner than the theory naturally predicts.
So, does this spell doom for LCDM? I doubt it. There are too many uncertainties at present. It is an intriguing result, but it will take a lot of follow-up work to sort out. I expect some of these candidate high redshift galaxies will fall by the wayside, and turn out to be objects at lower redshift. How many, and how that impacts the basic result, remains to be determined.
After years of testing LCDM, it would be ironic if it could be falsified by this one simple (expensive, technologically amazing) observation. Still, it is something important to watch, as it is at least conceivable that we could measure a stellar mass density that is impossibly high. Whither then?
I went on a bit of a twitter bender yesterday about the early claims about high mass galaxies at high redshift, which went on long enough I thought I should share it here.
For those watching the astro community freak out about bright, high redshift galaxies being detected by JWST, some historical context in an amusing anecdote…
The 1998 October conference was titled “After the dark ages, when galaxies were young (the universe at 2 < z < 5).” That right there tells you what we were expecting. Redshift 5 was high – when the universe was a mere billion years old. Before that, not much going on (dark ages).
This was when the now famous SN Ia results corroborating the acceleration of the expansion rate predicted by concordance LCDM were shiny and new. Many of us already strongly suspected we needed to put the Lambda back in cosmology; the SN results sealed the deal.
One of the many lines of evidence leading to the rehabilitation of Lambda – previously anathema – was that we needed a bit more time to get observed structures to form. One wants the universe to be older than its contents, an off-and-on problem with globular clusters forever.
A natural question that arises is just how early do galaxies form? The horizon of z=7 came up in discussion at lunch, with those of us who were observers wondering how we might access that (JWST being the answer long in the making).
Famed simulator Carlos Frenk was there, and assured us not to worry. He had already done LCDM simulations, and knew the timing.
“There is nothing above redshift 7.”
He also added “don’t quote me on that,” which I’ve respected until now, but I think the statute of limitations has expired.
Everyone present immediately pulled out their wallet and chipped in $5 to endow the “7-up” prize for the first persuasive detection of an object at or above redshift seven.
A committee was formed to evaluate claims that might appear in the literature, composed of Carlos, Vera Rubin, and Bruce Partridge. They made it clear that they would require a high standard of evidence: at least two well-identified lines; no dropouts or photo-z’s.
That standard wasn’t met for over a decade, with z=6.96 being the record holder for a while. The 7-up prize was entirely tongue in cheek, and everyone forgot about it. Marv Leventhal had offered to hold the money; I guess he ended up pocketing it.
I believe the winner of the 7-up prize should have been Nial Tanvir for GRB090423 at z~8.2, but I haven’t checked if there might be other credible claims, and I can’t speak for the committee.
At any rate, I don’t think anyone would now seriously dispute that there are galaxies at z>7. The question is how big do they get, how early? And the eternal mobile goalpost, what does LCDM really predict?
Carlos was not wrong. There is no hard cutoff, so I won’t quibble about arbitrary boundaries like z=7. It takes time to assemble big galaxies, & LCDM does make a reasonably clear prediction about the timeline for that to occur. Basically, they shouldn’t be all that big that soon.
Here is a figure adapted from the thesis Jay Franck wrote here 5 years ago using Spitzer data (round points). It shows the characteristic brightness (Schechter M*) of galaxies as a function of redshift. The data diverge from the LCDM prediction (squares) as redshift increases.
The divergence happens because real galaxies are brighter (more stellar mass has assembled into a single object) than predicted by the hierarchical timeline expected in LCDM.
Remarkably, the data roughly follow the green line, which is an L* galaxy magically put in place at the inconceivably high redshift of z=10. Galaxies seem to have gotten big impossibly early. This is why you see us astronomers flipping our lids at the JWST results. Can’t happen.
Except that it can, and was predicted to do so by Bob Sanders a quarter century ago: “Objects of galaxy mass are the first virialized objects to form (by z=10) and larger structure develops rapidly.”
The reason is MOND. After decoupling, the baryons find themselves bereft of radiation support and suddenly deep in the low acceleration regime. Structure grows fast and becomes nonlinear almost immediately. It’s as if there is tons more dark matter than we infer nowadays.
I refereed that paper, and was a bit disappointed that Bob had beat me to it: I was doing something similar at the time, with similar results. Instead of being hard to form quickly, as in LCDM, structure is practically impossible to avoid in MOND.
He beat me to it, so I abandoned writing that paper. No need to say the same thing twice! Didn’t think we’d have to wait so long to test it.
I’ve reviewed this many times. Most recently in January, in anticipation of JWST, on my blog.
But you get the point. Every time you see someone describe the big galaxies JWST is seeing as unexpected, what they mean is unexpected in LCDM. It doesn’t surprise me at all. It is entirely expected in MOND, and was predicted a priori.
The really interesting thing to me, though, remains what LCDM really predicts. I already see people rationalizing excuses. I’ve seen this happen before. Many times. That’s why the field is in a rut.
Progress towards the dark land.
So are we gonna talk our way out of it this time? I’m no longer interested in how; I’m sure someone will suggest something that will gain traction no matter how unsatisfactory.
Special pleading.
The only interesting question is if LCDM makes a prediction here that can’t be fudged. If it does, then it can be falsified. If it doesn’t, it isn’t science.
Experimentalist with no clue what he has signed up for about to find out how hard it is to hunt down an invisible target.
But can we? Is LCDM subject to falsification? Or will we yet again gaslight ourselves into believing that we knew it all along?
The fine-tuning problem encountered by dark matter models that I talked about last time is generic. The knee-jerk reaction of most workers seems to be “let’s build a more sophisticated model.” That’s reasonable – if there is any hope of recovery. The attitude is that dark matter has to be right so something has to work out. This fails to even contemplate the existential challenge that the fine-tuning problem imposes.
Perhaps I am wrong to be pessimistic, but my concern is well informed by years upon years trying to avoid this conclusion. Most of the claims I have seen to the contrary are just specialized versions of the generic models I had already built: they contain the same failings, but these go unrecognized because the presumption is that something has to work out, so people are often quick to declare “close enough!”
In my experience, fixing one thing in a model often breaks something else. It becomes a game of cosmic whack-a-mole. If you succeed in suppressing the scatter in one relation, it pops out somewhere else. A model that seems like it passes the test you built it to pass flunks as soon as you confront it with another test.
Our efforts to evade one fine-tuning problem often lead to another. This has been my general experience in many efforts to construct viable dark matter models. It is like squeezing a tube of toothpaste: every time we smooth out the problems in one part of the tube, we simply squeeze them into a different part. There are many published claims to solve this problem or that, but they frequently fail to acknowledge (or notice) that the purported solution to one problem creates another.
One example is provided by Courteau and Rix (1999). They invoke dark matter domination to explain the lack of residuals in the Tully-Fisher relation. In this limit, Mb/R ≪ MDM/R and the baryons leave no mark on the rotation curve. This can reconcile the model with the Tully-Fisher relation, but it makes a strong prediction. It is not just the flat rotation speed that is the same for galaxies of the same mass, but the entirety of the rotation curve, V(R) at all radii. The stars are just convenient tracers of the dark matter halo in this limit; the dynamics are entirely dominated by the dark matter. The hypothesized solution fixes the problem that is addressed, but creates another problem that is not addressed, in this case the observed variation in rotation curve shape.
The limit of complete dark matter domination is not consistent with the shapes of rotation curves. Galaxies of the same baryonic mass have the same flat outer velocity (Tully-Fisher), but the shapes of their rotation curves vary systematically with surface brightness (de Blok & McGaugh, 1996; Tully and Verheijen, 1997; McGaugh and de Blok, 1998a,b; Swaters et al., 2009, 2012; Lelli et al., 2013, 2016c). High surface brightness galaxies have steeply rising rotation curves while LSB galaxies have slowly rising rotation curves (Fig. 6). This systematic dependence of the inner rotation curve shape on the baryon distribution excludes the SH hypothesis in the limit of dark matter domination: the distribution of the baryons clearly has an impact on the dynamics.
Fig. 6. Rotation curve shapes and surface density. The left panel shows the rotation curves of two galaxies, one HSB (NGC 2403, open circles) and one LSB (UGC 128, filled circles) (de Blok & McGaugh, 1996; Verheijen and de Blok, 1999; Kuzio de Naray et al., 2008). These galaxies have very nearly the same baryonic mass (~ 10¹⁰ M⊙), and asymptote to approximately the same flat rotation speed (~ 130 km s⁻¹). Consequently, they are indistinguishable in the Tully-Fisher plane (Fig. 4). However, the inner shapes of the rotation curves are readily distinguishable: the HSB galaxy has a steeply rising rotation curve while the LSB galaxy has a more gradual rise. This is a general phenomenon, as illustrated by the central density relation (right panel: Lelli et al., 2016c) where each point is one galaxy; NGC 2403 and UGC 128 are highlighted as open points. The central dynamical mass surface density (Σdyn) measured by the rate of rise of the rotation curve (Toomre, 1963) correlates with the central surface density of the stars (Σ0) measured by their surface brightness. The line shows 1:1 correspondence: no dark matter is required near the centers of HSB galaxies. The need for dark matter appears below 1000 M⊙ pc⁻² and grows systematically greater to lower surface brightness. This is the origin of the statement that LSB galaxies are dark matter dominated.
A more recent example of this toothpaste tube problem for SH-type models is provided by the EAGLE simulations (Schaye et al., 2015). These are claimed (Ludlow et al., 2017) to explain one aspect of the observations, the radial acceleration relation (McGaugh et al., 2016), but fail to explain another, the central density relation (Lelli et al., 2016c) seen in Fig. 6. This was called the ‘diversity’ problem by Oman et al. (2015), who note that the rotation velocity at a specific, small radius (2 kpc) varies considerably from galaxy to galaxy observationally (Fig. 6), while simulated galaxies show essentially no variation, with only a small amount of scatter. This diversity problem is exactly the same problem that was pointed out before [compare Fig. 5 of Oman et al. (2015) to Fig. 14 of McGaugh and de Blok (1998a)].
There is no single, universally accepted standard galaxy formation model, but a common touchstone is provided by Mo et al. (1998). Their base model has a constant ratio of luminous to dark mass md [their assumption (i)], which provides a reasonable description of the sizes of galaxies as a function of mass or rotation speed (Fig. 7). However, this model predicts the wrong slope (3 rather than 4) for the Tully-Fisher relation. This is easily remedied by making the luminous mass fraction proportional to the rotation speed (md ∝ Vf), which then provides an adequate fit to the Tully-Fisher relation. This has the undesirable effect of destroying the consistency of the size-mass relation. We can have one or the other, but not both.
Fig. 7. Galaxy size (as measured by the exponential disk scale length, left) and mass (right) as a function of rotation velocity. The latter is the Baryonic Tully-Fisher relation; the data are the same as in Fig. 4. The solid lines are Mo et al. (1998) models with constant md (their equations 12 and 16). This is in reasonable agreement with the size-speed relation but not the BTFR. The latter may be fit by adopting a variable md ∝ Vf (dashed lines), but this ruins agreement with the size-speed relation. This is typical of dark matter models in which fixing one thing breaks another.
This failure of the Mo et al. (1998) model provides another example of the toothpaste tube problem. By fixing one problem, we create another. The only way forward is to consider more complex models with additional degrees of freedom.
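Where the slope of 3 comes from is simple bookkeeping: halos defined at a fixed overdensity have Mhalo ∝ V³, so a constant luminous fraction md inherits that slope, and only md ∝ Vf recovers the observed slope of 4. A toy check (the normalizations are made up for illustration):

```python
import numpy as np

V = np.array([50.0, 100.0, 200.0])    # rotation speeds, km/s
Mhalo = 1e12 * (V / 200.0) ** 3       # Mhalo ∝ V^3 at fixed overdensity

md = 0.05                             # constant luminous fraction
Mb_const = md * Mhalo                 # Mb ∝ V^3
Mb_vary = md * (V / 200.0) * Mhalo    # md ∝ V gives Mb ∝ V^4

print(np.polyfit(np.log10(V), np.log10(Mb_const), 1)[0])  # 3.0
print(np.polyfit(np.log10(V), np.log10(Mb_vary), 1)[0])   # 4.0
```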
Feedback
It has become conventional to invoke ‘feedback’ to address the various problems that afflict galaxy formation theory (Bullock & Boylan-Kolchin, 2017; De Baerdemaker and Boyd, 2020). It goes by other monikers as well, variously being called ‘gastrophysics’ for gas phase astrophysics, or simply ‘baryonic physics’ for any process that might intervene between the relatively simple (and calculable) physics of collisionless cold dark matter and messy observational reality (which is entirely illuminated by the baryons). This proliferation of terminology obfuscates the boundaries of the subject and precludes a comprehensive discussion.
Feedback is not a single process, but rather a family of distinct processes. The common feature of different forms of feedback is the deposition of energy from compact sources into the surrounding gas of the interstellar medium. This can, at least in principle, heat gas and drive large-scale winds, either preventing gas from cooling and forming too many stars, or ejecting it from a galaxy outright. This in turn might affect the distribution of dark matter, though the effect is weak: one must move a lot of baryons for their gravity to impact the dark matter distribution.
There are many kinds of feedback, and many devils in the details. Massive, short-lived stars produce copious amounts of ultraviolet radiation that heats and ionizes the surrounding gas and erodes interstellar dust. These stars also produce strong winds through much of their short (~ 10 Myr) lives, and ultimately explode as Type II supernovae. These three mechanisms each act in a distinct way on different time scales. That’s just the feedback associated with massive stars; there are many other mechanisms (e.g., Type Ia supernovae are distinct from Type II supernovae, and Active Galactic Nuclei are a completely different beast entirely). The situation is extremely complicated. While the various forms of stellar feedback are readily apparent on the small scales of stars, it is far from obvious that they have the desired impact on the much larger scales of entire galaxies.
For any one kind of feedback, there can be many substantially different implementations in galaxy formation simulations. Independent numerical codes do not generally return compatible results for identical initial conditions (Scannapieco et al., 2012): there is no consensus on how feedback works. Among the many different computational implementations of feedback, at most one can be correct.
Most galaxy formation codes do not resolve the scale of single stars where stellar feedback occurs. They rely on some empirically calibrated, analytic approximation to model this ‘sub-grid physics’ — which is to say, they don’t simulate feedback at all. Rather, they simulate the accumulation of gas in one resolution element, then follow some prescription for what happens inside that unresolved box. This provides ample opportunity for disputes over the implementation and effects of feedback. For example, feedback is often cited as a way to address the cusp-core problem — or not, depending on the implementation (e.g., Benítez-Llambay et al., 2019; Bose et al., 2019; Di Cintio et al., 2014; Governato et al., 2012; Madau et al., 2014; Read et al., 2019). High resolution simulations (Bland-Hawthorn et al., 2015) indicate that the gas of the interstellar medium is less affected by feedback effects than assumed by typical sub-grid prescriptions: most of the energy is funneled through the lowest density gas — the path of least resistance — and is lost to the intergalactic medium without much impacting the galaxy in which it originates.
From the perspective of the philosophy of science, feedback is an auxiliary hypothesis invoked to patch up theories of galaxy formation. Indeed, since there are many distinct flavors of feedback that are invoked to carry out a variety of different tasks, feedback is really a suite of auxiliary hypotheses. This violates parsimony to an extreme and brutal degree.
This concern for parsimony is not specific to any particular feedback scheme; it is not just a matter of which feedback prescription is best. The entire approach is to invoke as many free parameters as necessary to solve any and all problems that might be encountered. There is little doubt that such models can be constructed to match the data, even data that bear little resemblance to the obvious predictions of the paradigm (McGaugh and de Blok, 1998a; Mo et al., 1998). So the concern is not whether ΛCDM galaxy formation models can explain the data; it is that they can’t not.
One could go on at much greater length about feedback and its impact on galaxy formation. This is pointless. It is a form of magical thinking to expect that the combined effects of numerous complicated feedback effects are going to always add up to looking like MOND in each and every galaxy. It is also the working presumption of an entire field of modern science.
OK, basic review is over. Shit’s gonna get real. Here I give a short recounting of the primary reason I came to doubt the dark matter paradigm. This is entirely conventional – my concern about the viability of dark matter is a contradiction within its own context. It had nothing to do with MOND, which I was blissfully ignorant of when I ran head-long into this problem in 1994. Most of the community chooses to remain blissfully ignorant, which I understand: it’s way more comfortable. It is also why the field has remained mired in the ’90s, with all the apparent progress since then being nothing more than the perpetual reinvention of the same square wheel.
To make a completely generic point that does not depend on the specifics of dark matter halo profiles or the details of baryonic assembly, I discuss two basic hypotheses for the distribution of disk galaxy size at a given mass. These broad categories I label SH (Same Halo) and DD (Density begets Density) following McGaugh and de Blok (1998a). In both cases, galaxies of a given baryonic mass are assumed to reside in dark matter halos of a corresponding total mass. Hence, at a given halo mass, the baryonic mass is the same, and variations in galaxy size follow from one of two basic effects:
SH: variations in size follow from variations in the spin of the parent dark matter halo.
DD: variations in surface brightness follow from variations in the density of the dark matter halo.
Recall that at a given luminosity, size and surface brightness are not independent, so variation in one corresponds to variation in the other. Consequently, we have two distinct ideas for why galaxies of the same mass vary in size. In SH, the halo may have the same density profile ρ(r), and it is only variations in angular momentum that dictate variations in the disk size. In DD, variations in the surface brightness of the luminous disk are reflections of variations in the density profile ρ(r) of the dark matter halo. In principle, one could have a combination of both effects, but we will keep them separate for this discussion, and note that mixing them defeats the virtues of each without curing their ills.
The SH hypothesis traces back to at least Fall and Efstathiou (1980). The notion is simple: variations in the size of disks correspond to variations in the angular momentum of their host dark matter halos. The mass destined to become a dark matter halo initially expands with the rest of the universe, reaching some maximum radius before collapsing to form a gravitationally bound object. At the point of maximum expansion, the nascent dark matter halos torque one another, inducing a small but non-zero net spin in each, quantified by the dimensionless spin parameter λ (Peebles, 1969). One then imagines that as a disk forms within a dark matter halo, it collapses until it is centrifugally supported: λ → 1 from some initially small value (typically λ ≈ 0.05, Barnes & Efstathiou, 1987, with some modest distribution about this median value). The spin parameter thus determines the collapse factor and the extent of the disk: low spin halos harbor compact, high surface brightness disks while high spin halos produce extended, low surface brightness disks.
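In its simplest form (e.g., the machinery of Mo et al., 1998), this maps spin onto disk scale length roughly as Rd ≈ λR200/√2. A minimal sketch, with an assumed Milky-Way-ish halo radius:

```python
import numpy as np

R200 = 200.0  # kpc, assumed halo virial radius for illustration

for lam in (0.025, 0.05, 0.10):
    Rd = lam * R200 / np.sqrt(2.0)
    print(f"lambda = {lam:.3f} -> Rd ~ {Rd:4.1f} kpc")
# lambda = 0.025 -> Rd ~  3.5 kpc  (compact, HSB)
# lambda = 0.050 -> Rd ~  7.1 kpc
# lambda = 0.100 -> Rd ~ 14.1 kpc  (extended, LSB)
```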
The distribution of primordial spins is fairly narrow, and does not correlate with environment (Barnes & Efstathiou, 1987). The narrow distribution was invoked as an explanation for Freeman’s Law: the small variation in spins from halo to halo resulted in a narrow distribution of disk central surface brightness (van der Kruit, 1987). This association, while apparently natural, proved to be incorrect: when one goes through the mathematics to transform spin into scale length, even a narrow distribution of initial spins predicts a broad distribution in surface brightness (Dalcanton, Spergel, & Summers, 1997; McGaugh and de Blok, 1998a). Indeed, it predicts too broad a distribution: to prevent the formation of galaxies much higher in surface brightness than observed, one must invoke a stability criterion (Dalcanton, Spergel, & Summers, 1997; McGaugh and de Blok, 1998a) that precludes the existence of very high surface brightness disks. While it is physically quite reasonable that such a criterion should exist (Ostriker and Peebles, 1973), the observed surface density threshold does not emerge naturally, and must be inserted by hand. It is an auxiliary hypothesis invoked to preserve SH. Once done, size variations and the trend of average size with mass work out in reasonable quantitative detail (e.g., Mo et al., 1998).
Angular momentum conservation must hold for an isolated galaxy, but the assumption made in SH is stronger: baryons conserve their share of the angular momentum independently of the dark matter. It is considered a virtue that this simple assumption leads to disk sizes that are about right. However, this assumption is not well justified. Baryons and dark matter are free to exchange angular momentum with each other, and are seen to do so in simulations that track both components (e.g., Book et al., 2011; Combes, 2013; Klypin et al., 2002). There is no guarantee that this exchange is equitable, and in general it is not: as baryons collapse to form a small galaxy within a large dark matter halo, they tend to lose angular momentum to the dark matter. This is a one-way street that runs in the wrong direction: the final destination is uncomfortably invisible, with most of the angular momentum sequestered in the unobservable dark matter. Worse still, if we impose rigorous angular momentum conservation among the baryons, the result is a disk with a completely unrealistic surface density profile (van den Bosch, 2001a). It then becomes necessary to pick and choose which baryons manage to assemble into the disk and which are expelled or otherwise excluded, thereby solving one problem by creating another.
Early work on LSB disk galaxies led to a rather different picture. Compared to the previously known population of HSB galaxies around which our theories had been built, the LSB galaxy population has a younger mean stellar age (de Blok & van der Hulst, 1998; McGaugh and Bothun, 1994), a lower content of heavy elements (McGaugh, 1994), and a systematically higher gas fraction (McGaugh and de Blok, 1997; Schombert et al., 1997). These properties suggested that LSB galaxies evolve more gradually than their higher surface brightness brethren: they convert their gas into stars over a much longer timescale (McGaugh et al., 2017). The obvious culprit for this difference is surface density: lower surface brightness galaxies have less gravity, hence less ability to gather their diffuse interstellar medium into dense clumps that could form stars (Gerritsen and de Blok, 1999; Mihos et al., 1999). It seemed reasonable to ascribe the low surface density of the baryons to a correspondingly low density of their parent dark matter halos.
One way to think about a region in the early universe that will eventually collapse to form a galaxy is as a so-called top-hat over-density. The mass density Ωm → 1 at early times, irrespective of its current value, so a spherical region (the top-hat) that is somewhat over-dense early on may locally exceed the critical density. We may then consider this finite region as its own little closed universe, and follow its evolution with the Friedmann equations with Ω > 1. The top-hat will initially expand along with the rest of the universe, but will eventually reach a maximum radius and recollapse. When that happens depends on the density. The greater the over-density, the sooner the top-hat will recollapse. Conversely, a lesser over-density will take longer to reach maximum expansion before recollapsing.
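The timing intuition can be made quantitative with linear theory: during matter domination the overdensity grows as δ ∝ a, and a top-hat collapses when its linearly extrapolated δ reaches ≈ 1.686. A minimal sketch, with made-up initial overdensities:

```python
delta_c = 1.686   # linear-theory collapse threshold for a top-hat
a_i = 1e-3        # scale factor at the starting epoch (illustrative)

for delta_i in (5e-3, 3e-3, 2e-3):
    a_col = a_i * delta_c / delta_i   # delta grows in proportion to a
    z_col = 1.0 / a_col - 1.0
    print(f"delta_i = {delta_i:.0e} -> collapses at z ~ {z_col:.1f}")
# delta_i = 5e-03 -> collapses at z ~ 2.0
# delta_i = 3e-03 -> collapses at z ~ 0.8
# delta_i = 2e-03 -> collapses at z ~ 0.2
```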
Everything about LSB galaxies suggested that they were lower density, late-forming systems. It therefore seemed quite natural to imagine a distribution of over-densities and corresponding collapse times for top-hats of similar mass, and to associate LSB galaxies with the lesser over-densities (Dekel and Silk, 1986; McGaugh, 1992). More recently, some essential aspects of this idea have been revived under the moniker of “assembly bias” (e.g. Zehavi et al., 2018).
The work that informed the DD hypothesis was based largely on photometric and spectroscopic observations of LSB galaxies: their size and surface brightness, color, chemical abundance, and gas content. DD made two obvious predictions that had not yet been tested at that juncture. First, late-forming halos should reside preferentially in low density environments. This is a generic consequence of Gaussian initial conditions: big peaks defined on small (e.g., galaxy) scales are more likely to be found in big peaks defined on large (e.g., cluster) scales, and vice-versa. Second, the density of the dark matter halo of an LSB galaxy should be lower than that of an equal mass halo containing an HSB galaxy. This predicts a clear signature in their rotation speeds, which should be lower for lower density.
The prediction for the spatial distribution of LSB galaxies was tested by Bothun et al. (1993) and Mo et al. (1994). The test showed the expected effect: LSB galaxies were less strongly clustered than HSB galaxies. They are clustered: both galaxy populations follow the same large scale structure, but HSB galaxies adhere more strongly to it. In terms of the correlation function, the LSB sample available at the time had about half the amplitude r0 as comparison HSB samples (Mo et al., 1994). The effect was even more pronounced on the smallest scales (<2 Mpc: Bothun et al., 1993), leading Mo et al. (1994) to construct a model that successfully explained both small and large scale aspects of the spatial distribution of LSB galaxies simply by associating them with dark matter halos that lacked close interactions with other halos. This was strong corroboration of the DD hypothesis.
One way to test the prediction of DD that LSB galaxies should rotate more slowly than HSB galaxies was to use the Tully-Fisher relation (Tully and Fisher, 1977) as a point of reference. Originally identified as an empirical relation between optical luminosity and the observed line-width of single-dish 21 cm observations, more fundamentally it turns out to be a relation between the baryonic mass of a galaxy (stars plus gas) and its flat rotation speed: the Baryonic Tully-Fisher relation (BTFR: McGaugh et al., 2000). This relation is a simple power law of the form

Mb = AVf⁴ (equation 1)

where A is a constant of proportionality.
Aaronson et al. (1979) provided a straightforward interpretation for a relation of this form. A test particle orbiting a mass M at a distance R will have a circular speed V
V² = GM/R (equation 2)
where G is Newton’s constant. If we square this, a relation like the Tully-Fisher relation follows:
V⁴ = (GM/R)² ∝ MΣ (equation 3)
where we have introduced the surface mass density Σ = M/R². The Tully-Fisher relation M ∝ V⁴ is recovered if Σ is constant, exactly as expected from Freeman’s Law (Freeman, 1970).
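Equation (3) makes the size of the predicted shift easy to quantify:

```python
# At fixed mass, eq. (3) gives V ∝ Σ^(1/4), so each decade in surface
# density should shift the velocity by
shift = 10 ** 0.25
print(f"{shift:.2f}x")  # ≈ 1.78x, i.e. 0.25 dex in velocity
```

A quarter dex per decade of surface density is enormous compared to the observed tightness of the relation (see below).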
LSB galaxies, by definition, have central surface brightnesses (and corresponding stellar surface densities Σ0) that are less than the Freeman value. Consequently, DD predicts, through equation (3), that LSB galaxies should shift systematically off the Tully-Fisher relation: lower Σ means lower velocity. The predicted effect is not subtle (Fig. 4). For the range of surface brightness that had become available, the predicted shift should have stood out like the proverbial sore thumb. It did not (Hoffman et al., 1996; McGaugh and de Blok, 1998a; Sprayberry et al., 1995; Zwaan et al., 1995). This had an immediate impact on galaxy formation theory: compare Dalcanton et al. (1995, who predict a shift in Tully-Fisher with surface brightness) with Dalcanton et al. (1997b, who do not).
Fig. 4. The Baryonic Tully-Fisher relation and residuals. The top panel shows the flat rotation velocity of galaxies in the SPARC database (Lelli et al., 2016a) as a function of the baryonic mass (stars plus gas). The sample is restricted to those objects for which both quantities are measured to better than 20% accuracy. The bottom panel shows velocity residuals around the solid line in the top panel as a function of the central surface density of the stellar disks. Variations in the stellar surface density predict variations in velocity along the dashed line. These would translate to shifts illustrated by the dotted lines in the top panel, with each dotted line representing a shift of a factor of ten in surface density. The predicted dependence on surface density is not observed (Courteau & Rix, 1999; McGaugh and de Blok, 1998a; Sprayberry et al., 1995; Zwaan et al., 1995).
Instead of the systematic variation of velocity with surface brightness expected at fixed mass, there was none. Indeed, there is no hint of a second parameter dependence. The relation is incredibly tight by the standards of extragalactic astronomy (Lelli et al., 2016b): baryonic mass and the flat rotation speed are practically interchangeable.
The above derivation is overly simplistic. The radius at which we should make a measurement is ill-defined, and the surface density is dynamical: it includes both stars and dark matter. Moreover, galaxies are not spherical cows: one needs to solve the Poisson equation for the observed disk geometry of LTGs, and account for the varying radial contributions of luminous and dark matter. While this can be made to sound intimidating, the numerical computations are straightforward and rigorous (e.g., Begeman et al., 1991; Casertano & Shostak, 1980; Lelli et al., 2016a). It still boils down to the same sort of relation (modulo geometrical factors of order unity), but with two mass distributions: one for the baryons Mb(R), and one for the dark matter MDM(R). Though the dark matter is more massive, it is also more extended. Consequently, both components can contribute non-negligibly to the rotation over the observed range of radii:
V2(R) = GM(R)/R = G[Mb(R) + MDM(R)]/R (equation 4)
where for clarity we have omitted* geometrical factors. The only absolute requirement is that the baryonic contribution should begin to decline once the majority of baryonic mass is encompassed. It is when rotation curves persist in remaining flat past this point that we infer the need for dark matter.
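As a concrete illustration of the balancing act in equation (4), here is a toy decomposition in the spherical simplification flagged by the asterisk footnote. The profiles and every parameter value are assumptions I picked for illustration, not a fit to any real galaxy:

```python
import numpy as np

# Toy version of eq. (4) in the spherical approximation:
# V^2(R) = G[Mb(R) + MDM(R)]/R. All numbers are illustrative assumptions.

G = 4.301e-6  # Newton's constant in kpc (km/s)^2 / Msun

def M_disk(R, Mb=5e10, Rd=3.0):
    """Enclosed mass of an exponential disk (treated spherically)."""
    x = R / Rd
    return Mb * (1.0 - (1.0 + x) * np.exp(-x))

def M_halo(R, Vh=160.0, Rc=5.0):
    """Pseudo-isothermal halo: enclosed mass grows ~linearly at large R."""
    return (Vh**2 / G) * (R - Rc * np.arctan(R / Rc))

R = np.array([2.0, 5.0, 10.0, 20.0, 40.0])  # kpc
Vb = np.sqrt(G * M_disk(R) / R)    # declines once most baryons are enclosed
Vdm = np.sqrt(G * M_halo(R) / R)   # rises to meet it
Vtot = np.sqrt(Vb**2 + Vdm**2)
for r, v in zip(R, Vtot):
    print(f"R = {r:4.0f} kpc   Vtot = {v:5.1f} km/s")  # roughly flat beyond ~5 kpc
```

The total stays roughly flat over a factor of twenty in radius only because the halo parameters were chosen so that the rising dark component picks up exactly where the declining baryonic contribution leaves off. Change either component and the flatness is gone. That is the conspiracy.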
A recurrent problem in testing galaxy formation theories is that they seldom make ironclad predictions; I attempt a brief summary in Table 1. SH represents a broad class of theories with many variants. By construction, the dark matter halos of galaxies of similar stellar mass are similar. If we associate the flat rotation velocity with halo mass, then galaxies of the same mass have the same circular velocity, and the problem posed by Tully-Fisher is automatically satisfied.
Table 1. Predictions of DD and SH for LSB galaxies.

Observation                  DD    SH
Evolutionary rate            +     +
Size distribution            +     +
Clustering                   +     X
Tully-Fisher relation        X     ?
Central density relation     +     X
While it is common to associate the flat rotation speed with the dark matter halo, this is a half-truth: the observed velocity is a combination of baryonic and dark components (eq. (4)). It is thus a rather curious coincidence that rotation curves are as flat as they are: the Keplerian decline of the baryonic contribution must be precisely balanced by an increasing contribution from the dark matter halo. This fine-tuning problem was dubbed the “disk-halo conspiracy” (Bahcall & Casertano, 1985; van Albada & Sancisi, 1986). The solution offered for the disk-halo conspiracy was that the formation of the baryonic disk has an effect on the distribution of the dark matter. As the disk settles, the dark matter halo responds through a process commonly referred to as adiabatic compression that brings the peak velocities of disk and dark components into alignment (Blumenthal et al., 1986). Some rearrangement of the dark matter halo in response to the change of the gravitational potential caused by the settling of the disk is inevitable, so this seemed a plausible explanation.
The observation that LSB galaxies obey the Tully-Fisher relation greatly compounds the fine-tuning (McGaugh and de Blok, 1998a; Zwaan et al., 1995). The amount of adiabatic compression depends on the surface density of stars (Sellwood and McGaugh, 2005b): HSB galaxies experience greater compression than LSB galaxies. This should enhance the predicted shift between the two in Tully-Fisher. Instead, the amplitude of the flat rotation speed remains unperturbed.
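For concreteness, this is what the adiabatic compression recipe amounts to. Below is a minimal sketch of the Blumenthal et al. (1986) prescription, assuming an NFW initial halo and an exponential disk; all profiles and numbers are illustrative assumptions, not a model of any particular galaxy:

```python
import numpy as np
from scipy.optimize import brentq

# Sketch of the Blumenthal et al. (1986) adiabatic compression recipe.
# For circular orbits the invariant r*M(r) is conserved, so a dark matter
# shell initially at r_i settles to the radius r_f satisfying
#   r_f * [M_disk(r_f) + (1 - f_b) * M_i(r_i)] = r_i * M_i(r_i).

def M_nfw(r, Ms=1e12, rs=20.0):
    """Enclosed mass of an NFW halo (the assumed initial profile)."""
    x = r / rs
    return Ms * (np.log(1.0 + x) - x / (1.0 + x))

def M_disk(r, Md=5e10, Rd=3.0):
    """Enclosed mass of the settled baryonic disk (spherical approx.)."""
    x = r / Rd
    return Md * (1.0 - (1.0 + x) * np.exp(-x))

def compressed_radius(r_i, f_b=0.05):
    """Final radius of the dark matter shell initially at r_i (kpc)."""
    Mi = M_nfw(r_i)  # initial total mass inside r_i
    f = lambda r_f: r_f * (M_disk(r_f) + (1.0 - f_b) * Mi) - r_i * Mi
    return brentq(f, 1e-6, r_i)  # the disk adds mass, so shells move inward

for r_i in [2.0, 5.0, 10.0, 30.0]:
    print(f"r_i = {r_i:5.1f} kpc  ->  r_f = {compressed_radius(r_i):5.2f} kpc")
```

Since the settling disk adds mass inside each shell, every shell moves inward, and the compression is strongest where the disk dominates. That is why the amount of compression depends on the surface density of the stars.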
The generic failings of dark matter models were discussed at length by McGaugh and de Blok (1998a). The same problems have been encountered by others. For example, Fig. 5 shows model galaxies formed in a dark matter halo with identical total mass and density profile but with different spin parameters (van den Bosch, 2001b). Variations in the assembly and cooling history were also considered, but these make little difference and are not relevant here. The point is that smaller (larger) spin parameters lead to more (less) compact disks that contribute more (less) to the total rotation, exactly as anticipated from variations in the term Mb/R in equation (4). The nominal variation is readily detectable, and stands out prominently in the Tully-Fisher diagram (Fig. 5). This is exactly the same fine-tuning problem that was pointed out by Zwaan et al. (1995) and McGaugh and de Blok (1998a).
What I describe as a fine-tuning problem is not portrayed as such by van den Bosch (2000) and van den Bosch and Dalcanton (2000), who argued that the data could be readily accommodated in the dark matter picture. The difference is between accommodating the data once known, and predicting it a priori. The dark matter picture is extraordinarily flexible: one is free to distribute the dark matter as needed to fit any data that evinces a non-negative mass discrepancy, even data that are wrong (de Blok & McGaugh, 1998). It is another matter entirely to construct a realistic model a priori; in my experience it is quite easy to construct models with plausible-seeming parameters that bear little resemblance to real galaxies (e.g., the low-spin case in Fig. 5). A similar conundrum is encountered when constructing models that can explain the long tidal tails observed in merging and interacting galaxies: models with realistic rotation curves do not produce realistic tidal tails, and vice-versa (Dubinski et al., 1999). The data occupy a very narrow sliver of the enormous volume of parameter space available to dark matter models, a situation that seems rather contrived.
Fig. 5. Model galaxy rotation curves and the Tully-Fisher relation. Rotation curves (left panel) for model galaxies of the same mass but different spin parameters λ from van den Bosch (2001b, see his Fig. 3). Models with lower spin have more compact stellar disks that contribute more to the rotation curve (V2 = GM/R; R being smaller for the same M). These models are shown as square points on the Baryonic Tully-Fisher relation (right) along with data for real galaxies (grey circles: Lelli et al., 2016b) and a fit thereto (dashed line). Differences in the cooling history result in modest variation in the baryonic mass at fixed halo mass as reflected in the vertical scatter of the models. This is within the scatter of the data, but variation due to the spin parameter is not.
Both DD and SH predict residuals from Tully-Fisher that are not observed. I consider this to be an unrecoverable failure for DD, which was my hypothesis (McGaugh, 1992), so I worked hard to salvage it. I could not. For SH, Tully-Fisher might be recovered in the limit of dark matter domination, which requires further consideration.
I will save the further consideration for a future post, as that can take infinite words (there are literally thousands of ApJ papers on the subject). The real problem that rotation curve data pose generically for the dark matter interpretation is the fine-tuning required between baryonic and dark matter components – the balancing act explicit in the equations above. This, by itself, constitutes a practical falsification of the dark matter paradigm.
Without going into interesting but ultimately meaningless details (maybe next time), the only way to avoid this conclusion is to choose to be unconcerned with fine-tuning. If you choose to say fine-tuning isn’t a problem, then it isn’t a problem. Worse, many scientists don’t seem to understand that they’ve even made this choice: it is baked into their assumptions. There is no risk of questioning those assumptions if one never stops to think about them, much less worry that there might be something wrong with them.
Much of the field seems to have sunk into a form of scientific nihilism. The attitude I frequently encounter when I raise this issue boils down to “Don’t care! Everything will magically work out! LA LA LA!”
*Strictly speaking, eq. (4) only holds for spherical mass distributions. I make this simplification here to emphasize the fact that both mass and radius matter. This essential scaling persists for any geometry: the argument holds in complete generality.
Galaxies are gravitationally bound condensations of stars and gas in a mostly empty, expanding universe. The tens of billions of solar masses of baryonic material that comprise the stars and gas of the Milky Way now reside mostly within a radius of 20 kpc. At the average density of the universe, the equivalent mass fills a spherical volume with a comoving radius a bit in excess of 1 Mpc. This is a large factor by which a protogalaxy must collapse, starting from the very smooth (~ 1 part in 105) initial condition at z = 1090 observed in the CMB (Planck Collaboration et al., 2018). Dark matter — in particular, non-baryonic cold dark matter — plays an essential role in speeding this process along.
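The collapse factor quoted above is easy to verify with round numbers. Here is a quick back-of-the-envelope sketch; every value in it is an approximate, illustrative assumption:

```python
import math

# Back-of-the-envelope check of the collapse factor quoted above.
Msun = 1.989e30                 # kg
Mpc = 3.086e22                  # m

M_baryons = 6e10 * Msun         # baryonic mass of the Milky Way (assumed)
rho_crit = 9.2e-27              # kg/m^3, critical density today (h ~ 0.7)
rho_b = 0.05 * rho_crit         # mean baryon density (Omega_b ~ 0.05)

V = M_baryons / rho_b                          # comoving volume holding that mass
R = (3.0 * V / (4.0 * math.pi)) ** (1.0 / 3.0)
print(R / Mpc)                                 # ~1.3 Mpc comoving
print((R / Mpc) / 0.02)                        # vs. 20 kpc today: ~65x in radius
```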
The mass-energy of the early universe is initially dominated by the radiation field. The baryons are held in thrall to the photons until the expansion of the universe turns the tables and matter becomes dominant. Exactly when this happens depends on the mass density (Peebles, 1980); for our purposes it suffices to realize that the baryonic components of galaxies can not begin to form until well after the time of the CMB. However, since CDM does not interact with photons, it is not subject to this limitation. The dark matter can begin to form structures — dark matter halos — that form the scaffolding of future structure. Essential to the ΛCDM galaxy formation paradigm is that the dark matter halos form first, seeding the subsequent formation of luminous galaxies by providing the potential wells into which baryons can condense once free from the radiation field.
The theoretical expectation for how dark matter halos form is well understood at this juncture. Numerical simulations of cold dark matter — mass that interacts only through gravity in an expanding universe — show that quasi-spherical dark matter halos form with a characteristic ‘NFW’ (e.g., Navarro et al., 1997) density profile. These have a ‘cuspy’ inner density profile in which the density of dark matter increases towards the center approximately as a power law, ρ(r → 0) ~ r−1. At larger radii, the density profile falls off as ρ(r → ∞) ~ r−3. The centers of these halos are the density peaks around which galaxies can form.
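For reference, the NFW profile and its two limiting slopes can be checked numerically. A minimal sketch, with arbitrary placeholder scale values:

```python
import numpy as np

# Numerical check of the NFW profile's limiting logarithmic slopes.

def rho_nfw(r, rho0=1.0, rs=20.0):
    """NFW density profile (Navarro et al., 1997)."""
    x = r / rs
    return rho0 / (x * (1.0 + x) ** 2)

r = np.logspace(-2, 3, 501) * 20.0                    # 0.01 to 1000 scale radii
slope = np.gradient(np.log(rho_nfw(r)), np.log(r))    # d ln(rho) / d ln(r)
print(slope[0], slope[-1])   # ~ -1 (inner cusp) and ~ -3 (outer fall-off)
```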
The galaxies that we observe are composed of stars and gas: normal baryonic matter. The theoretical expectation for how baryons behave during galaxy formation is not well understood (Scannapieco et al., 2012). This results in a tremendous and long-standing disconnect between theory and observation. We can, however, stipulate a few requirements as to what needs to happen. Dark matter halos must form first; the baryons fall into these halos afterwards. Dark matter halos are observed to extend well beyond the outer edges of visible galaxies, so baryons must condense to the centers of dark matter halos. This condensation may proceed through both the hierarchical merging of protogalactic fragments (a process that has a proclivity to form ETGs) and the more gentle accretion of gas into rotating disks (a requirement to form LTGs). In either case, some fraction of the baryons form the observed, luminous component of a galaxy at the center of a CDM halo. This condensation of baryons necessarily affects the dark matter gravitationally, with the net effect of dragging some of it towards the center (Blumenthal et al., 1986; Dubinski, 1994; Gnedin et al., 2004; Sellwood and McGaugh, 2005a), thus compressing the dark matter halo from its initial condition as indicated by dark matter-only simulations like those of Navarro et al. (1997). These processes must all occur, but do not by themselves suffice to explain real galaxies.
Like last time, this is a minimalist outline of the basics that are relevant to our discussion. A proper history of this field would be much longer. Indeed, I rather doubt it would be possible to write a coherent text on the subject, which means different things to different scientists.
Entering the 1980s, options for galaxy formation were frequently portrayed as a dichotomy between monolithic galaxy formation (Eggen et al., 1962) and the merger of protogalactic fragments (Searle and Zinn, 1978). The basic idea of monolithic galaxy formation is that the initial ~ 1 Mpc cloud of gas that would form the Milky Way experienced dissipational collapse in one smooth, adiabatic process. This is effective at forming the disk, with only a tiny bit of star formation occurring during the collapse phase to provide the stars of the ancient, metal-poor stellar halo. In contrast, the Galaxy could have been built up by the merger of smaller protogalactic fragments, each with their own life as smaller galaxies prior to merging. The latter is more natural to the emergence of structure from the initial conditions observed in the CMB, where small lumps condense more readily than large ones. Indeed, this effectively forms the basis of the modern picture of hierarchical galaxy formation (Efstathiou et al., 1988).
Hierarchical galaxy formation is effective at forming bulges and pressure-supported ETGs, but is anathema to the formation of orderly disks. Dynamically cold disks are fragile and prefer to be left alone: the high rate of merging in the hierarchical ΛCDM model tends to destroy the dynamically cold state in which most spirals are observed to exist (Abadi et al., 2003; Peebles, 2020; Toth and Ostriker, 1992). Consequently, there have been some rather different ideas about galaxy formation: if one starts from the initial conditions imposed by the CMB, hierarchical galaxy formation is inevitable. If instead one works backwards from the observed state of galaxy disks, the smooth settling of gaseous disks in relatively isolated monoliths seems more plausible.
In addition to different theoretical notions, our picture of the galaxy population was woefully incomplete. An influential study by Freeman (1970) found that 28 of three dozen spirals shared very nearly the same central surface brightness. This was generalized into a belief that all spirals had the same (high) surface brightness, and came to be known as Freeman’s Law. Ultimately this proved to be a selection effect, as pointed out early on by Disney (1976) and Allen and Shu (1979). However, it was not until much later (McGaugh et al., 1995a) that this became widely recognized. In the meantime, the prevailing assumption was that Freeman’s Law held true (e.g., van der Kruit, 1987) and all spirals had practically the same surface brightness. In particular, it was the central surface brightness of the disk component of spiral galaxies that was thought to be universal, while bulges and ETGs varied in surface brightness. Variation in the disk component of LTGs was thought to be restricted to variations in size, which led to variations in luminosity at fixed surface brightness.
Consequently, most theoretical effort was concentrated on the bright objects in the high-mass (M∗ > 1010 M⊙) clump in Fig. 2. Some low mass dwarf galaxies were known to exist, but were considered to be insignificant because they contained little mass. Low surface brightness galaxies violated Freeman’s Law, so were widely presumed not to exist, or to be at most a rare curiosity (Bosma & Freeman, 1993). A happy consequence of this unfortunate state of affairs was that as observations of diffuse LSB galaxies were made, they forced then-current ideas about galaxy formation into a regime that they had not anticipated, and which many could not accommodate.
The similarity and difference between high surface brightness (HSB) and LSB galaxies is illustrated by Fig. 3. Both are rotationally supported, late type disk galaxies. Both show spiral structure, though it is more prominent in the HSB. More importantly, both systems are of comparable linear diameter. They exist roughly at opposite ends of a horizontal line in Fig. 2. Their differing stellar masses stem from the surface density of their stars rather than their linear extent — exactly the opposite of what had been inferred from Freeman’s Law. Any model of galaxy formation and evolution must account for the distribution of size (or surface brightness) at a given mass as well as the number density of galaxies as a function of mass. Both aspects of the galaxy population remain problematic to this day.
Fig. 3. High and low surface brightness galaxies. NGC 7757 (left) and UGC 1230 (right) are examples of high and low surface brightness galaxies, respectively. These galaxies are about the same distance away and span roughly the same physical diameter. The chief difference is in the surface brightness, which follows from the separation between stars (McGaugh et al., 1995b). Note that the intensity scale of these images is not identical; the contrast has been increased for the LSB galaxy so that it appears as more than a smudge.
Throughout my thesis work, my spouse joked that my LSB galaxy images looked like bug splats on the telescope. You can see more of them here. And a few more here. And lots more on Jim Schombert’s web pages, here and here and here.
When we look up at the sky, we see stars. Stars are the building blocks of galaxies; we can see the stellar disk of the galaxy in which we live as the vault of the Milky Way arching across the sky. When we look beyond the Milky Way, we see galaxies. Just as stars are the building blocks of galaxies, galaxies are the building blocks of the universe. One can no more hope to understand cosmology without understanding galaxies than one can hope to understand galaxies without understanding stars.
Here I give a very brief primer on basic galaxy properties. This is a subject on which entire textbooks are written, so what I say here is necessarily very incomplete. It is a bare minimum to go on for the ensuing discussion.
Galaxy Properties
Cosmology entered the modern era when Hubble (1929) resolved the debate over the nature of spiral nebulae by measuring the distance to Andromeda, establishing that vast stellar systems — galaxies — exist external to and coequal with the Milky Way. Galaxies are the primary type of object observed when we look beyond the confines of our own Milky Way: they are the building blocks of the universe. Consequently, galaxies and cosmology are intertwined: it is impossible to understand one without the other.
Here I sketch a few essential facts about the properties of galaxies. This is far from a comprehensive list (see, for example, Binney & Tremaine, 1987) and serves only to provide a minimum framework for the subsequent discussion. The properties of galaxies are often cast in terms of morphological type, starting with Hubble’s tuning fork diagram. The primary distinction is between Early Type Galaxies (ETGs) and Late Type Galaxies (LTGs), which is a matter of basic structure. ETGs, also known as elliptical galaxies, are three dimensional, ellipsoidal systems that are pressure supported: there is more kinetic energy in random motions than in circular motions, a condition described as dynamically hot. The orbits of stars are generally eccentric and oriented randomly with respect to one another, filling out the ellipsoidal shape seen in projection on the sky. LTGs, including spiral and irregular galaxies, are thin, quasi-two dimensional, rotationally supported disks. The majority of their stars orbit in the same plane in the same direction on low eccentricity orbits. The lion’s share of kinetic energy is invested in circular motion, with only small random motions, a condition described as dynamically cold. Examples of early and late type galaxies are shown in Fig. 1.
Fig. 1. Galaxy morphology. These examples show an early type elliptical galaxy (NGC 3379, left), and two late type disk galaxies: a face-on spiral (NGC 628, top right), and an edge-on disk galaxy (NGC 891, bottom right). Elliptical galaxies are quasi-spherical, pressure supported stellar systems that tend to have predominantly old stellar populations, usually lacking young stars or much in the way of the cold interstellar gas from which they might form. In contrast, late type galaxies (spirals and irregulars) are thin, rotationally supported disks. They typically contain a mix of stellar ages and cold interstellar gas from which new stars continue to form. Interstellar dust is also present, being most obvious in the edge-on case. Images from Palomar Observatory, Caltech.
Finer distinctions in morphology can be made within the broad classes of early and late type galaxies, but the basic structural and kinematic differences suffice here. The disordered motion of ETGs is a natural consequence of violent relaxation (Lynden-Bell, 1967) in which a stellar system reaches a state of dynamical equilibrium from a chaotic initial state. This can proceed relatively quickly from a number of conceivable initial conditions, and is a rather natural consequence of the hierarchical merging of sub-clumps expected from the Gaussian initial conditions indicated by observations of the CMB (White, 1996). In contrast, the orderly rotation of dynamically cold LTGs requires a gentle settling of gas into a rotationally supported disk. It is essential that disk formation occur in the gaseous phase, as gas can dissipate and settle to the preferred plane specified by the net angular momentum of the system. Once stars form, their orbits retain a memory of their initial state for a period typically much greater than the age of the universe (Binney & Tremaine, 1987). Consequently, the bulk of the stars in the spiral disk must have formed there after the gas settled.
In addition to the dichotomy in structure, ETGs and LTGs also differ in their evolutionary history. ETGs tend to be ‘red and dead,’ which is to say, dominated by old stars. They typically lack much in the way of recent star formation, and are often devoid of the cold interstellar gas from which new stars can form. Most of their star formation happened in the early universe, and may have involved the merger of multiple protogalactic fragments. Irrespective of these details, massive ETGs appeared early in the universe (Steinhardt et al., 2016), and for the most part seem to have evolved passively since (Franck and McGaugh, 2017).
Again in contrast, LTGs have on-going star formation in interstellar media replete with cold atomic and molecular gas. They exhibit a wide range in stellar ages, from newly formed stars to ancient stars dating to near the beginning of time. Old stars seem to be omnipresent, famously occupying globular clusters but also present in the general disk population. This implies that the gaseous disk settled fairly early, though accretion may continue over a long timescale (van den Bergh, 1962; Henry and Worthey, 1999). Old stars persist in the same orbital plane as young stars (Binney & Merrifield, 1998), which precludes much subsequent merger activity, as the chaos of merging distorts orbits. Disks can be over-heated (Toth and Ostriker, 1992) and transformed by interactions between galaxies (Toomre and Toomre, 1972), even turning into elliptical galaxies during major mergers (Barnes & Hernquist, 1992).
Aside from its morphology, an obvious property of a galaxy is its mass. Galaxies exist over a large range of mass, with a type-dependent characteristic stellar mass of 5 × 1010 M⊙ for disk dominated systems (the Milky Way is very close to this mass: Bland-Hawthorn & Gerhard, 2016) and 1011 M⊙ for elliptical galaxies (Moffett et al., 2016). Above this characteristic mass, the number density of galaxies declines sharply, though individual galaxies exceeding a few × 1011 M⊙ certainly exist. The number density of galaxies increases gradually to lower masses, with no known minimum. The gradual increase in numbers does not compensate for the decrease in mass: integrating over the distribution, one finds that most of the stellar mass is in bright galaxies close to the characteristic mass.
Galaxies have a characteristic size and surface brightness. The same amount of stellar mass can be concentrated in a high surface brightness (HSB) galaxy, or spread over a much larger area in a low surface brightness (LSB) galaxy. For the purposes of this discussion, it suffices to assume that the observed luminosity is proportional to the mass of stars that produces the light. Similarly, the surface brightness measures the surface density of stars. Of the three observable quantities of luminosity, size, and surface brightness, only two are independent: the luminosity is the product of the surface brightness and the area over which it extends. The area scales as the square of the linear size.
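The bookkeeping is simple enough to state in a few lines of Python. This toy sketch, with made-up numbers, shows how the same luminosity spreads into very different sizes as the surface brightness drops:

```python
import math

# Toy illustration: luminosity = surface brightness x area, so only two
# of the three quantities are independent. Numbers are illustrative only.

def half_light_radius(L, Sigma):
    """Radius (kpc) enclosing luminosity L (Lsun) at mean surface
    brightness Sigma (Lsun per square kpc)."""
    return math.sqrt(L / (math.pi * Sigma))

L = 1e10  # the same luminosity for both galaxies
print(half_light_radius(L, Sigma=1e8))  # HSB: ~5.6 kpc
print(half_light_radius(L, Sigma=1e7))  # LSB: ~17.8 kpc, sqrt(10)x larger
```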
The distribution of size and mass of galaxies is shown in Fig. 2. This figure spans the range from tiny dwarf irregular galaxies containing ‘only’ a few hundred thousand stars to giant spirals composed of hundreds of billions of stars with half-light radii ranging from hundreds of parsecs to tens of kpc. The upper boundaries represent real, physical limits on the sizes and masses of galaxies. Bright objects are easy to see; if still higher mass galaxies were common, they would be readily detected and cataloged. In contrast, the lower boundaries are set by the limits of observational sensitivity (“selection effects”): galaxies that are physically small or low in surface brightness are difficult to detect and are systematically under-represented in galaxy catalogs (Allen & Shu, 1979; Disney, 1976; McGaugh et al., 1995a).
Fig. 2. Galaxy size and mass. The radius that contains half of the light is plotted against the stellar mass. Galaxies exist over many decades in mass, and exhibit a considerable variation in size at a given mass. Early and late type galaxies are demarcated with different symbols, as noted. Lines illustrate tracks of constant stellar surface density. The data for ETGs are from the compilation of Dabringhausen and Fellhauer (2016) augmented by dwarf Spheroidal (dSph) galaxies in the Local Group compiled by Lelli et al. (2017). Ultra-diffuse galaxies (UDGs: van Dokkum et al., 2015; Mihos et al., 2015, × and +, respectively) have unsettled kinematic classifications at present, but most seem likely to be pressure supported ETGs. The bulk of the data for LTGs is from the SPARC database (Lelli et al., 2016a), augmented by cases that are noteworthy for their extremity in mass or surface brightness (Brunker et al., 2019; Dalcanton, Spergel, Gunn, et al., 1997; de Blok et al., 1995; McGaugh and Bothun, 1994; Mihos et al., 2018; Rhode et al., 2013; Schombert et al., 2011). The gas content of these star-forming systems adds a third axis, illustrated crudely here by whether an LTG is made more of stars or gas (filled and open symbols, respectively).
Individual galaxies can be early type or late type, high mass or low mass, large or small in linear extent, high or low surface brightness, gas poor or gas rich. No one of these properties is completely predictive of the others: the correlations that do exist tend to have lots of intrinsic scatter. The primary exception to this appears to involve the kinematics. Massive galaxies are fast rotators; low mass galaxies are slow rotators. This Tully-Fisher relation (Tully and Fisher, 1977) is one of the strongest correlations in extragalactic astronomy (Lelli et al., 2016b). It is thus necessary to simultaneously explain both the chaotic diversity of galaxy properties and the orderly nature of their kinematics (McGaugh et al., 2019).
Galaxies do not exist in isolation. Rather than being randomly distributed throughout the universe, they tend to cluster together: the best place to find a galaxy is in the proximity of another galaxy (Rubin, 1954). A common way to quantify the clustering of galaxies is the two-point correlation function ξ(r) (Peebles, 1980). This measures the excess probability of finding a galaxy within a distance r of a reference galaxy relative to a random distribution. The observed correlation function is well approximated as a power law whose slope and normalization vary with galaxy population. ETGs are more clustered than LTGs, having a longer correlation length: r0 ≈ 9 Mpc for red galaxies vs. ~ 5 Mpc for blue galaxies (Zehavi et al., 2011). Here we will find this quantity to be of interest for comparing the distribution of high and low surface brightness galaxies.
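To see what those correlation lengths mean in practice, here is a minimal sketch evaluating ξ(r) = (r/r0)^−γ. I adopt γ ≈ 1.8 as a typical observed slope; that value is an assumption on my part, since only the r0 values are quoted above:

```python
# Minimal sketch of the two-point correlation function quoted above.

def xi(r, r0, gamma=1.8):
    """Excess probability of finding a neighbor at separation r (Mpc)."""
    return (r / r0) ** (-gamma)

r = 1.0  # Mpc
print(xi(r, r0=9.0))  # red/ETG sample:  ~52x over random
print(xi(r, r0=5.0))  # blue/LTG sample: ~18x over random
# Halving r0, as in the early LSB samples, cuts the amplitude by 2^1.8 ~ 3.5.
```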
Galaxies are sometimes called island universes. That is partly a hangover from pre-Hubble times during which it was widely believed that the Milky Way contained everything: it was one giant island universe embedded in an indefinite but otherwise empty void. We know that’s not true now – there are lots of stellar systems of similar size to the Milky Way – but they often seem to stand alone even if they are clustered in non-random ways.
NGC 7757 is a high surface brightness spiral. It is easy to spot amongst the foreground stars of the Milky Way. In contrast, there are strong selection effects against low surface brightness galaxies, like UGC 1230 (shown in Fig. 3).
The LSB galaxy is rather harder to spot. Even when noticed, it doesn’t seem as important as the HSB galaxy. This, in a nutshell, is the history of selection effects in galaxy surveys, which are inevitably biased towards the biggest and the brightest. Advances in detectors (especially the CCD revolution of the 1980s) helped open our eyes to the existence of these LSB galaxies, and allowed us to measure their physical properties. Doing so provided a stringent test of galaxy formation theories, which have scrambled to catch up ever since.
In order to agree on an interpretation, we first have to agree on the facts. Even when we agree on the facts, the available set of facts may admit multiple interpretations. This was an obvious and widely accepted truth early in my career*. Since then, the field has decayed into a haphazardly conceived set of unquestionable absolutes, based on a large but well-curated subset of facts that gratuitously ignores whatever facts prove inconvenient.
Sadly, we seem to have entered a post-truth period in which facts are drowned out by propaganda. I went into science to get away from people who place faith before facts, and comfortable fictions ahead of uncomfortable truths. Unfortunately, a lot of those people seem to have followed me here. This manifests as people who quote what are essentially pro-dark matter talking points at me like I don’t understand LCDM, when all it really does is reveal that they are posers** who picked up on some common myths about the field without actually reading the relevant journal articles.
Indeed, a recent experience taught me a new psychology term: identity protective cognition. Identity protective cognition is the tendency for people in a group to selectively credit or dismiss evidence in patterns that reflect the beliefs that predominate in their group. When it comes to dark matter, the group happens to be a scientific one, but the psychology is the same: I’ve seen people twist themselves into logical knots to protect their belief in dark matter from being subject to critical examination. They do it without even recognizing that this is what they’re doing. I guess this is a human foible we cannot escape.
I’ve addressed these issues before, but here I’m going to start a series of posts on what I think some of the essential but underappreciated facts are. This is based on a talk that I gave at a conference on the philosophy of science in 2019, back when we had conferences, and published in Studies in History and Philosophy of Science. I paid the exorbitant open access fee (the journal changed its name – and publication policy – during the publication process), so you can read the whole thing all at once if you are eager. I’ve already written it to be accessible, so mostly I’m going to post it here in what I hope are digestible chunks, and may add further commentary if it seems appropriate.
Cosmic context
Cosmology is the science of the origin and evolution of the universe: the biggest of big pictures. The modern picture of the hot big bang is underpinned by three empirical pillars: an expanding universe (Hubble expansion), Big Bang Nucleosynthesis (BBN: the formation of the light elements through nuclear reactions in the early universe), and the relic radiation field (the Cosmic Microwave Background: CMB) (Harrison, 2000; Peebles, 1993). The discussion here will take this framework for granted.
The three empirical pillars fit beautifully with General Relativity (GR). Making the simplifying assumptions of homogeneity and isotropy, Einstein’s equations can be applied to treat the entire universe as a dynamical entity. As such, it is compelled either to expand or contract. Running the observed expansion backwards in time, one necessarily comes to a hot, dense, early phase. This naturally explains the CMB, which marks the transition from an opaque plasma to a transparent gas (Sunyaev and Zeldovich, 1980; Weiss, 1980). The abundances of the light elements can be explained in detail with BBN provided the universe expands in the first few minutes as predicted by GR when radiation dominates the mass-energy budget of the universe (Boesgaard & Steigman, 1985).
The marvelous consistency of these early universe results with the expectations of GR builds confidence that the hot big bang is the correct general picture for cosmology. It also builds overconfidence that GR is completely sufficient to describe the universe. Maintaining consistency with modern cosmological data is only possible with the addition of two auxiliary hypotheses: dark matter and dark energy. These invisible entities are an absolute requirement of the current version of the most-favored cosmological model, ΛCDM. The very name of this model is born of these dark materials: Λ is Einstein’s cosmological constant, of which ‘dark energy’ is a generalization, and CDM is cold dark matter.
Dark energy plays little direct role in galaxy formation. Dark matter, on the other hand, plays an intimate and essential role. The term ‘dark matter’ is dangerously crude, as it can reasonably be used to mean anything that is not seen. In the cosmic context, there are at least two forms of unseen mass: normal matter that happens not to glow in a way that is easily seen — not all ordinary material need be associated with visible stars — and non-baryonic cold dark matter. It is the latter form of unseen mass that is thought to dominate the mass budget of the universe and play a critical role in galaxy formation.
Cold Dark Matter
Cold dark matter is some form of slow moving, non-relativistic (‘cold’) particulate mass that is not composed of normal matter (baryons). Baryons are the family of particles that include protons and neutrons. As such, they compose the bulk of the mass of normal matter, and it has become conventional to use this term to distinguish between normal, baryonic matter and the non-baryonic dark matter.
The distinction between baryonic and non-baryonic dark matter is no small thing. Non-baryonic dark matter must be a new particle that resides in a new ‘dark sector’ that is completely distinct from the usual stable of elementary particles. We do not just need some new particle, we need one (or many) that reside in some sector beyond the framework of the stubbornly successful Standard Model of particle physics. Whatever the solution to the mass discrepancy problem turns out to be, it requires new physics.
The cosmic dark matter must be non-baryonic for two basic reasons. First, the mass density of the universe measured gravitationally (Ωm ≈ 0.3, e.g., Faber and Gallagher, 1979; Davis et al., 1980, 1992) clearly exceeds the mass density in baryons as constrained by BBN (Ωb ≈ 0.05, e.g., Walker et al., 1991). There is something gravitating that is not ordinary matter: Ωm > Ωb.
The second reason follows from the absence of large fluctuations in the CMB (Peebles and Yu, 1970; Silk, 1968; Sunyaev and Zeldovich, 1980). The CMB is extraordinarily uniform in temperature across the sky, varying by only ~ 1 part in 105 (Smoot et al., 1992). These small temperature variations correspond to variations in density. Gravity is an attractive force; it will make the rich grow richer. Small density excesses will tend to attract more mass, making them larger, attracting more mass, and leading to the formation of large scale structures, including galaxies. But gravity is also a weak force: this process takes a long time. In the long but finite age of the universe, gravity plus known baryonic matter does not suffice to go from the initially smooth, highly uniform state of the early universe to the highly clumpy, structured state of the local universe (Peebles, 1993). The solution is to boost the process with an additional component of mass — the cold dark matter — that gravitates without interacting with the photons, thus getting a head start on the growth of structure while not aggravating the amplitude of temperature fluctuations in the CMB.
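The arithmetic behind this argument is worth making explicit. In a matter-dominated universe, linear perturbations grow roughly in proportion to the scale factor, so a baryon-only universe starting from the CMB amplitude falls short by about two orders of magnitude. A minimal sketch with rounded numbers:

```python
import math

# Rounded-numbers version of the argument above: in a matter-dominated
# universe, linear perturbations grow roughly in proportion to the scale
# factor (Peebles, 1980). Baryons can only start growing after the CMB.

z_cmb = 1090
delta_cmb = 1e-5                    # fractional overdensity at the CMB

growth = 1 + z_cmb                  # growth factor from z = 1090 to z = 0
delta_now = delta_cmb * growth
print(delta_now)                    # ~0.01: still linear -- no galaxies
print(math.log10(1.0 / delta_now))  # short by ~2 dex without a head start
```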
Taken separately, one might argue away the need for dark matter. Taken together, these two distinct arguments convinced nearly everyone, including myself, of the absolute need for non-baryonic dark matter. Consequently, CDM became established as the leading paradigm during the 1980s (Peebles, 1984; Steigman and Turner, 1985). The paradigm has snowballed since that time, the common attitude among cosmologists being that CDM has to exist.
From an astronomical perspective, the CDM could be any slow-moving, massive object that does not interact with photons nor participate in BBN. The range of possibilities is at once limitless yet highly constrained. Neutrons would suffice if they were stable in vacuum, but they are not. Primordial black holes are a logical possibility, but if made of normal matter, they must somehow form in the first second after the Big Bang to not impair BBN. At this juncture, microlensing experiments have excluded most plausible mass ranges that primordial black holes could occupy (Mediavilla et al., 2017). It is easy to invent hypothetical dark matter candidates, but difficult for them to remain viable.
From a particle physics perspective, the favored candidate is a Weakly Interacting Massive Particle (WIMP: Peebles, 1984; Steigman and Turner, 1985). WIMPs are expected to be the lightest stable supersymmetric partner particle that resides in the hypothetical supersymmetric sector (Martin, 1998). The WIMP has been the odds-on favorite for so long that it is often used synonymously with the more generic term ‘dark matter.’ It is the hypothesized particle that launched a thousand experiments. Experimental searches for WIMPs have matured over the past several decades, making extraordinary progress in not detecting dark matter (Aprile et al., 2018). Virtually all of the parameter space in which WIMPs had been predicted to reside (Trotta et al., 2008) is now excluded. Worse, the existence of the supersymmetric sector itself, once seemingly a sure thing, remains entirely hypothetical, and appears at this juncture to be a beautiful idea that nature declined to implement.
In sum, we must have cold dark matter for both galaxies and cosmology, but we have as yet no clue to what it is.
* There is a trope that late in their careers, great scientists come to the opinion that everything worth discovering has been discovered, because they themselves already did everything worth doing. That is not a concern I have – I know we haven’t discovered all there is to discover. Yet I see no prospect for advancing our fundamental understanding simply because there aren’t enough of us pulling in the right direction. Most of the community is busy barking up the wrong tree, and refuses to be distracted from their focus on the invisible squirrel that isn’t there.
** Many of these people are the product of the toxic culture that Simon White warned us about. They wave the sausage of galaxy formation and feedback like a magic wand that excuses all faults while being proudly ignorant of how the sausage was made. Bitch, please. I was there when that sausage was made. I helped make the damn sausage. I know what went into it, and I recognize when it tastes wrong.