What JWST will see

Big galaxies at high redshift!

That’s my prediction, anyway. A little context first.

New Year, New Telescope

First, JWST finally launched. This has been a long-delayed NASA mission; the launch had been put off so many times it felt like a living example of Zeno’s paradox: ever closer but never quite there. A successful launch is always a relief – rockets do sometimes blow up on lift off – but there is still sweating to be done: it has one of the most complex deployments of any space mission. This is still a work in progress, but to start the new year, I thought it would be nice to look forward to what we hope to see.

JWST is a major space telescope optimized for observing in the near and mid-infrared. This enables observation of redshifted light from the earliest galaxies. This should enable us to see them as they would appear to our eyes had we been around at the time. And that time is long, long ago, in galaxies very far away: in principle, we should be able to see the first galaxies in their infancy, 13+ billion years ago. So what should we expect to see?

Early galaxies in LCDM

A theory is only as good as its prior. In LCDM, structure forms hierarchically: small objects emerge first, then merge into larger ones. It takes time to build up large galaxies like the Milky Way; the common estimate early on was that it would take at least a billion years to assemble an L* galaxy, and it could easily take longer. Ach, terminology: L* is the characteristic luminosity of the Schechter function we commonly use to describe the number density of galaxies of various luminosities. L* galaxies like the Milky Way are common, but the number of brighter galaxies falls precipitously. Bigger galaxies exist, but they are rare above this characteristic brightness, so an L* galaxy is shorthand for a galaxy of typical brightness.
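
For concreteness, the Schechter function takes the form Φ(L) dL = Φ* (L/L*)^α exp(−L/L*) d(L/L*). Here is a minimal sketch of how steeply the numbers fall off above L*; the parameter values are illustrative, not taken from any particular survey:

```python
import numpy as np

def schechter(L_over_Lstar, phi_star=1.0, alpha=-1.2):
    """Schechter luminosity function, Phi(L) in units of phi_star.
    alpha sets the faint-end slope; the exponential cuts off above L*."""
    x = L_over_Lstar
    return phi_star * x**alpha * np.exp(-x)

# Relative number densities: galaxies much brighter than L* are rare.
for x in [0.1, 1.0, 2.0, 5.0, 10.0]:
    print(f"L = {x:4.1f} L*:  Phi = {schechter(x):.2e} (in units of Phi*)")
```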

We expect galaxies to start small and slowly build up in size. This is a very basic prediction of LCDM. The hierarchical growth of dark matter halos is fundamental, and relatively easy to calculate. How this translates to the visible parts of galaxies is more fraught, depending on the details of baryonic infall, star formation, and the many kinds of feedback. [While I am a frequent critic of model feedback schemes implemented in hydrodynamic simulations on galactic scales, there is no doubt that feedback happens on the much smaller scales of individual stars and their nurseries. These are two very different things for which we confusingly use the same word since the former is the aspirational result of the latter.] That said, one only expects to assemble mass so fast, so the natural expectation is to see small galaxies first, with larger galaxies emerging slowly as their host dark matter halos merge together.

Here is an example of a model formation history that results in the brightest galaxy in a cluster (from De Lucia & Blaizot 2007). Little things merge to form bigger things (hence “hierarchical”). This happens a lot, and it isn’t really clear when you would say the main galaxy had formed. The final product (at lookback time zero, at redshift z=0) is a big galaxy composed of old stars – fairly typical for a giant elliptical. But the most massive progenitor is still rather small 8 billion years ago, over 4 billion years after the Big Bang. The final product doesn’t really emerge until the last major merger around 4 billion years ago. This is just one example in one model, and there are many different models, so your mileage will vary. But you get the idea: it takes a long time and a lot of mergers to assemble a big galaxy.

Brightest cluster galaxy merger tree. Time progresses upwards from early in the universe at bottom to the present day at top. Every line is a small galaxy that merges to ultimately form the larger galaxy. Symbols are color-coded by B−V color (red meaning old stars, blue young) and their area scales with the stellar mass (bigger circles being bigger galaxies). From De Lucia & Blaizot 2007.

It is important to note that in a hierarchical model, the age of a galaxy is not the same as the age of the stars that make up the galaxy. According to De Lucia & Blaizot, the stars of the brightest cluster galaxies

“are formed very early (50 per cent at z~5, 80 per cent at z~3)”

but do so

“in many small galaxies”

– i.e., the little progenitor circles in the plot above. The brightest cluster galaxies in their model build up rather slowly, such that

“half their final mass is typically locked-up in a single galaxy after z~0.5.”

De Lucia & Blaizot (2007)

So all the star formation happens early in the little things, but the final big thing emerges later – a lot later, only reaching half its current size when the universe is about 8 Gyr old. (That’s roughly when the solar system formed: we are late-comers to this party.) Given this prediction, one can imagine that JWST should see lots of small galaxies at high redshift, their early star formation popping off like firecrackers, but it shouldn’t see any big galaxies early on – not really at z > 3 and certainly not at z > 5.

Big galaxies in the data at early times?

While JWST is eagerly awaited, people have not been idle about looking into this. There have been many deep surveys made with the Hubble Space Telescope, augmented by the infrared-capable (and now sadly defunct) Spitzer Space Telescope. These have already spied a number of big galaxies at surprisingly high redshift. So surprising that Steinhardt et al. (2016) dubbed it “The Impossibly Early Galaxy Problem.” This is their key plot:

The observed (points) and predicted (lines) luminosity functions of galaxies at various redshifts (colors). If all were well, the points would follow the lines of the same color. Instead, galaxies appear to be brighter than expected, already big at the highest redshifts probed. From Steinhardt et al. (2016).

There are lots of caveats to this kind of work. Constructing the galaxy luminosity function is a challenging task at any redshift; getting it right at high redshift especially so. While what counts as “high” varies, I’d say everything on the above plot counts. Steinhardt et al. (2016) worry about these details at considerable length but don’t find any plausible way out.

Around the same time, one of our graduate students, Jay Franck, was looking into similar issues. One of the things he found was that not only were there big galaxies in place early on, but they were also in clusters (or at least protoclusters) early and often. That is to say, not only are the galaxies too big too soon, so are the clusters in which they reside.

Dr. Franck made his own comparison of data to models, using the Millennium simulation to devise an apples-to-apples comparison:

The apparent magnitude m* at 4.5 microns of L* galaxies in clusters as a function of redshift. Circles are data; squares represent the Millennium simulation. These diverge at z > 2: galaxies are brighter (smaller m*) than predicted (Fig. 5.5 from Franck 2017).

The result is that the data look as though big galaxies were already big galaxies early on. The solid lines are “passive evolution” models in which all the stars form in a short period starting at z=10. This starting point is an arbitrary choice, but there is little cosmic time between z = 10 and 20 – just a few hundred million years, barely one spin around the Milky Way. This is a short time in stellar evolution, so it is practically the same as starting right at the beginning of time. As Jay put it,

“High redshift cluster galaxies appear to be consistent with an old stellar population… they do not appear to be rapidly assembling stellar mass at these epochs.”

Franck 2017

We see old stars, but we don’t see the predicted assembly of galaxies via mergers, at least not at the expected time. Rather, it looks like some galaxies were already big very early on.
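
To put a number on the claim above that there is little cosmic time between z = 10 and z = 20, here is a quick check with astropy's standard Planck cosmology (the exact parameter choice barely matters for this purpose):

```python
from astropy.cosmology import Planck18

# Age of the universe at a few representative redshifts.
for z in [20, 10, 5, 3, 0]:
    print(f"z = {z:2d}: age = {Planck18.age(z).to('Gyr'):.2f}")

# The gap between z = 20 and z = 10 is only a few hundred Myr,
# short compared to stellar evolution timescales.
dt = (Planck18.age(10) - Planck18.age(20)).to('Myr')
print(f"Time elapsed between z = 20 and z = 10: {dt:.0f}")
```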

As someone who has worked mostly on well resolved, relatively nearby galaxies, all this makes me queasy. Jay, and many others, have worked desperately hard to squeeze knowledge from the faint smudges detected by first generation space telescopes. JWST should bring these into much better focus.

Early galaxies in MOND

To go back to the first line of this post, big galaxies at high redshift did not come as a surprise to me. It is what we expect in MOND.

Structure formation is generally considered a great success of LCDM. It is straightforward and robust to calculate on large scales in linear perturbation theory. Individual galaxies, on the other hand, are highly non-linear objects, making them hard beasts to tame in a model. In MOND, it is the other way around – predicting the behavior of individual galaxies is straightforward – only the observed distribution of mass matters, not all the details of how it came to be that way – but what happens as structure forms in the early universe is highly non-linear.

The non-linearity of MOND makes it hard to work with computationally. It is also crucial to how structure forms. I provide here an outline of how I expect structure formation to proceed in MOND. This page is now old, even ancient in internet time, as the golden age for this work was 15 – 20 years ago, when all the essential predictions were made and I was naive enough to think cosmologists were amenable to reason. Since the horizon of scientific memory is shorter than that, I felt it necessary to review in 2015. That is now itself over the horizon, so with the launch of JWST, it seems appropriate to remind the community yet again that these predictions exist.

This 1998 paper by Bob Sanders is a foundational paper in this field (see also Sanders 2001 and the other references given on the structure formation page). He says, right in the abstract,

“Objects of galaxy mass are the first virialized objects to form (by z = 10), and larger structure develops rapidly.”

Sanders (1998)

This was a remarkable prediction to make in 1998. Galaxies, much less larger structures, were supposed to take much longer to form. It takes time to go from the small initial perturbations that we see in the CMB at z=1000 to large objects like galaxies. Indeed, it takes at least a few hundred million years of free-fall time simply to assemble a galaxy’s worth of mass, a hard limit. Here Sanders was saying that an L* galaxy might assemble as early as half a billion years after the Big Bang.
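
That free-fall floor is easy to estimate: for a region of mean density ρ, t_ff = √(3π/(32 G ρ)). Here is a rough sketch with illustrative numbers; the mass and size of the collapsing region are assumptions for scale only, not values from Sanders:

```python
import numpy as np
from astropy import units as u
from astropy.constants import G

# Illustrative: ~1e11 solar masses of baryons collapsing from a region
# a few tens of kpc across. Both numbers are assumptions for scale only.
M = 1e11 * u.Msun
R = 30 * u.kpc
rho = M / (4 / 3 * np.pi * R**3)

t_ff = np.sqrt(3 * np.pi / (32 * G * rho)).to(u.Myr)
print(f"Free-fall time: {t_ff:.0f}")   # a few hundred Myr
```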

So how can this happen? Without dark matter to lend a helping hand, structure formation in the very early universe is inhibited by the radiation field. This inhibition is removed around z ~ 200, with the exact redshift being very sensitive to the baryon density. At this point, the baryon perturbations suddenly find themselves deep in the MOND regime, and behave as if there is a huge amount of dark matter. Structure formation proceeds hierarchically, as it must, but on a highly compressed timescale. To distinguish it from LCDM hierarchical galaxy formation, let’s call it prompt structure formation. In prompt structure formation, we expect

  • Early reionization (z ~ 20)
  • Some L* galaxies by z ~ 10
  • Early emergence of the cosmic web
  • Massive clusters already at z > 2
  • Large, empty voids
  • Large peculiar velocities
  • A very large homogeneity scale, maybe fractal over 100s of Mpc

There are already indications of all of these things, nearly all of which were predicted in advance of the relevant observations. I could elaborate, but that is beyond the scope of this post. People should read the references* if they’re keen.

*Reading the science papers is mandatory for the pros, who often seem fond of making straw man arguments about what they imagine MOND might do without bothering to check. I once referred some self-styled experts in structure formation to Sanders’s work. They promptly replied “That would mean structures of 10¹⁸ M☉!” when what he said was

“The largest objects being virialized now would be clusters of galaxies with masses in excess of 10¹⁴ M☉. Superclusters would only now be reaching maximum expansion.”

Sanders (1998)

The exact numbers are very sensitive to cosmological parameters, as Sanders discussed, but I have no idea where the “experts” got 10¹⁸, other than just making stuff up. More importantly, Sanders’s statement clearly presaged the observation of very massive clusters at surprisingly high redshift and the discovery of the Laniakea Supercluster.

These are just the early predictions of prompt structure formation, made in the same spirit that enabled me to predict the second peak of the microwave background and the absorption signal observed by EDGES at cosmic dawn. Since that time, at least two additional schools of thought as to how MOND might impact cosmology have emerged. One of them is the sterile neutrino MOND cosmology suggested by Angus and being actively pursued by the Bonn-Prague research group. Very recently, there is of course the new relativistic theory of Skordis & Złośnik which fits the cosmologists’ holy grail of the power spectrum in both the CMB at z = 1090 and galaxies at z = 0. There should be an active exchange and debate between these approaches, with perhaps new ones emerging.

Instead, we lack critical mass. Most of the community remains entirely obsessed with pursuing the vain chimera of invisible mass. I fear that this will eventually prove to be one of the greatest wastes of brainpower (some of it my own) in the history of science. I can only hope I’m wrong, as many brilliant people seem likely to waste their career running garbage in-garbage out computer simulations or at the bottom of a mine shaft failing to detect what isn’t there.

A beautiful mess

JWST can’t answer all of these questions, but it will help enormously with galaxy formation, which is bound to be messy. It’s not like L* galaxies are going to spring fully formed from the void like Athena from the forehead of Zeus. The early universe must be a chaotic place, with clumps of gas condensing to form the first stars that irradiate the surrounding intergalactic gas with UV photons before detonating as the first supernovae, and the clumps of stars merging to form giant elliptical galaxies while elsewhere gas manages to pool and settle into the large disks of spiral galaxies. When all this happens, how it happens, and how big galaxies get how fast are all to be determined – but now accessible to direct observation thanks to JWST.

It’s going to be a confusing, beautiful mess, in the best possible way – one that promises to test and challenge our predictions and preconceptions about structure formation in the early universe.

The neutrino mass hierarchy and cosmological limits on their mass

I’ve been busy. There is a lot I’d like to say here, but I’ve been writing the actual science papers. Can’t keep up with myself, let alone everything else. I am prompted to write here now because of a small rant by Maury Goodman in the neutrino newsletter he occasionally sends out. It resonated with me.

First, some context. Neutrinos are particles of the Standard Model of particle physics. They come in three families with corresponding leptons: the electron (νe), muon (νμ), and tau (ντ) neutrinos. Neutrinos only interact through the weak nuclear force, feeling neither the strong force nor electromagnetism. This makes them “ghostly” particles. Their immunity to these forces means they have such a low cross-section for interacting with other matter that they mostly don’t. Zillions are created every second by the nuclear reactions in the sun, and the vast majority of them breeze right through the Earth as if it were no more than a pane of glass. Their existence was first inferred indirectly from the apparent failure of some nuclear decays to conserve energy – the summed energy of the decay products seemed less than the energy initially present because the neutrinos were running off with mass-energy without telling anyone about it by interacting with the detectors of the time.

Clever people did devise ways to detect neutrinos, if only at the rate of one in a zillion. Neutrinos are the template for WIMP dark matter, which is imagined to be some particle from beyond the Standard Model that is more massive than neutrinos but similarly interacts only through the weak force. That’s how laboratory experiments search for them.

While a great deal of effort has been invested in searching for WIMPs, so far the most interesting new physics is in the neutrinos themselves. They move at practically the speed of light, and for a long time it was believed that like photons, they were pure energy with zero rest mass. Indeed, I’m old enough to have been taught that neutrinos must have zero mass; it would screw everything up if they didn’t. This attitude is summed up by an anecdote about the late, great author of the Standard Model, Steven Weinberg:

A colleague at UT once asked Weinberg if there was neutrino mass in the Standard Model. He told her “not in my Standard Model.”

Steven Weinberg, as related by Maury Goodman

As I’ve related before, in 1984 I heard a talk by Hans Bethe in which he made the case for neutrino dark matter. I was flabbergasted – I had just learned neutrinos couldn’t possibly have mass! But, as he pointed out, there were a lot of them, so it wouldn’t take much – a tiny mass each, well below the experimental limits that existed at the time – and that would suffice to make all the dark matter. So, getting over the theoretical impossibility of this hypothesis, I reckoned that if it turned out that neutrinos did indeed have mass, then surely that would be the solution to the dark matter problem.

Wrong and wrong. Neutrinos do have mass, but not enough to explain the missing mass problem. At least not that of the whole universe, as the modern estimate is that they might have a mass density that is somewhat shy of that of ordinary baryons (see below). They are too lightweight to stick to individual galaxies, which they would boil right out of: even with lots of cold dark matter, there isn’t enough mass to gravitationally bind these relativistic particles. It seems unlikely, but it is at least conceivable that initially fast-moving but heavy neutrinos might by now have slowed down enough to stick to and make up part of some massive clusters of galaxies. While interesting, that is a very far cry from being the dark matter.

We know neutrinos have mass because they have been observed to transition between flavors as they traverse space. This can only happen if there are different quantum states for them to transition between. They can’t all just be the same zero-mass photon-like entity; at least two of them need to have some mass to make for split quantum levels so that there is something to oscillate between.

Here’s where it gets really weird. Neutrino mass states do not correspond uniquely to neutrino flavors. We’re used to thinking of particles as having a mass: a proton weighs 0.938272 GeV; a neutron 0.939565 GeV. (The neutron being only 0.1% heavier than the proton is itself pretty weird; this comes up again later in the context of neutrinos if I remember to bring it up.) Not so for neutrinos: there are three separate mass states, each of which is a fractional, probabilistic combination of the three neutrino flavors. This sounds completely insane, so let’s turn to an illustration:

Neutrino mass states, from Adrián-Martínez et al (2016). There are two possible mass hierarchies for neutrinos, the so-called “normal” (left) and “inverted” (right) hierarchies. There are three mass states – the different bars – that are cleverly named ν1, ν2, and, you guessed it, ν3. The separation between these states is measured from oscillations in solar neutrinos (sol) or atmospheric neutrinos (atm) spawned by cosmic rays. The mass states do not correspond uniquely to neutrino flavors (νe, νμ, and ντ); instead, each mass state is made up of a combination of the three flavors as illustrated by the colored portions of the bars.

So we have three flavors of neutrino, νe, νμ, and ντ, that mix and match to make up the three mass eigenstates, ν1, ν2, and ν3. We would like to know the masses, m1, m2, and m3, of the mass eigenstates. We don’t. All that we glean from the solar and atmospheric oscillation data is that there is a transition between these states with a corresponding squared mass difference (e.g., Δm²sol = m2² − m1²). These are now well measured by astronomical standards, with Δm²sol = 0.000075 eV² and Δm²atm = 0.0025 eV², depending a little bit on which hierarchy is correct.

OK, so now we guess. If the hierarchy is normal and m1 = 0, then m2 = √Δm²sol = 0.0087 eV and m3 = √(Δm²atm + m2²) = 0.0507 eV. The first eigenstate mass need not be zero, though I’ve often heard it argued that it should be that or close to it, as the “natural” scale is m ~ √Δm². So maybe we have something like m1 = 0.01 eV and m2 = 0.013 eV in sorta the same ballpark.

Maybe, but I am underwhelmed by the naturalness of this argument. If we apply this reasoning to the proton and neutron (Ha! I remembered!), then the mass of the proton should be of order 1 MeV not 1 GeV. That’d be interesting because the proton, neutron, and electron would all have a mass within a factor of two of each other (the electron mass is 0.511 MeV). That almost sounds natural. It’d also make for some very different atomic physics, as we’d now have hydrogen atoms that are quasi-binary systems rather than a lightweight electron orbiting a heavy proton. That might make for an interesting universe, but it wouldn’t be the one we live in.

One very useful result of assuming m1 = 0 is that it provides a hard lower limit on the sum of the neutrino masses: ∑mi = m1 + m2 + m3 > 0.059 eV. Here the hierarchy matters, with the lower limit becoming about 0.1 eV in the inverted hierarchy. So we know neutrinos weigh at least that much, maybe more.
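
The arithmetic behind these numbers is simple enough to check directly. A minimal sketch, assuming the lightest state is exactly massless in each hierarchy and using the splittings quoted above:

```python
import numpy as np

# Squared mass splittings from solar and atmospheric oscillations (eV^2),
# as quoted above; the precise values depend slightly on the hierarchy.
dm2_sol = 7.5e-5
dm2_atm = 2.5e-3

# Normal hierarchy, m1 = 0:
m1 = 0.0
m2 = np.sqrt(dm2_sol)             # ~0.0087 eV
m3 = np.sqrt(dm2_atm + m2**2)     # ~0.051 eV
print(f"Normal:   m2 = {m2:.4f} eV, m3 = {m3:.4f} eV, sum = {m1 + m2 + m3:.3f} eV")

# Inverted hierarchy, m3 = 0: the two heavier states sit near sqrt(dm2_atm).
m3i = 0.0
m1i = np.sqrt(dm2_atm)
m2i = np.sqrt(dm2_atm + dm2_sol)
print(f"Inverted: m1 = {m1i:.4f} eV, m2 = {m2i:.4f} eV, sum = {m1i + m2i + m3i:.3f} eV")
```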

There are of course efforts to measure the neutrino mass directly. There is a giant experiment called Katrin dedicated to this. It is challenging to measure a mass this close to zero, so all we have so far are upper limits. The first measurement from Katrin placed a 90% confidence upper limit of 1.1 eV. That’s about a factor of 20 larger than the lower limit, so the answer is in there somewhere.

Katrin on the move.

There is a famous result in cosmology concerning the sum of neutrino masses. Particles have a relic abundance that follows from thermodynamics. The cosmic microwave background is the thermal relic of photons. So too there should be a thermal relic of cosmic neutrinos with slightly lower temperature than the photon field. One can work out the relic abundance, so if one knows their mass, then their cosmic mass density is

Ωνh² = ∑mi/(93.5 eV)

where h is the Hubble constant in units of 100 km/s/Mpc (e.g., equation 9.31 in my edition of Peacock’s text Cosmological Physics). For the cosmologists’ favorite (but not obviously correct) h=0.67, the lower limit on the neutrino mass translates to a mass density Ων > 0.0014, rather less than the corresponding baryon density, Ωb = 0.049. The experimental upper limit from Katrin yields Ων < 0.026, still a factor of two less than the baryons but in the same ballpark. These are nowhere near the ΩCDM ~ 0.25 needed for cosmic dark matter.
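
These density numbers follow directly from the relation above; a quick check (with h = 0.67, and taking ∑mi ≈ 1.1 eV for the Katrin case, as in the text):

```python
h = 0.67

def omega_nu(sum_m_eV):
    """Relic neutrino mass density from the summed neutrino mass in eV."""
    return sum_m_eV / (93.5 * h**2)

for label, s in [("oscillation lower limit (0.059 eV)", 0.059),
                 ("inverted-hierarchy lower limit (0.1 eV)", 0.10),
                 ("Katrin upper limit (1.1 eV)", 1.1)]:
    print(f"{label}: Omega_nu = {omega_nu(s):.4f}")

print("compare Omega_b = 0.049")
```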

Nevertheless, the neutrino mass potentially plays an important role in structure formation. Where cold dark matter (CDM) clumps easily to facilitate the formation of structure, neutrinos retard the process. They start out relativistic in the early universe, becoming non-relativistic (slow moving) at some redshift that depends on their mass. Early on, they represent a fast-moving component of gravitating mass that counteracts the slow-moving CDM. The nascent clumps formed by CDM can capture baryons (this is how galaxies are thought to form), but they are not even speed bumps to the relativistic neutrinos. If the latter have too large a mass, they pull lumps apart rather than help them grow larger. The higher the neutrino mass, the more damage they do. This in turn impacts the shape of the power spectrum by imprinting a free-streaming scale.

The power spectrum is a key measurement fit by ΛCDM. Indeed, it is arguably its crowning glory. The power spectrum is well fit by ΛCDM assuming zero neutrino mass. If Ων gets too big, it becomes a serious problem.

Consequently, cosmological observations place an indirect limit on the neutrino mass. There are a number of important assumptions that go into this limit, not all of which I am inclined to grant – most especially, the existence of CDM. But that makes it an important test, as the experimentally measured neutrino mass (whenever that happens) better not exceed the cosmological limit. If it does, that falsifies the cosmic structure formation theory based on cold dark matter.

The cosmological limit on neutrino mass obtained assuming ΛCDM structure formation is persistently an order of magnitude tighter than the experimental upper limit. For example, the Dark Energy Survey obtains ∑mi < 0.13 eV at 95% confidence. This is similar to other previous results, and only a factor of two more than the lower limit from neutrino oscillations. The window of allowed space is getting rather narrow. Indeed, it is already close to ruling out the inverted hierarchy for which ∑mi > 0.1 eV – or the assumptions on which the cosmological limit is made.

This brings us finally to Dr. Goodman’s rant, which I quote directly:

In the normal (inverted) mass order, s=m1+m2+m3 > 59 (100) meV. If as DES says, s < 130 meV, degenerate solutions are impossible. But DES “…model(s) massive neutrinos as three degenerate species of equal mass.” It’s been 34 years since we suspected neutrino masses were different and 23 years since that was accepted. Why don’t cosmology “measurements” of neutrino parameters do it right?

Maury Goodman

Here, s = ∑mi and of course 1 eV = 1000 meV. Degenerate solutions are those in which m1=m2=m3. When the absolute mass scale is large – say the neutrino mass were a huge (for it) 100 eV, then the sub-eV splittings between the mass levels illustrated above would be negligible and it would be fair to treat “massive neutrinos as three degenerate species of equal mass.” This is no longer the case when the implied upper limit on the mass is small; there is a clear difference between m1 and m2 and m3.

So why don’t cosmologists do this right? Why do they persist in pretending that m1=m2=m3?

Far be it from me to cut those guys slack, but I suspect there are two answers. One, it probably doesn’t matter (much), and two, habit. By habit, I mean that the tools used to compute the power spectrum were written at a time when degenerate species of equal mass was a perfectly safe assumption. Indeed, in those days, neutrinos were thought not to matter much at all to cosmological structure formation, so their inclusion was admirably forward looking – or, I suspect, a nerdy indulgence: “neutrinos probably don’t matter but I know how to code for them so I’ll do it by making the simplifying assumption that m1=m2=m3.”

So how much does it matter? I don’t know without editing & running the code (e.g., CAMB or CMBEASY), which would be a great project for a grad student if it hasn’t already been done. Nevertheless, the difference between neutrino mass states and the degenerate assumption is presumably small for small differences in mass. To get an idea that is human-friendly, let’s think about the redshift at which neutrinos become non-relativistic. OK, maybe that doesn’t sound too friendly, but it is less likely to make your eyes cross than a discussion of power spectra, Fourier transforms, and free-streaming wave numbers.

Neutrinos are very lightweight, so start out as relativistic particles in the early universe (high redshift z). As the universe expands it cools, and the neutrinos slow down. At some point, they transition from behaving like a photon field to a non-relativistic gas of particles. This happens at

1+znr ≈ 1987 mν/(1 eV)

(eq. 4 of Agarwal & Feldman 2012; they also discuss the free-streaming scale and power spectra for those of you who want to get into it). For a 0.5 eV neutrino that is comfortably acceptable to the current experimental upper limit, znr = 992. This is right around recombination, and would mess everything up bigly – hence the cosmological limit being much stricter. For a degenerate neutrino of 0.13 eV, znr = 257. So one way to think about the cosmological limit is that we need to delay the impact of neutrinos on the power spectrum for at least this long in order to maintain the good fit to the data.

How late can the impact of neutrinos be delayed? For the minimum masses m1 = 0, m2 = 0.0087, m3 = 0.0507 eV, zero mass neutrinos always remain relativistic, but z2 = 16 and z3 = 100. These redshifts are readily distinguishable, so maybe Dr. Goodman has a valid point. Well, he definitely has a valid point, but these redshifts aren’t probed by the currently available data, so cosmologists probably figure it is OK to stick to degenerate neutrino masses for now.
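
The transition redshifts quoted above follow directly from that relation; a quick sketch:

```python
def z_nonrel(m_eV):
    """Redshift at which a neutrino of mass m_eV (in eV) becomes
    non-relativistic, per eq. 4 of Agarwal & Feldman (2012)."""
    return 1987 * m_eV - 1

for label, m in [("0.5 eV (allowed by Katrin)", 0.5),
                 ("0.13 eV (the DES limit, treated as degenerate)", 0.13),
                 ("m2 = 0.0087 eV", 0.0087),
                 ("m3 = 0.0507 eV", 0.0507)]:
    print(f"{label}: z_nr ~ {z_nonrel(m):.0f}")
```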

The redshifts z2 = 16 and z3 = 100 are coincident with other important events in cosmic history, cosmic dawn and the dark ages, so it is worth considering the potential impact of neutrinos on the power spectra predicted for 21 cm absorption at those redshifts. There are experiments working to detect this, but measurement of the power spectrum is still a ways off. I am not aware of any theoretical consideration of this topic, so let’s consult an expert. Thanks to Avi Loeb for pointing out these (and a lot more!) references on short notice: Pritchard & Pierpaoli (2008), Villaescusa-Navarro et al. (2015), Obuljen et al. (2018). That’s a lot to process, and more than I’m willing to digest on the fly. But it looks like at least some cosmologists are grappling with the issue Dr. Goodman raises.

Any way we slice it, it looks like there are things still to learn. The direct laboratory measurement of the neutrino mass is not guaranteed to be less than the upper limit from cosmology. It would be surprising, but that would make matters a lot more interesting.

Bias all the way down

It often happens that data are ambiguous and open to multiple interpretations. The evidence for dark matter is an obvious example. I frequently hear permutations on the statement

We know dark matter exists; we just need to find it.

This is said in all earnestness by serious scientists who clearly believe what they say. They mean it. Unfortunately, meaning something in all seriousness, indeed, believing it with the intensity of religious fervor, does not guarantee that it is so.

The way the statement above is phrased is a dangerous half-truth. What the data show beyond any dispute is that there is a discrepancy between what we observe in extragalactic systems (including cosmology) and the predictions of Newton & Einstein as applied to the visible mass. If we assume that the equations Newton & Einstein taught us are correct, then we inevitably infer the need for invisible mass. That seems like a very reasonable assumption, but it is just that: an assumption. Moreover, it is an assumption that is only tested on the relevant scales by the data that show a discrepancy. One could instead infer that theory fails this test – it does not work to predict observed motions when applied to the observed mass. From this perspective, it could just as legitimately be said that

A more general theory of dynamics must exist; we just need to figure out what it is.

That puts an entirely different complexion on exactly the same problem. The data are the same; they are not to blame. The difference is how we interpret them.

Neither of these statements is correct: they are both half-truths, two sides of the same coin. As such, one risks being wildly misled. If one only hears one, the other gets discounted. That’s pretty much where the field is now, and it has been stuck there for a long time.

That’s certainly where I got my start. I was a firm believer in the standard dark matter interpretation. The evidence was obvious and overwhelming. Not only did there need to be invisible mass, it had to be some new kind of particle, like a WIMP. Almost certainly a WIMP. Any other interpretation (like MACHOs) was obviously stupid, as it violated some strong constraint, like Big Bang Nucleosynthesis (BBN). It had to be non-baryonic cold dark matter. HAD. TO. BE. I was sure of this. We were all sure of this.

What gets us in trouble is not what we don’t know. It’s what we know for sure that just ain’t so.

Josh Billings

I realized in the 1990s that the above reasoning was not airtight. Indeed, it has a gaping hole: we were not even considering modifications of dynamical laws (gravity and inertia). That this was a possibility, even a remote one, came as a profound and deep shock to me. It took me ages of struggle to admit it might be possible, during which I worked hard to save the standard picture. I could not. So it pains me to watch the entire community repeat the same struggle, repeat the same failures, and pretend like it is a success. That last step follows from the zeal of religious conviction: the outcome is predetermined. The answer still HAS TO BE dark matter.

So I asked myself – what if we’re wrong? How could we tell? Once one has accepted that the universe is filled with invisible mass that can’t be detected by any means known to us, how can we disabuse ourselves of this notion should it happen to be wrong?

One approach that occurred to me was a test in the power spectrum of the cosmic microwave background. Before any of the peaks had been measured, the only clear difference one expected was a bigger second peak with dark matter, and a smaller one without it for the same absolute density of baryons as set by BBN. I’ve written about the lead up to this prediction before, and won’t repeat it here. Rather, I’ll discuss some of the immediate fall out – some of which I’ve only recently pieced together myself.

The first experiment to provide a test of the prediction for the second peak was Boomerang. The second was Maxima-1. I of course checked the new data when they became available. Maxima-1 showed what I expected. So much so that it barely warranted comment. One is only supposed to write a scientific paper when one has something genuinely new to say. This didn’t rise to that level. It was more like checking a tick box. Besides, lots more data were coming; I couldn’t write a new paper every time someone tacked on an extra data point.

There was one difference. The Maxima-1 data had a somewhat higher normalization. The shape of the power spectrum was consistent with that of Boomerang, but the overall amplitude was a bit higher. The latter mattered not at all to my prediction, which was for the relative amplitude of the first to second peaks.

Systematic errors, especially in the amplitude, were likely in early experiments. That’s like rule one of observing the sky. After examining both data sets and the model expectations, I decided the Maxima-1 amplitude was more likely to be correct, so I asked what offset was necessary to reconcile the two. About 14% in temperature. This was, to me, no big deal – it was not relevant to my prediction, and it is exactly the sort of thing one expects to happen in the early days of a new kind of observation. It did seem worth remarking on, if not writing a full blown paper about, so I put it in a conference presentation (McGaugh 2000), which was published in a journal (IJMPA, 16, 1031) as part of the conference proceedings. This correctly anticipated the subsequent recalibration of Boomerang.

The figure from McGaugh (2000) is below. Basically, I said “gee, looks like the Boomerang calibration needs to be adjusted upwards a bit.” This has been done in the figure. The amplitude of the second peak remained consistent with the prediction for a universe devoid of dark matter. In fact, it got better (see Table 4 of McGaugh 2004).

Plot from McGaugh (2000): The predictions of LCDM (left) and no-CDM (right) compared to Maxima-1 data (open points) and Boomerang data (filled points, corrected in normalization). The LCDM model shown is the most favorable prediction that could be made prior to observation of the first two peaks; other then-viable choices of cosmic parameters predicted a higher second peak. The no-CDM got the relative amplitude right a priori, and remains consistent with subsequent data from WMAP and Planck.

This much was trivial. There was nothing new to see, at least as far as the test I had proposed was concerned. New data were pouring in, but there wasn’t really anything worth commenting on until WMAP data appeared several years later, which persisted in corroborating the peak ratio prediction. By this time, the cosmological community had decided that despite persistent corroborations, my prediction was wrong.

That’s right. I got it right, but then right turned into wrong according to the scuttlebutt of cosmic gossip. This was a falsehood, but it took root, and seems to have become one of the things that cosmologists know for sure that just ain’t so.

How did this come to pass? I don’t know. People never asked me. My first inkling was 2003, when it came up in a chance conversation with Marv Leventhal (then chair of Maryland Astronomy), who opined “too bad the data changed on you.” This shocked me. Nothing relevant in the data had changed, yet here was someone asserting that it had like it was common knowledge. Which I suppose it was by then, just not to me.

Over the years, I’ve had the occasional weird conversation on the subject. In retrospect, I think the weirdness stemmed from a divergence of assumed knowledge. They knew I was right then wrong. I knew the second peak prediction had come true and remained true in all subsequent data, but the third peak was a different matter. So there were many opportunities for confusion. In retrospect, I think many of these people were laboring under the mistaken impression that I had been wrong about the second peak.

I now suspect this started with the discrepancy between the calibration of Boomerang and Maxima-1. People seemed to be aware that my prediction was consistent with the Boomerang data. Then they seem to have confused the prediction with those data. So when the data changed – i.e., when Maxima-1 came in with a somewhat different amplitude – it must follow that the prediction now failed.

This is wrong on many levels. The prediction is independent of the data that test it. It is incredibly sloppy thinking to confuse the two. More importantly, the prediction, as phrased, was not sensitive to this aspect of the data. If one had bothered to measure the ratio in the Maxima-1 data, one would have found a number consistent with the no-CDM prediction. This should be obvious from casual inspection of the figure above. Apparently no one bothered to check. They didn’t even bother to understand the prediction.

Understanding a prediction before dismissing it is not a hard ask. Unless, of course, you already know the answer. Then laziness is not only justified, but the preferred course of action. This sloppy thinking compounds a number of well known cognitive biases (anchoring bias, belief bias, confirmation bias, to name a few).

I mistakenly assumed that other people were seeing the same thing in the data that I saw. It was pretty obvious, after all. (Again, see the figure above.) It did not occur to me back then that other scientists would fail to see the obvious. I fully expected them to complain and try and wriggle out of it, but I could not imagine such complete reality denial.

The reality denial was twofold: clearly, people were looking for any excuse to ignore anything associated with MOND, however indirectly. But they also had no clear prior for LCDM, which I did establish as a point of comparison. A theory is only as good as its prior, and all LCDM models made before these CMB data showed the same thing: a bigger second peak than was observed. This can be fudged: there are ample free parameters, so it can be made to fit; one just had to violate BBN (as it was then known) by three or four sigma.

In retrospect, I think the very first time I had this alternate-reality conversation was at a conference at the University of Chicago in 2001. Andrey Kravtsov had just joined the faculty there, and organized a conference to get things going. He had done some early work on the cusp-core problem, which was still very much a debated thing at the time. So he asked me to come address that topic. I remember being on the plane – a short ride from Cleveland – when I looked at the program. Nearly did a spit take when I saw that I was to give the first talk. There wasn’t a lot of time to organize my transparencies (we still used overhead projectors in those days) but I’d given the talk many times before, so it was enough.

I only talked about the rotation curves of low surface brightness galaxies in the context of the cusp-core problem. That was the mandate. I didn’t talk about MOND or the CMB. There’s only so much you can address in a half hour talk. [This is a recurring problem. No matter what I say, there always seems to be someone who asks “why didn’t you address X?” where X is usually that person’s pet topic. Usually I could do so, but not in the time allotted.]

About halfway through this talk on the cusp-core problem, I guess it became clear that I wasn’t going to talk about things that I hadn’t been asked to talk about, and I was interrupted by Mike Turner, who did want to talk about the CMB. Or rather, extract a confession from me that I had been wrong about it. I forget how he phrased it exactly, but it was the academic equivalent of “Have you stopped beating your wife lately?” Say yes, and you admit to having done so in the past. Say no, and you’re still doing it. What I do clearly remember was him prefacing it with “As a test of your intellectual honesty” as he interrupted to ask a dishonest and intentionally misleading question that was completely off-topic.

Of course, the pretext for his attack question was the Maxima-1 result. He phrased it in a way that I either had to agree that those data disproved my prediction, or be branded a liar. Now, at the time, there were rumors swirling that the experiment – some of the people who worked on it were there – had detected the third peak, so I thought that was what he was alluding to. Those data had not yet been published and I certainly had not seen them, so I could hardly answer that question. Instead, I answered the “intellectual honesty” affront by pointing to a case where I had said I was wrong. At one point, I thought low surface brightness galaxies might explain the faint blue galaxy problem. On closer examination, it became clear that they could not provide a complete explanation, so I said so. Intellectual honesty is really important to me, and should be to all scientists. I have no problem admitting when I’m wrong. But I do have a problem with demands to admit that I’m wrong when I’m not.

To me, it was obvious that the Maxima-1 data were consistent with the second peak. The plot above was already published by then. So it never occurred to me that he thought the Maxima-1 data were in conflict with what I had predicted – it was already known that it was not. Only to him, it was already known that it was. Or so I gather – I have no way to know what others were thinking. But it appears that this was the juncture in which the field suffered a psychotic break. We are not operating on the same set of basic facts. There has been a divergence in personal realities ever since.

Arthur Kosowsky gave the summary talk at the end of the conference. He told me that he wanted to address the elephant in the room: MOND. I did not think the assembled crowd of luminary cosmologists were mature enough for that, so advised against going there. He did, and was incredibly careful in what he said: empirical, factual, posing questions rather than making assertions. Why does MOND work as well as it does?

The room dissolved into chaotic shouting. Every participant was vying to say something wrong more loudly than the person next to him. (Yes, everyone shouting was male.) Joel Primack managed to say something loudly enough for it to stick with me, asserting that gravitational lensing contradicted MOND in a way that I had already shown it did not. It was just one of dozens of superficial falsehoods that people take for granted to be true if they align with one’s confirmation bias.

The uproar settled down, the conference was over, and we started to disperse. I wanted to offer Arthur my condolences, having been in that position many times. Anatoly Klypin was still giving it to him, keeping up a steady stream of invective as everyone else moved on. I couldn’t get a word in edgewise, and had a plane home to catch. So when I briefly caught Arthur’s eye, I just said “told you” and moved on. Anatoly paused briefly, apparently fathoming that his behavior, like that of the assembled crowd, was entirely predictable. Then the moment of awkward self-awareness passed, and he resumed haranguing Arthur.

Divergence

Reality check

Before we can agree on the interpretation of a set of facts, we have to agree on what those facts are. Even if we agree on the facts, we can differ about their interpretation. It is OK to disagree, and anyone who practices astrophysics is going to be wrong from time to time. It is the inevitable risk we take in trying to understand a universe that is vast beyond human comprehension. Heck, some people have made successful careers out of being wrong. This is OK, so long as we recognize and correct our mistakes. That’s a painful process, and there is an urge in human nature to deny such things, to pretend they never happened, or to assert that what was wrong was right all along.

This happens a lot, and it leads to a lot of weirdness. Beyond the many people in the field whom I already know personally, I tend to meet two kinds of scientists. There are those (usually other astronomers and astrophysicists) who might be familiar with my work on low surface brightness galaxies or galaxy evolution or stellar populations or the gas content of galaxies or the oxygen abundances of extragalactic HII regions or the Tully-Fisher relation or the cusp-core problem or faint blue galaxies or big bang nucleosynthesis or high redshift structure formation or joint constraints on cosmological parameters. These people behave like normal human beings. Then there are those (usually particle physicists) who have only heard of me in the context of MOND. These people often do not behave like normal human beings. They conflate me as a person with a theory that is Milgrom’s. They seem to believe that both are evil and must be destroyed. My presence, even the mere mention of my name, easily destabilizes their surprisingly fragile grasp on sanity.

One of the things that scientists-gone-crazy do is project their insecurities about the dark matter paradigm onto me. People who barely know me frequently attribute to me motivations that I neither have nor recognize. They presume that I have some anti-cosmology, anti-DM, pro-MOND agenda, and are remarkably comfortable asserting to me what it is that I believe. What they never explain, or apparently bother to consider, is why I would be so obtuse. What is my motivation? I certainly don’t enjoy having the same argument over and over again with their ilk, which is the only thing it seems to get me.

The only agenda I have is a pro-science agenda. I want to know how the universe works.

This agenda is not theory-specific. In addition to lots of other astrophysics, I have worked on both dark matter and MOND. I will continue to work on both until we have a better understanding of how the universe works. Right now we’re very far away from obtaining that goal. Anyone who tells you otherwise is fooling themselves – usually by dint of ignoring inconvenient aspects of the evidence. Everyone is susceptible to cognitive dissonance. Scientists are no exception – I struggle with it all the time. What disturbs me is the number of scientists who apparently do not. The field is being overrun with posers who lack the self-awareness to question their own assumptions and biases.

So, I feel like I’m repeating myself here, but let me state my bias. Oh wait. I already did. That’s why it felt like repetition. It is.

The following bit of this post is adapted from an old web page I wrote well over a decade ago. I’ve lost track of exactly when – the file has been through many changes in computer systems, and unix only records the last edit date. For the linked page, that’s 2016, when I added a few comments. The original is much older, and was written while I was at the University of Maryland. Judging from the html style, it was probably early to mid-’00s. Of course, the sentiment is much older, as it shouldn’t need to be said at all.

I will make a few updates as seem appropriate, so check the link if you want to see the changes. I will add new material at the end.


Long standing remarks on intellectual honesty

The debate about MOND often degenerates into something that falls well short of the sober, objective discussion that is supposed to characterize scientific debates. One can tell when voices are raised and baseless ad hominem accusations are made. I have, with disturbing frequency, found myself accused of partisanship and intellectual dishonesty, usually by people who are as fair and balanced as Fox News.

Let me state with absolute clarity that intellectual honesty is a bedrock principle of mine. My attitude is summed up well by the quote

When a man lies, he murders some part of the world.

Paul Gerhardt

I first heard this spoken by the character Merlin in the movie Excalibur (1981 version). Others may have heard it in a song by Metallica. As best I can tell, it is originally attributable to the 17th century cleric Paul Gerhardt.

This is a great quote for science, as the intent is clear. We don’t get to pick and choose our facts. Outright lying about them is antithetical to science.

I would extend this to ignoring facts. One should not only be honest, but also as complete as possible. It does not suffice to be truthful while leaving unpleasant or unpopular facts unsaid. This is lying by omission.

I “grew up” believing in dark matter. Specifically, Cold Dark Matter, presumably a WIMP. I didn’t think MOND was wrong so much as I didn’t think about it at all. Barely heard of it; not worth the bother. So I was shocked – and angered – when its predictions came true in my data for low surface brightness galaxies. So I understand when my colleagues have the same reaction.

Nevertheless, Milgrom got the prediction right. I had a prediction, it was wrong. There were other conventional predictions, they were also wrong. Indeed, dark matter based theories generically have a very hard time explaining these data. In a Bayesian sense, given the prior that we live in a ΛCDM universe, the probability that MONDian phenomenology would be observed is practically zero. Yet it is. (This is very well established, and has been for some time.)

So – confronted with an unpopular theory that nevertheless had some important predictions come true, I reported that fact. I could have ignored it, pretended it didn’t happen, covered my eyes and shouted LA LA LA NOT LISTENING. With the benefit of hindsight, that certainly would have been the savvy career move. But it would also be ignoring a fact, and tantamount to a lie.

In short, though it was painful and protracted, I changed my mind. Isn’t that what the scientific method says we’re supposed to do when confronted with experimental evidence?

That was my experience. When confronted with evidence that contradicted my preexisting world view, I was deeply troubled. I tried to reject it. I did an enormous amount of fact-checking. The people who presume I must be wrong have not had this experience, and haven’t bothered to do any fact-checking. Why bother when you already are sure of the answer?


Willful Ignorance

I understand being skeptical about MOND. I understand being more comfortable with dark matter. That’s where I started from myself, so as I said above, I can empathize with people who come to the problem this way. This is a perfectly reasonable place to start.

For me, that was over a quarter century ago. I can understand there being some time lag. That is not what is going on. There has been ample time to process and assimilate this information. Instead, most physicists have chosen to remain ignorant. Worse, many persist in spreading what can only be described as misinformation. I don’t think they are liars; rather, it seems that they believe their own bullshit.

To give an example of disinformation, I still hear it said that “MOND fits rotation curves but nothing else.” This is not true. The first thing I did was check into exactly that. Years of fact-checking went into McGaugh & de Blok (1998), and I’ve done plenty more since. It came as a great surprise to me that MOND explained the vast majority of the data as well as or better than dark matter. Not everything, to be sure, but lots more than “just” rotation curves. Yet this old falsehood still gets repeated as if it were not a misconception that was put to rest in the previous century. We’re stuck in the dark ages by choice.

It is not a defensible choice. There is no excuse to remain ignorant of MOND at this juncture in the progress of astrophysics. It is incredibly biased to point to its failings without contending with its many predictive successes. It is tragi-comically absurd to assume that dark matter provides a better explanation when it cannot make the same predictions in advance. MOND may not be correct in every particular, and makes no pretense to be a complete theory of everything. But it is demonstrably less wrong than dark matter when it comes to predicting the dynamics of systems in the low acceleration regime. Pretending like this means nothing is tantamount to ignoring essential facts.

Even a lie of omission murders a part of the world.

Big Trouble in a Deep Void

The following is a guest post by Indranil Banik, Moritz Haslbauer, and Pavel Kroupa (bios at end) based on their new paper

Modifying gravity to save cosmology

Cosmology is currently in a major crisis because of many severe tensions, the most serious and well-known being that local observations of how quickly the Universe is expanding (the so-called ‘Hubble constant’) exceed the prediction of the standard cosmological model, ΛCDM. This prediction is based on the cosmic microwave background (CMB), the most ancient light we can observe – which is generally thought to have been emitted about 400,000 years after the Big Bang. For ΛCDM to fit the pattern of fluctuations observed in the CMB by the Planck satellite and other experiments, the Hubble constant must have a particular value of 67.4 ± 0.5 km/s/Mpc. Local measurements are nearly all above this ‘Planck value’, but are consistent with each other. In our paper, we adopt a local value of 73.8 ± 1.1 km/s/Mpc from a combination of supernovae and gravitationally lensed quasars, two particularly precise yet independent techniques.
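
For a sense of scale, here is the size of this tension under the usual simple assumption of independent Gaussian errors (a rough estimate, not a calculation from the paper):

```python
import numpy as np

H0_planck, err_planck = 67.4, 0.5   # km/s/Mpc, predicted from the CMB (Planck)
H0_local,  err_local  = 73.8, 1.1   # km/s/Mpc, supernovae + lensed quasars

tension = (H0_local - H0_planck) / np.hypot(err_planck, err_local)
print(f"Hubble tension: {tension:.1f} sigma")   # ~5.3 sigma
```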

This unexpectedly rapid local expansion of the Universe could be due to us residing in a huge underdense region, or void. However, a void wide and deep enough to explain the Hubble tension is not possible in ΛCDM, which is built on Einstein’s theory of gravity, General Relativity. Still, there is quite strong evidence that we are indeed living within a large void with a radius of about 300 Mpc, or one billion light years. This evidence comes from many surveys covering the whole electromagnetic spectrum, from radio to X-rays. The most compelling evidence comes from analysis of galaxy number counts in the near-infrared, giving the void its name of the Keenan-Barger-Cowie (KBC) void. Gravity from the denser matter outside the void would pull outwards on galaxies within it more strongly than the sparse matter inside pulls back, making the Universe appear, to an observer inside the void, to expand faster than it actually does. This ‘Hubble bubble’ scenario (depicted in Figure 1) could solve the Hubble tension, a possibility considered – and rejected – in several previous works (e.g. Kenworthy+ 2019). We will return to their objections against this idea.

Figure 1: Illustration of the Universe’s large scale structure. The darker regions are voids, and the bright dots represent galaxies. The arrows show how gravity from surrounding denser regions pulls outwards on galaxies in a void. If we were living in such a void (as indicated by the yellow star), the Universe would expand faster locally than it does on average. This could explain the Hubble tension. Credit: Technology Review

One of the main objections seemed to be that since such a large and deep void is incompatible with ΛCDM, it can’t exist. This is a common way of thinking, but the problem with it was clear to us from a very early stage. The first part of this logic is sound – assuming General Relativity, a hot Big Bang, and that the state of the Universe at early times is apparent in the CMB (i.e. it was flat and almost homogeneous then), we are led to the standard flat ΛCDM model. By studying the largest suitable simulation of this model (called MXXL), we found that it should be completely impossible to find ourselves inside a void with the observed size and depth (or fractional underdensity) of the KBC void – this possibility can be rejected with more confidence than the discovery of the Higgs boson when first announced. We therefore applied one of the leading alternative gravity theories called Milgromian Dynamics (MOND), a controversial idea developed in the early 1980s by Israeli physicist Mordehai Milgrom. We used MOND (explained in a simple way here) to evolve a small density fluctuation forwards from early times, studying if 13 billion years later it fits the density and velocity field of the local Universe. Before describing our results, we briefly introduce MOND and explain how to use it in a potentially viable cosmological framework. Astronomers often assume MOND cannot be extended to cosmological scales (typically >10 Mpc), which is probably true without some auxiliary assumptions. This is also the case for General Relativity, though in that case the scale where auxiliary assumptions become crucial is only a few kpc, namely in galaxies.

MOND was originally designed to explain why galaxies rotate faster in their outskirts than they should if one applies General Relativity to their luminous matter distribution. This discrepancy gave rise to the idea of dark matter halos around individual galaxies. For dark matter to cluster on such scales, it would have to be ‘cold’, or equivalently consist of rather heavy particles (above a few thousand eV/c2, or a millionth of a proton mass). Any lighter and the gravity from galaxies could not hold on to the dark matter. MOND assumes these speculative and unexplained cold dark matter haloes do not exist – the need for them is after all dependent on the validity of General Relativity. In MOND once the gravity from any object gets down to a certain very low threshold called a0, it declines more gradually with increasing distance, following an inverse distance law instead of the usual inverse square law. MOND has successfully predicted many galaxy rotation curves, highlighting some remarkable correlations with their visible mass. This is unexpected if they mostly consist of invisible dark matter with quite different properties to visible mass. The Local Group satellite galaxy planes also strongly favour MOND over ΛCDM, as explained using the logic of Figure 2 and in this YouTube video.

Figure 2: the satellite galaxies of the Milky Way and Andromeda mostly lie within thin planes. These are difficult to form unless the galaxies in them are tidal dwarfs born from the interaction of two major galaxies. Since tidal dwarfs should be free of dark matter due to the way they form, the satellites in the satellite planes should have rather weak self-gravity in ΛCDM. This is not the case as measured from their high internal velocity dispersions. So the extra gravity needed to hold galaxies together should not come from dark matter that can in principle be separated from the visible.
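Returning to the force law described above: here is a minimal numerical sketch of how MOND interpolates between the Newtonian and low-acceleration regimes for a point mass. This is an editorial illustration, not from the paper; MOND itself only fixes the two limits, and the “simple” interpolating function used below is an assumption.

    import numpy as np

    # Editorial sketch of the MOND limits for a point mass. The "simple"
    # interpolating function mu(x) = x / (1 + x) is an assumption; MOND only
    # specifies the Newtonian (g >> a0) and deep-MOND (g << a0) limits.
    G = 6.674e-11    # m^3 kg^-1 s^-2
    a0 = 1.2e-10     # m/s^2, Milgrom's acceleration constant
    M = 1e41         # kg, roughly a 5e10 solar-mass galaxy

    r = np.logspace(19, 22, 4)   # radii from ~0.3 kpc to ~300 kpc, in metres
    g_newton = G * M / r**2      # the usual inverse-square law
    # Solving g * mu(g / a0) = g_Newton for g gives:
    g_mond = 0.5 * g_newton * (1 + np.sqrt(1 + 4 * a0 / g_newton))

    for ri, gn, gm in zip(r, g_newton, g_mond):
        print(f"r = {ri:.1e} m   g_Newton = {gn:.2e}   g_MOND = {gm:.2e}")
    # At large r, g_MOND approaches sqrt(g_Newton * a0): an inverse-distance
    # law instead of the inverse-square law.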

To extend MOND to cosmology, we used what we call the νHDM framework (with ν pronounced “nu”), originally proposed by Angus (2009). In this model, the cold dark matter of ΛCDM is replaced by the same total mass in sterile neutrinos with a mass of only 11 eV/c2, almost a billion times lighter than a proton. Their low mass means they would not clump together in galaxies, consistent with the original idea of MOND to explain galaxies with only their visible mass. This makes the extra collisionless matter ‘hot’, hence the name of the model. But this collisionless matter would exist inside galaxy clusters, helping to explain unusual configurations like the Bullet Cluster and the unexpectedly strong gravity (even in MOND) in quieter clusters. Considering the universe as a whole, νHDM has the same overall matter content as ΛCDM. This makes the overall expansion history of the universe very similar in both models, so both can explain the amounts of deuterium and helium produced in the first few minutes after the Big Bang. They should also yield similar fluctuations in the CMB because both models contain the same amount of dark matter. These fluctuations would get somewhat blurred by sterile neutrinos of such a low mass due to their rather fast motion in the early Universe. However, it has been demonstrated that Planck data are consistent with dark matter particles more massive than 10 eV/c2. Crucially, we showed that the density fluctuations evident in the CMB typically yield a gravitational field strength of 21 a0 (correcting an earlier erroneous estimate of 570 a0 in the above paper), making the gravitational physics nearly identical to General Relativity. Clearly, the main lines of early Universe evidence used to argue in favour of ΛCDM are not sufficiently unique to distinguish it from νHDM (Angus 2009).

The models nonetheless behave very differently later on. We estimated that for redshifts below about 50 (when the Universe is older than about 50 million years), the gravity would typically fall below a0 thanks to the expansion of the Universe (the CMB comes from a redshift of 1100). After this ‘MOND moment’, both the ordinary matter and the sterile neutrinos would clump on large scales just like in ΛCDM, but there would also be the extra gravity from MOND. This would cause structures to grow much faster (Figure 3), allowing much wider and deeper voids.


Figure 3: Evolution of the density contrast within a 300 co-moving Mpc sphere in different Newtonian (red) and MOND (blue) models, shown as a function of the Universe’s size relative to its present size (this changes almost linearly with time). Notice the much faster structure growth in MOND. The solid blue line uses a time-independent external field on the void, while the dot-dashed blue line shows the effect of a stronger external field in the past. This requires a deeper initial void to match present-day observations.

We used this basic framework to set up a dynamical model of the void. By making various approximations and trying different initial density profiles, we were able to simultaneously fit the apparent local Hubble constant, the observed density profile of the KBC void, and many other observables like the acceleration parameter, which we come to below. We also confirmed previous results that the same observables rule out standard cosmology at 7.09σ significance. This is much more than the typical threshold of 5σ used to claim a discovery in cases like the Higgs boson, where the results agree with prior expectations.

One objection to our model was that a large local void would cause the apparent expansion of the Universe to accelerate at late times. Equivalently, observations that go beyond the void should see a standard Planck cosmology, leading to a step-like behaviour near the void edge. At stake is the so-called acceleration parameter q0 (which we defined oppositely to convention to correct a historical error). In ΛCDM, we expect q0 = 0.55, while in general much higher values are expected in a Hubble bubble scenario. The objection of Kenworthy+ (2019) was that since the observed q0 is close to 0.55, there is no room for a void. However, their data analysis fixed q0 to the ΛCDM expectation, thereby removing any hope of discovering a deviation that might be caused by a local void. Other analyses (e.g. Camarena & Marra 2020b) which do not make such a theory-motivated assumption find q0 = 1.08, which is quite consistent with our best-fitting model (Figure 4). We also discussed other objections to a large local void, for instance the Wu & Huterer (2017) paper which did not consider a sufficiently large void, forcing the authors to consider a much deeper void to try and solve the Hubble tension. This led to some serious observational inconsistencies, but a larger and shallower void like the observed KBC void seems to explain the data nicely. In fact, combining all the constraints we applied to our model, the overall tension is only 2.53σ, meaning the data have a 1.14% chance of arising if ours were the correct model. The actual observations are thus not the most likely consequence of our model, but could plausibly arise if it were correct. Given also the high likelihood that some if not all of the observational errors we took from publications are underestimates, this is actually a very good level of consistency.

Figure 4: The predicted local Hubble constant (x-axis) and acceleration parameter (y-axis) as measured with local supernovae (black dot, with red error ellipses). Our best-fitting models with different initial void density profiles (blue symbols) can easily explain the observations. However, there is significant tension with the prediction of ΛCDM based on parameters needed to fit Planck observations of the CMB (green dot). In particular, local observations favour a higher acceleration parameter, suggestive of a local void.
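As a quick sanity check on the significance levels quoted above (an editorial aside, not part of the paper’s analysis), the conversion between a Gaussian tension in σ and a two-sided probability is:

    from scipy.stats import norm

    # Editorial sanity check: convert a Gaussian tension in sigma to a
    # two-sided probability.
    def two_sided_p(sigma):
        return 2 * norm.sf(sigma)   # survival function = 1 - cdf

    print(f"2.53 sigma -> {100 * two_sided_p(2.53):.2f}%")  # about 1.14%, as quoted
    print(f"5.00 sigma -> {two_sided_p(5.00):.1e}")         # the usual discovery threshold
    print(f"7.09 sigma -> {two_sided_p(7.09):.1e}")         # the level at which LCDM is excluded here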

Unlike other attempts to solve the Hubble tension, ours is unique in using an already existing theory (MOND) developed for a different reason (galaxy rotation curves). The use of unseen collisionless matter made of hypothetical sterile neutrinos is still required to explain the properties of galaxy clusters, which otherwise do not sit well with MOND. In addition, these neutrinos provide an easy way to explain the CMB and background expansion history, though recently Skordis & Zlosnik (2020) showed that this is possible in MOND with only ordinary matter. In any case, MOND is a theory of gravity, while dark matter is a hypothesis that more matter exists than meets the eye. The ideas could both be right, and should be tested separately.

A dark matter-MOND hybrid thus appears to be a very promising way to resolve the current crisis in cosmology. Still, more work is required to construct a fully-fledged relativistic MOND theory capable of addressing cosmology. This could build on the theory proposed by Skordis & Zlosnik (2019) in which gravitational waves travel at the speed of light, which was considered to be a major difficulty for MOND. We argued that such a theory would enhance structure formation to the required extent under a wide range of plausible theoretical assumptions, but this needs to be shown explicitly starting from a relativistic MOND theory. Cosmological structure formation simulations are certainly required in this scenario – these are currently under way in Bonn. Further observations would also help greatly, especially of the matter density in the outskirts of the KBC void at distances of about 500 Mpc. This could hold vital clues to how quickly the void has grown, helping to pin down the behaviour of the sought-after MOND theory.

There is now a very real prospect of obtaining a single theory that works across all astronomical scales, from the tiniest dwarf galaxies up to the largest structures in the Universe and its overall expansion rate, and from a few seconds after the birth of the Universe until today. Rather than argue whether this theory looks more like MOND or standard cosmology, what we should really do is combine the best elements of both, paying careful attention to all observations.


Authors

Indranil Banik is a Humboldt postdoctoral fellow in the Helmholtz Institute for Radiation and Nuclear Physics (HISKP) at the University of Bonn, Germany. He did his undergraduate and masters at Trinity College, Cambridge, and his PhD at Saint Andrews under Hongsheng Zhao. His research focuses on testing whether gravity continues to follow the Newtonian inverse square law at the low accelerations typical of galactic outskirts, with MOND being the best-developed alternative.

Moritz Haslbauer is a PhD student at the Max Planck Institute for Radio Astronomy (MPIfR) in Bonn. He obtained his undergraduate degree from the University of Vienna and his masters from the University of Bonn. He works on the formation and evolution of galaxies and their distribution in the local Universe in order to test different cosmological models and gravitational theories. Prof. Pavel Kroupa is his PhD supervisor.

Pavel Kroupa is a professor at the University of Bonn and professorem hospitem at Charles University in Prague. He went to school in Germany and South Africa, studied physics in Perth, Australia, and obtained his PhD at Trinity College, Cambridge, UK. He researches stellar populations and their dynamics as well as the dark matter problem, therewith testing gravitational theories and cosmological models.

Link to the published science paper.

YouTube video on the paper

Contact: ibanik@astro.uni-bonn.de.

Indranil Banik’s YouTube channel.

Cosmology, then and now

I have been busy teaching cosmology this semester. When I started on the faculty of the University of Maryland in 1998, there was no advanced course on the subject. This seemed like an obvious hole to fill, so I developed one. I remember with fond bemusement the senior faculty, many of them planetary scientists, sending Mike A’Hearn as a stately ambassador to politely inquire if cosmology had evolved beyond a dodgy subject and was now rigorous enough to be worthy of a 3 credit graduate course.

Back then, we used transparencies or wrote on the board. It was novel to have a course web page. I still have those notes, and marvel at the breadth and depth of work performed by my younger self. Now that I’m teaching it for the first time in a decade, I find it challenging to keep up. Everything has to be adapted to an electronic format, and be delivered remotely during this damnable pandemic. It is a less satisfactory experience, and it has precluded posting much here.

Another thing I notice is that attitudes have evolved along with the subject. The baseline cosmology, LCDM, has not changed much. We’ve tilted the power spectrum and spiked it with extra baryons, but the basic picture is that which emerged from the application of classical observational cosmology – measurements of the Hubble constant, the mass density, the ages of the oldest stars, the abundances of the light elements, number counts of faint galaxies, and a wealth of other observational constraints built up over decades of effort. Here is an example of combining such constraints, an exercise I have students do every time I teach the course:

Observational constraints in the mass density-Hubble constant plane assembled by students in my cosmology course in 2002. The gray area is excluded. The open window is the only space allowed; this is LCDM. The box represents the first WMAP estimate in 2003. CMB estimates have subsequently migrated out of the allowed region to lower H0 and higher mass density, but the other constraints have not changed much, most famously H0, which remains entrenched in the low to mid-70s.
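One of the constraints that goes into such a plot is the age of the Universe: in flat LCDM the age depends only on the mass density and the Hubble constant, and it must exceed the ages of the oldest stars. A minimal sketch of that single constraint (an editorial illustration with round numbers, not the course materials):

    import numpy as np

    # Editorial sketch of one constraint in the mass density-Hubble constant
    # plane: the analytic age of a flat LCDM universe (matter + Lambda only),
    # which must exceed the ages of the oldest stars (roughly 12-13 Gyr).
    def age_flat_lcdm_gyr(omega_m, H0):
        omega_l = 1.0 - omega_m
        hubble_time_gyr = 977.8 / H0   # 1/H0 in Gyr for H0 in km/s/Mpc
        return (2.0 / (3.0 * np.sqrt(omega_l))
                * np.arcsinh(np.sqrt(omega_l / omega_m)) * hubble_time_gyr)

    for H0 in (60, 70, 80):
        for omega_m in (0.1, 0.3, 0.99):
            age = age_flat_lcdm_gyr(omega_m, H0)
            print(f"H0 = {H0}, Omega_m = {omega_m:.2f}: age = {age:5.1f} Gyr")
    # High H0 combined with high Omega_m gives a universe younger than its
    # oldest stars; that corner of the plane is excluded.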

These things were known by the mid-90s. Nowadays, people seem to think Type Ia SN discovered Lambda, when really they were just icing on a cake that was already baked. The location of the first peak in the acoustic power spectrum of the microwave background was corroborative of the flat geometry required by the picture that had developed, but trailed the development of LCDM rather than informing its construction. But students entering the field now seem to have been given the impression that these were the only observations that mattered.

Worse, they seem to think these things are Known, as if there’s never been a time that we cosmologists have been sure about something only to find later that we had it quite wrong. This attitude is deleterious to the progress of science, as it precludes us from seeing important clues when they fail to conform to our preconceptions. To give one recent example, everyone seems to have decided that the EDGES observation of 21 cm absorption during the dark ages is wrong. The reason? Because it is impossible in LCDM. There are technical reasons why it might be wrong, but these are subsidiary to Attitude: we can’t believe it’s true, so we don’t. But that’s what makes a result important: something that makes us reexamine how we perceive the universe. If we’re unwilling to do that, we’re no longer doing science.

Second peak bang on

At the dawn of the 21st century, we were pretty sure we had solved cosmology. The Lambda Cold Dark Matter (LCDM) model made strong predictions for the power spectrum of the Cosmic Microwave Background (CMB). One was that the flat Robertson-Walker geometry we were assuming for LCDM placed the location of the first peak at ℓ = 220. As I discuss in the history of the rehabilitation of Lambda, this was a genuinely novel prediction that was clearly confirmed first by BOOMERanG and subsequently by many other experiments, especially WMAP. As such, it was widely (and rightly) celebrated among cosmologists. The WMAP team has been awarded major prizes, including the Gruber cosmology prize and the Breakthrough prize.

As I discussed in the previous post, the location of the first peak was not relevant to the problem I had become interested in: distinguishing whether dark matter existed or not. Instead, it was the amplitude of the second peak of the acoustic power spectrum relative to the first that promised a clear distinction between LCDM and the no-CDM ansatz inspired by MOND. This was also first tested by BOOMERanG:

The CMB power spectrum observed by BOOMERanG in 2000. The first peak is located exactly where LCDM predicted it to be. The second peak was not detected, but was clearly smaller than expected in LCDM. It was consistent with the prediction of no-CDM.

In a nutshell, LCDM predicted a big second peak while no-CDM predicted a small second peak. Quantitatively, the amplitude ratio A1:2 was predicted to be in the range 1.54 – 1.83 for LCDM, and 2.22 – 2.57 for no-CDM. Note that A1:2 is smaller for LCDM because the second peak is relatively big compared to the first. 

BOOMERanG confirmed the major predictions of both competing theories. The location of the first peak was exactly where it was expected to be for a flat Robertson-Walker geometry. The amplitude of the second peak was that expected in no-CDM. One can have the best of both worlds by building a model with high Lambda and no CDM, but I don’t take that too seriously: Lambda is just a placeholder for our ignorance – in either theory.

I had made this prediction in the hopes that cosmologists would experience the same crisis of faith that I had when MOND appeared in my data. Now it was the data that they valued that was misbehaving – in precisely the way I had predicted with a model that was motivated by MOND (albeit not MOND itself). Surely they would see reason?

There is a story that Diogenes once wandered the streets of Athens with a lamp in broad daylight in search of an honest man. I can relate. Exactly one member of the CMB community wrote to me to say “Gee, I was wrong to dismiss you.” [I paraphrase only a little.] When I had the opportunity to point out to them that I had made this prediction, the most common reaction was “no you didn’t.” Exactly one of the people with whom I had this conversation actually bothered to look up the published paper, and that person also wrote to say “Gee, I guess you did.” Everyone else simply ignored it.

The sociology gets worse from here. There developed a counter-narrative that the BOOMERanG data were wrong, and therefore my prediction that fit them was wrong. No one asked me about it; I learned of it in a chance conversation a couple of years later in which it was asserted as common knowledge that “the data changed on you.” Let’s examine this statement.

The BOOMERanG data were early, so you expect data to improve. At the time, I noted that the second peak “is only marginally suggested by the data so far”, so I said that “as data accumulate, the second peak should become clear.” It did.

The predicted range quoted above is rather generous. It encompassed the full variation allowed by Big Bang Nucleosynthesis (BBN) at the time (1998/1999). I intentionally considered the broadest range of parameters that were plausible to be fair to both theories. However, developments in BBN were by then disfavoring low-end baryon densities, so the real expectation for the predicted range was narrower. Excluding implausibly low baryon densities, the predicted ranges were 1.6 – 1.83 for LCDM and 2.36 – 2.4 for no-CDM. Note that the prediction of no-CDM is considerably more precise than that of LCDM. This happens because all the plausible models run together in the absence of the forcing term provided by CDM. For hypothesis testing, this is great: the ratio has to be this one value, and only this value.

A few years later, WMAP provided a much more accurate measurement of the peak locations and amplitudes. WMAP measured A1:2 = 2.34 ± 0.09. This is bang on the no-CDM prediction of 2.4.
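A quick arithmetic check (an editorial aside) of how that measurement sits relative to the two a priori ranges quoted above:

    # Editorial check: compare the WMAP measurement of the first-to-second peak
    # amplitude ratio with the a priori predicted ranges quoted in the text.
    measured, err = 2.34, 0.09           # WMAP A1:2
    no_cdm_range = (2.36, 2.40)          # no-CDM prediction (plausible baryon densities)
    lcdm_range = (1.60, 1.83)            # LCDM prediction (same BBN constraints)

    print(f"Offset below no-CDM range: {(no_cdm_range[0] - measured) / err:.1f} sigma")  # ~0.2 sigma
    print(f"Offset above LCDM range:   {(measured - lcdm_range[1]) / err:.1f} sigma")    # ~5.7 sigma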

Peak locations measured by WMAP in 2003 (points) compared to the a priori (1999) predictions of LCDM (red tone lines) and no-CDM (blue tone lines).

The prediction for the amplitude ratio A1:2 that I made over twenty years ago remains correct in the most recent CMB data. The same model did not successfully predict the third peak, but I didn’t necessarily expect it to: the no-CDM ansatz (which is just General Relativity without cold dark matter) had to fail at some point. But that gets ahead of the story: no-CDM made a very precise prediction for the second peak. LCDM did not.

LCDM only survives because people were willing to disregard existing bounds – in this case, on the baryon density. It was easier to abandon the most accurately measured and the only over-constrained pillar of Big Bang cosmology than acknowledge a successful prediction that respected all those things. For a few years, the attitude was “BBN was close, but not quite right.” In time, what appears to be confirmation bias kicked in, and the measured abundances of the light elements migrated towards the “right” value – as specified by CMB fits.

LCDM does give an excellent fit to the power spectrum of the CMB. However, only the location of the first peak was predicted correctly in advance. Everything subsequent to that (at higher ℓ) is the result of a multi-parameter fit with sufficient flexibility to accommodate any physically plausible power spectrum. However, there is no guarantee that the parameters of the fit will agree with independent data. For a long while they did, but now we see the emergence of tensions in not only the baryon density, but also the amplitude of the power spectrum, and most famously, the value of the Hubble constant. Perhaps this is the level of accuracy that is necessary to begin to perceive genuine anomalies. Beyond the need to invoke invisible entities in the first place.

I could say a lot more, and perhaps will in future. For now, I’d just like to emphasize that I made a very precise, completely novel prediction for the amplitude of the second peak. That prediction came true. No one else did that. Heck of a coincidence, if there’s nothing to it.

A pre-history of the prediction of the amplitude of the second peak of the cosmic microwave background

In the previous post, I wrote about a candidate parent relativistic theory for MOND that could fit the acoustic power spectrum of the cosmic microwave background (CMB). That has been a long time coming, and probably is not the end of the road. There is a long and largely neglected history behind this, so let’s rewind a bit.

I became concerned about the viability of the dark matter paradigm in the mid-1990s. Up until that point, I was a True Believer, as much as anyone. Clearly, there had to be dark matter, specifically some kind of non-baryonic cold dark matter (CDM), and almost certainly a WIMP. Alternatives like MACHOs (numerous brown dwarfs) were obviously wrong (Big Bang Nucleosynthesis [BBN] taught us that there are not enough baryons), so microlensing experiments searching for them would make great variable star catalogs but had no chance of detecting dark matter. In short, I epitomized the impatient attitude against non-WIMP alternatives that persists throughout much of the community to this day.

It thus came as an enormous surprise that the only theory to successfully predict – in advance – our observations of low surface brightness galaxies was MOND. Dark matter, as we understood it at the time, predicted nothing of the sort. This made me angry.


How could it be so?

To a scientist, a surprising result is a sign to think again. Maybe we do not understand this thing we thought we understood. Is it merely a puzzle – some mistake in our understanding or implementation of our preferred theory? Or is it a genuine anomaly – an irrecoverable failure? How is it that a completely different theory can successfully predict something that my preferred theory did not?

In this period, I worked very hard to make things work out for CDM. It had to be so! Yet every time I thought I had found a solution, I realized that I had imposed an assumption that guaranteed the desired result. I created and rejected tautology after tautology. This process unintentionally foretold the next couple of decades of work in galaxy formation theory: I’ve watched others pursue the same failed ideas and false leads over and over and over again.

After months of pounding my head against the proverbial wall, I realized that if I was going to remain objective, I shouldn’t just be working on dark matter. I should also try just to see how things worked in MOND. Suddenly I found myself working much less hard. The things that made no sense in terms of dark matter tumbled straight out of MOND.

This concerned me gravely. Could we really be so wrong about something so important? I went around giving talks, expressing the problems about which I was concerned, and asking how it could be that MOND got so many novel predictions correct in advance if there was nothing to it.

Reactions varied. The first time I mentioned it in a brief talk at the Institute of Astronomy in Cambridge, friend and fellow postdoc Adi Nusser became visibly agitated. He bolted outside as soon as I was done, and I found him shortly afterwards with a cigarette turned mostly to ash as if in one long draw. I asked him what he thought and he replied that he was “NOT HAPPY!” Neither was I. It made no sense.

I first spoke at length on the subject in a colloquium at the Department of Terrestrial Magnetism, where Vera Rubin worked, along with other astronomers and planetary scientists. I was concerned about how Vera would react, so I was exceedingly thorough, spending most of the time on the dark matter side of the issue. She reacted extremely well, as did the rest of the audience, many telling me it was the best talk they had heard in five years. (I have heard this many times since; apparently 5 years is some sort of default for a long time that is short of forever.)

Several months later, I gave the same talk at the University of Pennsylvania to an audience of mostly particle physicists and early-universe cosmologists. A rather different reaction ensued. One person shouted “WHAT HAVE YOU DONE WRONG!” It wasn’t a question.

These polar opposite reactions from different scientific audiences made me realize that sociology was playing a role. As I continued to give the talk to other groups, the pattern above repeated, with the reception being more positive the further an audience was from cosmology.

I started asking people what would concern them about the paradigm. What would falsify CDM? Sometimes this brought bemused answers, like that of Tad Pryor: “CDM has been falsified many times.” (This was in 1997, at which time CDM meant standard SCDM which was indeed pretty thoroughly falsified at that point: we were on the cusp of the transition to LCDM.) More often it met with befuddlement: “Why would you even ask that?” It was disappointing how often this was the answer, as a physical theory is only considered properly scientific if it is falsifiable. [All of the people who had this reaction agreed to that much: I often specifically asked.] The only thing that was clear was that most cosmologists couldn’t care less what galaxies did. Galaxies were small, non-linear entities, they argued… to the point that, as Martin Rees put it, “we shouldn’t be surprised at anything they do.”

I found this attitude to be less than satisfactory. However, I could see its origin. I only became aware of MOND because it reared its ugly head in my data. I had come face to face with the beast, and it shook my most deeply held scientific beliefs. Lacking this experience, it must have seemed to them like the proverbial boy crying wolf.

So, I started to ask cosmologists what would concern them. Again, most gave no answer; it was simply inconceivable to them that something could be fundamentally amiss. Among those who did answer, the most common refrain was “Well, if the CMB did something weird.” They never specified what they meant by this, so I set out to quantify what would be weird.

This was 1998. At that time, we knew the CMB existed (the original detection in the 1960s earning Penzias and Wilson a Nobel prize) and that there were temperature fluctuations on large scales at the level of one part in 100,000 (the long-overdue detection of said fluctuations by the COBE satellite earning Mather and Smoot another Nobel prize). Other experiments were beginning to detect the fluctuations on finer angular scales, but nothing definitive was yet known about the locations and amplitudes of the peaks that were expected in the power spectrum. However, the data were improving rapidly, and an experiment called BOOMERanG was circulating around the polar vortex of Antarctica. Daniel Eisenstein told me in a chance meeting that “The data are in the can.”

This made the issue of quantifying what was weird a pressing one. The best prediction is one that comes before the fact, totally blind to the data. But what was weird?

At the time, there was no flavor of relativistic MOND yet in existence. But we know that MOND is indistinguishable from Newton in the limit of high accelerations, and whatever theory contains MOND in the appropriate limit must also contain General Relativity. So perhaps the accelerations in the early universe when the CMB occurred were high enough that MOND effects did not yet occur. This isn’t necessarily the case, but making this ansatz was the only way to proceed at that time. Then it was just General Relativity with or without dark matter. That’s what was weird: no dark matter. So what difference did that make?

Using the then-standard code CMBFAST, I computed predictions for the power spectrum for two families of models: LCDM and no-CDM. The parameters of LCDM were already well known at that time. There was even an imitation of the Great Debate about it between Turner and Peebles, though it was more consensus than debate. This enabled a proper prediction of what the power spectrum should be.
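CMBFAST has since been superseded; a reader who wants to reproduce the flavour of this exercise today could use something like the modern CAMB Python package. This is an editorial substitution with illustrative parameters, not the original 1999 calculation:

    import camb

    # Editorial substitute for the original CMBFAST calculation, using the
    # modern CAMB package. Parameter values are illustrative only.
    def tt_spectrum(omch2):
        pars = camb.set_params(H0=70, ombh2=0.02, omch2=omch2,
                               As=2e-9, ns=0.96, lmax=2000)
        results = camb.get_results(pars)
        powers = results.get_cmb_power_spectra(pars, CMB_unit='muK')
        return powers['total'][:, 0]   # TT: l(l+1)C_l/2pi in muK^2

    cl_lcdm = tt_spectrum(omch2=0.12)    # with cold dark matter
    cl_nocdm = tt_spectrum(omch2=1e-6)   # effectively no CDM: baryons only
    # Comparing the heights of the second peaks (relative to the first) in the
    # two spectra shows the qualitative difference described in the text.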

Most of the interest in cosmology then concerned the geometry of the universe. We had convinced ourselves that we had to bring back Lambda, but this made a strong prediction for the location of the first peak – a prediction that was confirmed by BOOMERanG in mid-2000.

The geometry on which most cosmologists were focused was neither here nor there to the problem I had set myself. I had no idea what the geometry of a MOND universe might be, and no way to predict the locations of the peaks in the power spectrum. I had to look for relative differences, and these proved not to be all that weird. The difference between LCDM and no-CDM was, in fact, rather subtle.

The main difference I found between models with and without dark matter was a difference in the amplitude of the second peak relative to the first. As I described last time, baryons act to damp the oscillations, while dark matter acts to drive them. Take away the dark matter and there is only damping, resulting in the second peak getting dragged down. The primary variable controlling the ratio of the first-to-second peak amplitude was the baryon fraction. Without dark matter, the baryon fraction is 1. In LCDM, it was then thought to be in the range 0.05 – 0.15. (The modern value is 0.16.)
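For reference, the modern baryon fraction quoted in the parenthetical follows directly from the density parameters (an editorial check with approximate Planck-era values):

    # Editorial check of the modern baryon fraction, using approximate
    # Planck-like parameters.
    ombh2 = 0.0224   # physical baryon density, Omega_b * h^2
    omch2 = 0.120    # physical cold dark matter density, Omega_c * h^2
    h = 0.674        # H0 / (100 km/s/Mpc)

    omega_b = ombh2 / h**2
    omega_m = (ombh2 + omch2) / h**2
    print(f"Omega_b = {omega_b:.3f}, Omega_m = {omega_m:.3f}")
    print(f"baryon fraction f_b = {omega_b / omega_m:.2f}")   # about 0.16, as quoted
    # In the no-CDM case omch2 = 0, so the baryon fraction is 1 by construction.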

This is the prediction I published in 1999:


The red lines in the left plot represent LCDM, the blue lines in the right plot no-CDM. The data that were available at the time I wrote the paper are plotted as the lengthy error bars. The location of the first peak had sorta been localized, but nothing was yet known about the amplitude of the second. Here was a clear, genuinely a priori prediction: for a given amplitude of the first peak, the amplitude of the second was smaller without CDM than with it.

Quantitatively, the ratio of the amplitude of the first to second peak was predicted to be in the range 1.54 – 1.83 for LCDM. This range represents the full range of plausible LCDM parameters as we knew them at the time, which as I noted above, we thought we knew very well. For the case of no-CDM, the predicted range was 2.22 – 2.57. In both cases, the range of variation was dominated by the uncertainty in the baryon density from BBN. While this allowed for a little play, the two hypotheses should be easily distinguishable, since the largest ratio possible in LCDM was clearly less than the smallest possible in no-CDM.

And that is as far as I am willing to write today. This is already a long post, so we’ll return to the results of this test in the future.

A Significant Theoretical Advance

The missing mass problem has been with us many decades now. Going on a century if you start counting from the work of Oort and Zwicky in the 1930s. Not quite half a century if we date it from the 1970s when most of the relevant scientific community started to take it seriously. Either way, that’s a very long time for a major problem to go unsolved in physics. The quantum revolution that overturned our classical view of physics was lightning fast in comparison – see the discussion of Bohr’s theory in the foundation of quantum mechanics in David Merritt’s new book.

To this day, despite tremendous efforts, we have yet to obtain a confirmed laboratory detection of a viable dark matter particle – or even a hint of persuasive evidence for the physics beyond the Standard Model of Particle Physics (e.g., supersymmetry) that would be required to enable the existence of such particles. We cannot credibly claim (as many of my colleagues insist they can) to know that such invisible mass exists. All we really know is that there is a discrepancy between what we see and what we get: the universe and the galaxies within it cannot be explained by General Relativity and the known stable of Standard Model particles.

If we assume that General Relativity is both correct and sufficient to explain the universe, which seems like an excellent assumption, then we are indeed obliged to invoke non-baryonic dark matter. The amount of astronomical evidence that points in this direction is overwhelming. That is how we got to where we are today: once we make the obvious, eminently well-motivated assumption, then we are forced along a path in which we become convinced of the reality of the dark matter, not merely as a hypothetical convenience to cosmological calculations, but as an essential part of physical reality.

I think that the assumption that General Relativity is correct is indeed an excellent one. It has repeatedly passed many experimental and observational tests too numerous to elaborate here. However, I have come to doubt the assumption that it suffices to explain the universe. The only data that test it on scales where the missing mass problem arises is the data from which we infer the existence of dark matter. Which we do by assuming that General Relativity holds. The opportunity for circular reasoning is apparent – and frequently indulged.

It should not come as a shock that General Relativity might not be completely sufficient as a theory in all circumstances. This is exactly the motivation for and the working presumption of quantum theories of gravity. That nothing to do with cosmology will be affected along the road to quantum gravity is just another assumption.

I expect that some of my colleagues will struggle to wrap their heads around what I just wrote. I sure did. It was the hardest thing I ever did in science to accept that I might be wrong to be so sure it had to be dark matter – because I was sure it was. As sure of it as any of the folks who remain sure of it now. So imagine my shock when we obtained data that made no sense in terms of dark matter, but had been predicted in advance by a completely different theory, MOND.

When comparing dark matter and MOND, one must weigh all evidence in the balance. Much of the evidence is gratuitously ambiguous, so the conclusion to which one comes depends on how one weighs the more definitive lines of evidence. Some of this points very clearly to MOND, while other evidence prefers non-baryonic dark matter. One of the most important lines of evidence in favor of dark matter is the acoustic power spectrum of the cosmic microwave background (CMB) – the pattern of minute temperature fluctuations in the relic radiation field imprinted on the sky a few hundred thousand years after the Big Bang.

The equations that govern the acoustic power spectrum require General Relativity, but thankfully the small amplitude of the temperature variations permits them to be solved in the limit of linear perturbation theory. So posed, they can be written as a damped and driven oscillator. The power spectrum shows features corresponding to standing waves at the epoch of recombination when the universe transitioned rather abruptly from an opaque plasma to a transparent neutral gas. The edge of a cloud provides an analog: light inside the cloud scatters off the water molecules and doesn’t get very far: the cloud is opaque. Any light that makes it to the edge of the cloud meets no further resistance, and is free to travel to our eyes – which is how we perceive the edge of the cloud. The CMB is the expansion-redshifted edge of the plasma cloud of the early universe.

An easy way to think about a damped and a driven oscillator is a kid being pushed on a swing. The parent pushing the child is a driver of the oscillation. Any resistance – like the child dragging his feet – damps the oscillation. Normal matter (baryons) damps the oscillations – it acts as a net drag force on the photon fluid whose oscillations we observe. If there is nothing going on but General Relativity plus normal baryons, we should see a purely damped pattern of oscillations in which each peak is smaller than the one before it, as seen in the solid line here:

The CMB acoustic power spectrum predicted by General Relativity with no cold dark matter (line) and as observed by the Planck satellite (data points).

As one can see, the case of no Cold Dark Matter (CDM) does well to explain the amplitudes of the first two peaks. Indeed, it was the only hypothesis to successfully predict this aspect of the data in advance of its observation. The small amplitude of the second peak came as a great surprise from the perspective of LCDM. However, without CDM, there is only baryonic damping. Each peak should have a progressively lower amplitude. This is not observed. Instead, the third peak is almost the same amplitude as the second, and clearly higher than expected in the pure damping scenario of no-CDM.

CDM provides a net driving force in the oscillation equations. It acts like the parent pushing the kid. Even though the kid drags his feet, the parent keeps pushing, and the amplitude of the oscillation is maintained. For the third peak at any rate. The baryons are an intransigent child and keep dragging their feet; eventually they win and the power spectrum damps away on progressively finer angular scales (large 𝓁 in the plot).
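The swing analogy can be made concrete with a toy oscillator. This is an editorial cartoon of the damping-versus-driving behaviour, not the actual photon-baryon perturbation equations:

    import numpy as np

    # Editorial toy model of the swing analogy: a damped oscillator with and
    # without a steady periodic driver. Not the real perturbation equations.
    def integrate(damping, driving, dt=0.01, steps=6000):
        x, v = 1.0, 0.0
        xs = []
        for i in range(steps):
            a = -x - damping * v + driving * np.cos(i * dt)  # unit natural frequency
            v += a * dt
            x += v * dt
            xs.append(x)
        return np.array(xs)

    undriven = integrate(damping=0.1, driving=0.0)  # baryons only: peaks damp away
    driven = integrate(damping=0.1, driving=0.1)    # add a driver: amplitude is sustained
    print("late-time amplitude, undriven:", round(np.abs(undriven[-1000:]).max(), 3))
    print("late-time amplitude, driven:  ", round(np.abs(driven[-1000:]).max(), 3))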

As I wrote in this review, the excess amplitude of the third peak over the no-CDM prediction is the best evidence to my mind in favor of the existence of non-baryonic CDM. Indeed, this observation is routinely cited by many cosmologists to absolutely require dark matter. It is argued that the observed power spectrum is impossible without it. The corollary is that any problem the dark matter picture encounters is a mere puzzle. It cannot be an anomaly because the CMB tells us that CDM has to exist.

Impossible is a high standard. I hope the reader can see the flaw in this line of reasoning. It is the same as above. In order to compute the oscillation power spectrum, we have assumed General Relativity. While not replacing it, the persistent predictive successes of a theory like MOND imply the existence of a more general theory. We do not know that such a theory cannot explain the CMB until we develop said theory and work out its predictions.

That said, it is a tall order. One needs a theory that provides a significant driving term without a large amount of excess invisible mass. Something has to push the swing in a universe full of stuff that only drags its feet. That does seem nigh on impossible. Or so I thought until I heard a talk by Pedro Ferreira where he showed how the scalar field in TeVeS – the relativistic MONDian theory proposed by Bekenstein – might play the same role as CDM. However, he and his collaborators soon showed that the desired effect was indeed impossible, at least in TeVeS: one could not simultaneously fit the third peak and the data preceding the first. This was nevertheless an important theoretical development, as it showed how it was possible, at least in principle, to affect the peak ratios without massive amounts of non-baryonic CDM.

At this juncture, there are two options. One is to seek a theory that might work, and develop it to the point where it can be tested. This is a lot of hard work that is bound to lead one down many blind alleys without promise of ultimate success. The much easier option is to assume that it cannot be done. This is the option adopted by most cosmologists, who have spent the last 15 years arguing that the CMB power spectrum requires the existence of CDM. Some even seem to consider it to be a detection thereof, in which case we might wonder why we bother with all those expensive underground experiments to detect the stuff.

Rather fewer people have invested in the approach that requires hard work. There are a few brave souls who have tried it; these include Constantinos Skordis and Tom Złosnik. Very recently, they have shown that a version of a relativistic MOND theory (which they call RelMOND) does fit the CMB power spectrum. Here is the plot from their paper:


Note that the black line in their plot is the fit of the LCDM model to the Planck power spectrum data. Their theory does the same thing, so it necessarily fits the data as well. Indeed, a good fit appears to follow for a range of parameters. This is important, because it implies that little or no fine-tuning is needed: this is just what happens. That is arguably better than the case for LCDM, in which the fit is very fine-tuned. Indeed, that precision was a large part of the point of making the measurement, as the fit requires a very specific set of parameters in order to work. It also leads to tensions with independent measurements of the Hubble constant, the baryon density, and the amplitude of the matter power spectrum at low redshift.

As with any good science result, this one raises a host of questions. It will take time to explore these. But this in itself is a momentous result. Irrespective of whether RelMOND is the right theory or, like TeVeS, just a step on a longer path, it shows that the impossible is in fact possible. The argument that I have heard repeated by cosmologists ad nauseam like a rosary prayer, that dark matter is the only conceivable way to explain the CMB power spectrum, is simply WRONG.

A Philosophical Approach to MOND

A Philosophical Approach to MOND is a new book by David Merritt. This is a major development in both the science of cosmology and astrophysics, on the one hand, and the philosophy and history of science on the other. It should be required reading for anyone interested in any of these topics.

For many years, David Merritt was a professor of astrophysics who specialized in gravitational dynamics, leading a number of breakthroughs in the effects of supermassive black holes in galaxies on the orbits of stars around them. He has since transitioned to the philosophy of science. This may not sound like a great leap, but it is: these are different scholarly fields, each with their own traditions, culture, and required background education. Changing fields like this is a bit like switching boats mid-stream: even a strong swimmer may flounder in the attempt given the many boulders academic disciplines traditionally place in the stream of knowledge to mark their territory. Merritt has managed the feat with remarkable grace, devouring the background reading and coming up to speed in a different discipline to the point of a lucid fluency.

For the most part, practicing scientists have little interaction with philosophers and historians of science. Worse, we tend to have little patience for them. The baseline presumption of many physical scientists is that we know what we’re doing; there is nothing the philosophers can teach us. In the daily practice of what Kuhn called normal science, this is close to true. When instead we are faced with potential paradigm shifts, the philosophy of science is critical, and the absence of training in it on the part of many scientists becomes glaring.

In my experience, most scientists seem to have heard of Popper and Kuhn. If that. Physical scientists will almost always pay lip service to Popper’s ideal of falsifiability, and that’s pretty much the extent of it. Actually applying that ideal is another matter. If an idea that is near and dear to their hearts and careers is under threat, the knee-jerk response is more commonly “let’s not get carried away!”

There is more to the philosophy of science than that. The philosophers of science have invested lots of effort in considering both how science works in practice (e.g., Kuhn) and how it should work (Popper, Lakatos, …). The practice and the ideal of science are not always the same thing.

The debate about dark matter and MOND hinges on the philosophy of science in a profound way. I do not think it is possible to make real progress out of our current intellectual morass without a deep examination of what science is and what it should be.

Merritt takes us through the methodology of scientific research programs, spelling out what we’ve learned from past experience (the history of science) and from careful consideration of how science should work (its philosophical basis). For example, all scientists agree that it is important for a scientific theory to have predictive power. But we are disturbingly fuzzy on what that means. I frequently hear my colleagues say things like “my theory predicts that” in reference to some observation, when in fact no such prediction was made in advance. What they usually mean is that it fits well with the theory. This is sometimes true – they could have predicted the observation in advance if they had considered that particular case. But sometimes it is retroactive fitting more than prediction – consistency, perhaps, but it could have gone a number of other ways equally well. Worse, it is sometimes a post facto assertion that is simply false: not only was the prediction not made in advance, but the observation was genuinely surprising at the time it was made. Only in retrospect is it “correctly” “predicted.”

The philosophers have considered these situations. One thing I appreciate is Merritt’s review of the various takes philosophers have on what counts as a prediction. I wish I had known these things when I wrote the recent review in which I took a very restrictive definition to avoid the foible above. The philosophers provide better definitions, of which more than one can be usefully applicable. I’m not going to go through them here: you should read Merritt’s book, and those of the philosophers he cites.

From this philosophical basis, Merritt makes a systematic, dare I say, scientific, analysis of the basic tenets of MOND and MONDian theories, and how they fare with regard to their predictions and observational tests. Along the way, he also considers the same material in the light of the dark matter paradigm. Of comparable import to confirmed predictions are surprising observations: if a new theory predicts that the sun will rise in the morning, that isn’t either new or surprising. If instead a theory expects one thing but another is observed, that is surprising, and it counts against that theory even if it can be adjusted to accommodate the new fact. I have seen this happen over and over with dark matter: surprising observations (e.g., the absence of cusps in dark matter halos, the small numbers of dwarf galaxies, downsizing in which big galaxies appear to form earliest) are at first ignored, doubted, debated, then partially explained with some mental gymnastics until it is Known and of course, we knew it all along. Merritt explicitly points out examples of this creeping determinism, in which scientists come to believe they predicted something they merely rationalized post-facto (hence the preeminence of genuinely a priori predictions that can’t be fudged).

Merritt’s book is also replete with examples of scientists failing to take alternatives seriously. This is natural: we have invested an enormous amount of time developing physical science to the point we have now reached; there is an enormous amount of background material that cannot simply be ignored or discarded. All too often, we are confronted with crackpot ideas that do exactly this. This makes us reluctant to consider ideas that sound crazy on first blush, and most of us will rightly display considerable irritation when asked to do so. For reasons both valid and not, MOND skirts this boundary. I certainly didn’t take it seriously myself, nor really consider it at all, until its predictions came true in my own data. It was so far below my radar that at first I did not even recognize that this is what had happened. But I did know I was surprised; what I was seeing did not make sense in terms of dark matter. So, from this perspective, I can see why other scientists are quick to dismiss it. I did so myself, initially. I was wrong to do so, and so are they.

A common failure mode is to ignore MOND entirely: despite dozens of confirmed predictions, it simply remains off the radar for many scientists. They seem never to have given it a chance, so they simply don’t pay attention when it gets something right. This is pure ignorance, which is not a strong foundation from which to render a scientific judgement.

Another common reaction is to acknowledge then dismiss. Merritt provides many examples where eminent scientists do exactly this with a construction like: “MOND correctly predicted X but…” where X is a single item, as if this is the only thing that [they are aware that] it does. Put this way, it is easy to dismiss – a common refrain I hear is “MOND fits rotation curves but nothing else.” This is a long-debunked falsehood that is asserted and repeated until it achieves the status of common knowledge within the echo chamber of scientists who refuse to think outside the dark matter box.

This is where the philosophy of science is crucial to finding our way forward. Merritt’s book illuminates how this is done. If you are reading these words, you owe it to yourself to read his book.