Big Trouble in a Deep Void

The following is a guest post by Indranil Banik, Moritz Haslbauer, and Pavel Kroupa (bios at end) based on their new paper

Modifying gravity to save cosmology

Cosmology is currently in a major crisis because of many severe tensions, the most serious and well-known being that local observations of how quickly the Universe is expanding (the so-called ‘Hubble constant’) exceed the prediction of the standard cosmological model, ΛCDM. This prediction is based on the cosmic microwave background (CMB), the most ancient light we can observe – which is generally thought to have been emitted about 400,000 years after the Big Bang. For ΛCDM to fit the pattern of fluctuations observed in the CMB by the Planck satellite and other experiments, the Hubble constant must have a particular value of 67.4 ± 0.5 km/s/Mpc. Local measurements are nearly all above this ‘Planck value’, but are consistent with each other. In our paper, we adopt a local value of 73.8 ± 1.1 km/s/Mpc based on a combination of supernovae and gravitationally lensed quasars, two particularly precise yet independent techniques.
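As a rough, back-of-the-envelope check of how large this discrepancy is (an illustration, not a calculation from the paper), one can combine the two quoted uncertainties in quadrature, assuming they are independent and Gaussian:

```python
# Back-of-the-envelope size of the Hubble tension from the numbers quoted above,
# assuming independent Gaussian errors (a simplification of the real analysis).
import math

planck_H0, planck_err = 67.4, 0.5   # km/s/Mpc, LCDM prediction from Planck CMB fits
local_H0, local_err = 73.8, 1.1     # km/s/Mpc, supernovae + lensed quasars

sigma = (local_H0 - planck_H0) / math.sqrt(planck_err**2 + local_err**2)
print(f"Hubble tension: {sigma:.1f} sigma")   # roughly 5 sigma
```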

This unexpectedly rapid local expansion of the Universe could be due to us residing in a huge underdense region, or void. However, a void wide and deep enough to explain the Hubble tension is not possible in ΛCDM, which is built on Einstein’s theory of gravity, General Relativity. Still, there is quite strong evidence that we are indeed living within a large void with a radius of about 300 Mpc, or one billion light years. This evidence comes from many surveys covering the whole electromagnetic spectrum, from radio to X-rays. The most compelling evidence comes from analysis of galaxy number counts in the near-infrared, giving the void its name of the Keenan-Barger-Cowie (KBC) void. Gravity from the denser matter outside the void would pull outwards on galaxies inside it more strongly than the sparse matter within pulls back, making the Universe appear to expand faster than it actually does for an observer inside the void. This ‘Hubble bubble’ scenario (depicted in Figure 1) could solve the Hubble tension, a possibility considered – and rejected – in several previous works (e.g. Kenworthy+ 2019). We will return to their objections against this idea.

Figure 1: Illustration of the Universe’s large scale structure. The darker regions are voids, and the bright dots represent galaxies. The arrows show how gravity from surrounding denser regions pulls outwards on galaxies in a void. If we were living in such a void (as indicated by the yellow star), the Universe would expand faster locally than it does on average. This could explain the Hubble tension. Credit: Technology Review

One of the main objections seemed to be that since such a large and deep void is incompatible with ΛCDM, it can’t exist. This is a common way of thinking, but the problem with it was clear to us from a very early stage. The first part of this logic is sound – assuming General Relativity, a hot Big Bang, and that the state of the Universe at early times is apparent in the CMB (i.e. it was flat and almost homogeneous then), we are led to the standard flat ΛCDM model. By studying the largest suitable simulation of this model (called MXXL), we found that it should be completely impossible to find ourselves inside a void with the observed size and depth (or fractional underdensity) of the KBC void – this possibility can be rejected with more confidence than the discovery of the Higgs boson when first announced. We therefore applied one of the leading alternative gravity theories called Milgromian Dynamics (MOND), a controversial idea developed in the early 1980s by Israeli physicist Mordehai Milgrom. We used MOND (explained in a simple way here) to evolve a small density fluctuation forwards from early times, studying if 13 billion years later it fits the density and velocity field of the local Universe. Before describing our results, we briefly introduce MOND and explain how to use it in a potentially viable cosmological framework. Astronomers often assume MOND cannot be extended to cosmological scales (typically >10 Mpc), which is probably true without some auxiliary assumptions. This is also the case for General Relativity, though in that case the scale where auxiliary assumptions become crucial is only a few kpc, namely in galaxies.

MOND was originally designed to explain why galaxies rotate faster in their outskirts than they should if one applies General Relativity to their luminous matter distribution. This discrepancy gave rise to the idea of dark matter halos around individual galaxies. For dark matter to cluster on such scales, it would have to be ‘cold’, or equivalently consist of rather heavy particles (above a few thousand eV/c2, or a millionth of a proton mass). Any lighter and the gravity from galaxies could not hold on to the dark matter. MOND assumes these speculative and unexplained cold dark matter haloes do not exist – the need for them is after all dependent on the validity of General Relativity. In MOND, once the gravity from any object falls below a certain very low threshold called a0, it declines more gradually with increasing distance, following an inverse distance law instead of the usual inverse square law. MOND has successfully predicted many galaxy rotation curves, highlighting some remarkable correlations with their visible mass. This is unexpected if they mostly consist of invisible dark matter with quite different properties to visible mass. The Local Group satellite galaxy planes also strongly favour MOND over ΛCDM, as explained using the logic of Figure 2 and in this YouTube video.
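To make the inverse-distance behaviour concrete, here is a minimal numerical sketch (not taken from the paper) of the rotation curve of a point mass in MOND, using the commonly adopted ‘simple’ interpolating function as an assumption:

```python
# Toy rotation curve of a point mass in MOND: below a0 the effective gravity
# follows an inverse-distance law, so the circular speed becomes flat.
# Uses the 'simple' interpolating function mu(x) = x/(1+x) as an assumption.
import numpy as np

G = 6.674e-11          # m^3 kg^-1 s^-2
a0 = 1.2e-10           # m s^-2, the MOND acceleration threshold
M = 1e11 * 1.989e30    # kg, a ~10^11 solar-mass galaxy treated as a point mass

r = np.logspace(19, 22, 200)                     # ~0.3 kpc to ~300 kpc, in metres
gN = G * M / r**2                                # Newtonian gravity of the visible mass
g = 0.5 * (gN + np.sqrt(gN**2 + 4 * gN * a0))    # MOND gravity for the simple mu(x)

v_newton = np.sqrt(gN * r) / 1e3                 # circular speed in km/s
v_mond = np.sqrt(g * r) / 1e3                    # tends to (G*M*a0)**0.25 at large r
print(f"Asymptotic MOND speed: {(G * M * a0)**0.25 / 1e3:.0f} km/s")
print(f"At the largest radius: Newton {v_newton[-1]:.0f} km/s, MOND {v_mond[-1]:.0f} km/s")
```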

Figure 2: the satellite galaxies of the Milky Way and Andromeda mostly lie within thin planes. These are difficult to form unless the galaxies in them are tidal dwarfs born from the interaction of two major galaxies. Since tidal dwarfs should be free of dark matter due to the way they form, the satellites in the satellite planes should have rather weak self-gravity in ΛCDM. This is not the case as measured from their high internal velocity dispersions. So the extra gravity needed to hold galaxies together should not come from dark matter that can in principle be separated from the visible.

To extend MOND to cosmology, we used what we call the νHDM framework (with ν pronounced “nu”), originally proposed by Angus (2009). In this model, the cold dark matter of ΛCDM is replaced by the same total mass in sterile neutrinos with a mass of only 11 eV/c2, almost a billion times lighter than a proton. Their low mass means they would not clump together in galaxies, consistent with the original idea of MOND to explain galaxies with only their visible mass. This makes the extra collisionless matter ‘hot’, hence the name of the model. But this collisionless matter would exist inside galaxy clusters, helping to explain unusual configurations like the Bullet Cluster and the unexpectedly strong gravity (even in MOND) in quieter clusters. Considering the universe as a whole, νHDM has the same overall matter content as ΛCDM. This makes the overall expansion history of the universe very similar in both models, so both can explain the amounts of deuterium and helium produced in the first few minutes after the Big Bang. They should also yield similar fluctuations in the CMB because both models contain the same amount of dark matter. These fluctuations would get somewhat blurred by sterile neutrinos of such a low mass due to their rather fast motion in the early Universe. However, it has been demonstrated that Planck data are consistent with dark matter particles more massive than 10 eV/c2. Crucially, we showed that the density fluctuations evident in the CMB typically yield a gravitational field strength of 21 a0 (correcting an earlier erroneous estimate of 570 a0 in the above paper), making the gravitational physics nearly identical to General Relativity. Clearly, the main lines of early Universe evidence used to argue in favour of ΛCDM are not sufficiently unique to distinguish it from νHDM (Angus 2009).

The models nonetheless behave very differently later on. We estimated that for redshifts below about 50 (when the Universe is older than about 50 million years), the gravity would typically fall below a0 thanks to the expansion of the Universe (the CMB comes from a redshift of 1100). After this ‘MOND moment’, both the ordinary matter and the sterile neutrinos would clump on large scales just like in ΛCDM, but there would also be the extra gravity from MOND. This would cause structures to grow much faster (Figure 3), allowing much wider and deeper voids.
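A rough scaling argument (a simplification, not the paper's full calculation) shows why the ‘MOND moment’ lands near redshift 50: in the matter-dominated era the peculiar gravity from a comoving density fluctuation scales roughly in proportion to 1+z, so a typical field of about 21 a0 at the CMB drops to a0 at a redshift of order 50.

```python
# Rough scaling for the 'MOND moment': the peculiar gravity of a comoving
# perturbation goes as g ~ G*(rho_bar*delta)*lambda_phys, with rho_bar ~ (1+z)^3,
# lambda_phys ~ 1/(1+z) and linear growth delta ~ 1/(1+z) in matter domination,
# so g ~ (1+z). This is an illustrative simplification, not the paper's calculation.
z_cmb = 1100          # redshift of the CMB
g_cmb_in_a0 = 21      # typical field strength at the CMB in units of a0 (quoted above)

z_mond = (1 + z_cmb) / g_cmb_in_a0 - 1   # redshift at which the field drops to a0
print(f"Gravity falls below a0 near z ~ {z_mond:.0f}")   # ~51, consistent with 'about 50'
```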


Figure 3: Evolution of the density contrast within a 300 co-moving Mpc sphere in different Newtonian (red) and MOND (blue) models, shown as a function of the Universe’s size relative to its present size (this changes almost linearly with time). Notice the much faster structure growth in MOND. The solid blue line uses a time-independent external field on the void, while the dot-dashed blue line shows the effect of a stronger external field in the past. This requires a deeper initial void to match present-day observations.

We used this basic framework to set up a dynamical model of the void. By making various approximations and trying different initial density profiles, we were able to simultaneously fit the apparent local Hubble constant, the observed density profile of the KBC void, and many other observables like the acceleration parameter, which we come to below. We also confirmed previous results that the same observables rule out standard cosmology at 7.09σ significance. This is much more than the typical threshold of 5σ used to claim a discovery in cases like the Higgs boson, where the results agree with prior expectations.

One objection to our model was that a large local void would cause the apparent expansion of the Universe to accelerate at late times. Equivalently, observations that go beyond the void should see a standard Planck cosmology, leading to a step-like behaviour near the void edge. At stake is the so-called acceleration parameter q0 (which we defined oppositely to convention to correct a historical error). In ΛCDM, we expect q0 = 0.55, while in general much higher values are expected in a Hubble bubble scenario. The objection of Kenworthy+ (2019) was that since the observed q0 is close to 0.55, there is no room for a void. However, their data analysis fixed q0 to the ΛCDM expectation, thereby removing any hope of discovering a deviation that might be caused by a local void. Other analyses (e.g. Camarena & Marra 2020b) which do not make such a theory-motivated assumption find q0 = 1.08, which is quite consistent with our best-fitting model (Figure 4). We also discussed other objections to a large local void, for instance the Wu & Huterer (2017) paper which did not consider a sufficiently large void, forcing the authors to consider a much deeper void to try and solve the Hubble tension. This led to some serious observational inconsistencies, but a larger and shallower void like the observed KBC void seems to explain the data nicely. In fact, combining all the constraints we applied to our model, the overall tension is only 2.53σ, meaning the data have a 1.14% chance of arising if ours were the correct model. The actual observations are thus not the most likely consequence of our model, but could plausibly arise if it were correct. Given also the high likelihood that some if not all of the observational errors we took from publications are underestimates, this is actually a very good level of consistency.
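To translate the sigma values quoted above into probabilities, a two-tailed Gaussian conversion (standard statistics, not code from the paper) reproduces the numbers:

```python
# Converting the quoted sigma values into two-tailed Gaussian probabilities.
from scipy.stats import norm

def two_tailed_p(sigma):
    return 2 * norm.sf(sigma)

print(f"2.53 sigma -> p = {two_tailed_p(2.53):.2%}")   # ~1.14%, as quoted above
print(f"5.00 sigma -> p = {two_tailed_p(5.00):.1e}")   # ~5.7e-07, the 'discovery' threshold
print(f"7.09 sigma -> p = {two_tailed_p(7.09):.1e}")   # ~1.3e-12, the quoted LCDM rejection
```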

Figure 4: The predicted local Hubble constant (x-axis) and acceleration parameter (y-axis) as measured with local supernovae (black dot, with red error ellipses). Our best-fitting models with different initial void density profiles (blue symbols) can easily explain the observations. However, there is significant tension with the prediction of ΛCDM based on parameters needed to fit Planck observations of the CMB (green dot). In particular, local observations favour a higher acceleration parameter, suggestive of a local void.

Unlike other attempts to solve the Hubble tension, ours is unique in using an already existing theory (MOND) developed for a different reason (galaxy rotation curves). The use of unseen collisionless matter made of hypothetical sterile neutrinos is still required to explain the properties of galaxy clusters, which otherwise do not sit well with MOND. In addition, these neutrinos provide an easy way to explain the CMB and background expansion history, though recently Skordis & Zlosnik (2020) showed that this is possible in MOND with only ordinary matter. In any case, MOND is a theory of gravity, while dark matter is a hypothesis that more matter exists than meets the eye. The ideas could both be right, and should be tested separately.

A dark matter-MOND hybrid thus appears to be a very promising way to resolve the current crisis in cosmology. Still, more work is required to construct a fully-fledged relativistic MOND theory capable of addressing cosmology. This could build on the theory proposed by Skordis & Zlosnik (2019) in which gravitational waves travel at the speed of light, a constraint previously considered to be a major difficulty for MOND. We argued that such a theory would enhance structure formation to the required extent under a wide range of plausible theoretical assumptions, but this needs to be shown explicitly starting from a relativistic MOND theory. Cosmological structure formation simulations are certainly required in this scenario – these are currently under way in Bonn. Further observations would also help greatly, especially of the matter density in the outskirts of the KBC void at distances of about 500 Mpc. This could hold vital clues to how quickly the void has grown, helping to pin down the behaviour of the sought-after MOND theory.

There is now a very real prospect of obtaining a single theory that works across all astronomical scales, from the tiniest dwarf galaxies up to the largest structures in the Universe & its overall expansion rate, and from a few seconds after the birth of the Universe until today. Rather than argue whether this theory looks more like MOND or standard cosmology, what we should really do is combine the best elements of both, paying careful attention to all observations.


Authors

Indranil Banik is a Humboldt postdoctoral fellow in the Helmholtz Institute for Radiation and Nuclear Physics (HISKP) at the University of Bonn, Germany. He did his undergraduate and masters at Trinity College, Cambridge, and his PhD at Saint Andrews under Hongsheng Zhao. His research focuses on testing whether gravity continues to follow the Newtonian inverse square law at the low accelerations typical of galactic outskirts, with MOND being the best-developed alternative.

Moritz Haslbauer is a PhD student at the Max Planck Institute for Radio Astronomy (MPIfR) in Bonn. He obtained his undergraduate degree from the University of Vienna and his masters from the University of Bonn. He works on the formation and evolution of galaxies and their distribution in the local Universe in order to test different cosmological models and gravitational theories. Prof. Pavel Kroupa is his PhD supervisor.

Pavel Kroupa is a professor at the University of Bonn and professorem hospitem at Charles University in Prague. He went to school in Germany and South Africa, studied physics in Perth, Australia, and obtained his PhD at Trinity College, Cambridge, UK. He researches stellar populations and their dynamics as well as the dark matter problem, therewith testing gravitational theories and cosmological models.

Link to the published science paper.

YouTube video on the paper

Contact: ibanik@astro.uni-bonn.de.

Indranil Banik’s YouTube channel.

Cosmology, then and now

I have been busy teaching cosmology this semester. When I started on the faculty of the University of Maryland in 1998, there was no advanced course on the subject. This seemed like an obvious hole to fill, so I developed one. I remember with fond bemusement the senior faculty, many of them planetary scientists, sending Mike A’Hearn as a stately ambassador to politely inquire if cosmology had evolved beyond a dodgy subject and was now rigorous enough to be worthy of a 3 credit graduate course.

Back then, we used transparencies or wrote on the board. It was novel to have a course web page. I still have those notes, and marvel at the breadth and depth of work performed by my younger self. Now that I’m teaching it for the first time in a decade, I find it challenging to keep up. Everything has to be adapted to an electronic format, and be delivered remotely during this damnable pandemic. It is a less satisfactory experience, and it has precluded posting much here.

Another thing I notice is that attitudes have evolved along with the subject. The baseline cosmology, LCDM, has not changed much. We’ve tilted the power spectrum and spiked it with extra baryons, but the basic picture is that which emerged from the application of classical observational cosmology – measurements of the Hubble constant, the mass density, the ages of the oldest stars, the abundances of the light elements, number counts of faint galaxies, and a wealth of other observational constraints built up over decades of effort. Here is an example of combining such constraints, an exercise I have students do every time I teach the course:

Observational constraints in the mass density-Hubble constant plane assembled by students in my cosmology course in 2002. The gray area is excluded. The open window is the only space allowed; this is LCDM. The box represents the first WMAP estimate in 2003. CMB estimates have subsequently migrated out of the allowed region to lower H0 and higher mass density, but the other constraints have not changed much, most famously H0, which remains entrenched in the low to mid-70s.

These things were known by the mid-90s. Nowadays, people seem to think Type Ia SN discovered Lambda, when really they were just icing on a cake that was already baked. The location of the first peak in the acoustic power spectrum of the microwave background was corroborative of the flat geometry required by the picture that had developed, but trailed the development of LCDM rather than informing its construction. But students entering the field now seem to have been given the impression that these were the only observations that mattered.

Worse, they seem to think these things are Known, as if there’s never been a time that we cosmologists have been sure about something only to find later that we had it quite wrong. This attitude is deleterious to the progress of science, as it precludes us from seeing important clues when they fail to conform to our preconceptions. To give one recent example, everyone seems to have decided that the EDGES observation of 21 cm absorption during the dark ages is wrong. The reason? Because it is impossible in LCDM. There are technical reasons why it might be wrong, but these are subsidiary to Attitude: we can’t believe it’s true, so we don’t. But that’s what makes a result important: something that makes us reexamine how we perceive the universe. If we’re unwilling to do that, we’re no longer doing science.

Second peak bang on

At the dawn of the 21st century, we were pretty sure we had solved cosmology. The Lambda Cold Dark Matter (LCDM) model made strong predictions for the power spectrum of the Cosmic Microwave Background (CMB). One was that the flat Robertson-Walker geometry that we were assuming for LCDM predicted the location of the first peak should be at ℓ = 220. As I discuss in the history of the rehabilitation of Lambda, this was a genuinely novel prediction that was clearly confirmed first by BOOMERanG and subsequently by many other experiments, especially WMAP. As such, it was widely (and rightly) celebrated among cosmologists. The WMAP team has been awarded major prizes, including the Gruber cosmology prize and the Breakthrough prize.

As I discussed in the previous post, the location of the first peak was not relevant to the problem I had become interested in: distinguishing whether dark matter existed or not. Instead, it was the amplitude of the second peak of the acoustic power spectrum relative to the first that promised a clear distinction between LCDM and the no-CDM ansatz inspired by MOND. This was also first tested by BOOMERanG:

The CMB power spectrum observed by BOOMERanG in 2000. The first peak is located exactly where LCDM predicted it to be. The second peak was not detected, but was clearly smaller than expected in LCDM. It was consistent with the prediction of no-CDM.

In a nutshell, LCDM predicted a big second peak while no-CDM predicted a small second peak. Quantitatively, the amplitude ratio A1:2 was predicted to be in the range 1.54 – 1.83 for LCDM, and 2.22 – 2.57 for no-CDM. Note that A1:2 is smaller for LCDM because the second peak is relatively big compared to the first. 

BOOMERanG confirmed the major predictions of both competing theories. The location of the first peak was exactly where it was expected to be for a flat Robertson-Walker geometry. The amplitude of the second peak was that expected in no-CDM. One can have the best of both worlds by building a model with high Lambda and no CDM, but I don’t take that too seriously: Lambda is just a place holder for our ignorance – in either theory.

I had made this prediction in the hopes that cosmologists would experience the same crisis of faith that I had when MOND appeared in my data. Now it was the data that they valued that was misbehaving – in precisely the way I had predicted with a model that was motivated by MOND (albeit not MOND itself). Surely they would see reason?

There is a story that Diogenes once wandered the streets of Athens with a lamp in broad daylight in search of an honest man. I can relate. Exactly one member of the CMB community wrote to me to say “Gee, I was wrong to dismiss you.” [I paraphrase only a little.] When I had the opportunity to point out to them that I had made this prediction, the most common reaction was “no you didn’t.” Exactly one of the people with whom I had this conversation actually bothered to look up the published paper, and that person also wrote to say “Gee, I guess you did.” Everyone else simply ignored it.

The sociology gets worse from here. There developed a counter-narrative that the BOOMERanG data were wrong, therefore my prediction fitting it was wrong. No one asked me about it; I learned of it in a chance conversation a couple of years later in which it was asserted as common knowledge that “the data changed on you.” Let’s examine this statement.

The BOOMERanG data were early, so you expect data to improve. At the time, I noted that the second peak “is only marginally suggested by the data so far”, so I said that “as data accumulate, the second peak should become clear.” It did.

The predicted range quoted above is rather generous. It encompassed the full variation allowed by Big Bang Nucleosynthesis (BBN) at the time (1998/1999). I intentionally considered the broadest range of parameters that were plausible to be fair to both theories. However, developments in BBN were by then disfavoring low-end baryon densities, so the real expectation for the predicted range was narrower. Excluding implausibly low baryon densities, the predicted ranges were 1.6 – 1.83 for LCDM and 2.36 – 2.4 for no-CDM. Note that the prediction of no-CDM is considerably more precise than that of LCDM. This happens because all the plausible models run together in the absence of the forcing term provided by CDM. For hypothesis testing, this is great: the ratio has to be this one value, and only this value.

A few years later, WMAP provided a much more accurate measurement of the peak locations and amplitudes. WMAP measured A1:2 = 2.34 ± 0.09. This is bang on the no-CDM prediction of 2.4.
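The comparison is simple enough to check in a couple of lines, assuming a Gaussian error on the measured ratio (a rough sketch, not the original analysis):

```python
# Consistency of the WMAP measurement with the two a priori ranges quoted above,
# treating the measurement error as Gaussian.
a12, err = 2.34, 0.09        # WMAP measurement of A1:2
no_cdm_range = (2.36, 2.40)  # no-CDM prediction (plausible BBN baryon densities)
lcdm_range = (1.60, 1.83)    # LCDM prediction for 1999-era parameters

print(f"Gap to the no-CDM range: {(no_cdm_range[0] - a12) / err:.1f} sigma")         # ~0.2
print(f"Excess over the LCDM upper bound: {(a12 - lcdm_range[1]) / err:.1f} sigma")  # ~5.7
```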

Peak locations measured by WMAP in 2003 (points) compared to the a priori (1999) predictions of LCDM (red tone lines) and no-CDM (blue tone lines).

The prediction for the amplitude ratio A1:2 that I made over twenty years ago remains correct in the most recent CMB data. The same model did not successfully predict the third peak, but I didn’t necessarily expect it to: the no-CDM ansatz (which is just General Relativity without cold dark matter) had to fail at some point. But that gets ahead of the story: no-CDM made a very precise prediction for the second peak. LCDM did not.

LCDM only survives because people were willing to disregard existing bounds – in this case, on the baryon density. It was easier to abandon the most accurately measured and the only over-constrained pillar of Big Bang cosmology than acknowledge a successful prediction that respected all those things. For a few years, the attitude was “BBN was close, but not quite right.” In time, what appears to be confirmation bias kicked in, and the measured abundances of the light elements migrated towards the “right” value – as specified by CMB fits.

LCDM does give an excellent fit to the power spectrum of the CMB. However, only the location of the first peak was predicted correctly in advance. Everything subsequent to that (at higher ℓ) is the result of a multi-parameter fit with sufficient flexibility to accommodate any physically plausible power spectrum. However, there is no guarantee that the parameters of the fit will agree with independent data. For a long while they did, but now we see the emergence of tensions in not only the baryon density, but also the amplitude of the power spectrum, and most famously, the value of the Hubble constant. Perhaps this is the level of accuracy that is necessary to begin to perceive genuine anomalies. Beyond the need to invoke invisible entities in the first place.

I could say a lot more, and perhaps will in future. For now, I’d just like to emphasize that I made a very precise, completely novel prediction for the amplitude of the second peak. That prediction came true. No one else did that. Heck of a coincidence, if there’s nothing to it.

A pre-history of the prediction of the amplitude of the second peak of the cosmic microwave background

In the previous post, I wrote about a candidate parent relativistic theory for MOND that could fit the acoustic power spectrum of the cosmic microwave background (CMB). That has been a long time coming, and probably is not the end of the road. There is a long and largely neglected history behind this, so let’s rewind a bit.

I became concerned about the viability of the dark matter paradigm in the mid-1990s. Up until that point, I was a True Believer, as much as anyone. Clearly, there had to be dark matter, specifically some kind of non-baryonic cold dark matter (CDM), and almost certainly a WIMP. Alternatives like MACHOs (numerous brown dwarfs) were obviously wrong (Big Bang Nucleosynthesis [BBN] taught us that there are not enough baryons), so microlensing experiments searching for them would make great variable star catalogs but had no chance of detecting dark matter. In short, I epitomized the impatient attitude against non-WIMP alternatives that persists throughout much of the community to this day.

It thus came as an enormous surprise that the only theory to successfully predict – in advance – our observations of low surface brightness galaxies was MOND. Dark matter, as we understood it at the time, predicted nothing of the sort. This made me angry.


How could it be so?

To a scientist, a surprising result is a sign to think again. Maybe we do not understand this thing we thought we understood. Is it merely a puzzle – some mistake in our understanding or implementation of our preferred theory? Or is it a genuine anomaly – an irrecoverable failure? How is it that a completely different theory can successfully predict something that my preferred theory did not?

In this period, I worked very hard to make things work out for CDM. It had to be so! Yet every time I thought I had found a solution, I realized that I had imposed an assumption that guaranteed the desired result. I created and rejected tautology after tautology. This process unintentionally foretold the next couple of decades of work in galaxy formation theory: I’ve watched others pursue the same failed ideas and false leads over and over and over again.

After months of pounding my head against the proverbial wall, I realized that if I was going to remain objective, I shouldn’t just be working on dark matter. I should also try just to see how things worked in MOND. Suddenly I found myself working much less hard. The things that made no sense in terms of dark matter tumbled straight out of MOND.

This concerned me gravely. Could we really be so wrong about something so important? I went around giving talks, expressing the problems about which I was concerned, and asking how it could be that MOND got so many novel predictions correct in advance if there was nothing to it.

Reactions varied. The first time I mentioned it in a brief talk at the Institute of Astronomy in Cambridge, friend and fellow postdoc Adi Nusser became visibly agitated. He bolted outside as soon as I was done, and I found him shortly afterwards with a cigarette turned mostly to ash as if in one long draw. I asked him what he thought and he replied that he was “NOT HAPPY!” Neither was I. It made no sense.

I first spoke at length on the subject in a colloquium at the Department of Terrestrial Magnetism, where Vera Rubin worked, along with other astronomers and planetary scientists. I was concerned about how Vera would react, so I was exceedingly thorough, spending most of the time on the dark matter side of the issue. She reacted extremely well, as did the rest of the audience, many telling me it was the best talk they had heard in five years. (I have heard this many times since; apparently 5 years is some sort of default for a long time that is short of forever.)

Several months later, I gave the same talk at the University of Pennsylvania to an audience of mostly particle physicists and early-universe cosmologists. A rather different reaction ensued. One person shouted “WHAT HAVE YOU DONE WRONG!” It wasn’t a question.

These polar opposite reactions from different scientific audiences made me realize that sociology was playing a role. As I continued to give the talk to other groups, the pattern above repeated, with the reception being more positive the further an audience was from cosmology.

I started asking people what would concern them about the paradigm. What would falsify CDM? Sometimes this brought bemused answers, like that of Tad Pryor: “CDM has been falsified many times.” (This was in 1997, at which time CDM meant standard SCDM which was indeed pretty thoroughly falsified at that point: we were on the cusp of the transition to LCDM.) More often it met with befuddlement: “Why would you even ask that?” It was disappointing how often this was the answer, as a physical theory is only considered properly scientific if it is falsifiable. [All of the people who had this reaction agreed to that much: I often specifically asked.] The only thing that was clear was that most cosmologists couldn’t care less what galaxies did. Galaxies were small, non-linear entities, they argued… to the point that, as Martin Rees put it, “we shouldn’t be surprised at anything they do.”

I found this attitude to be less than satisfactory. However, I could see its origin. I only became aware of MOND because it reared its ugly head in my data. I had come face to face with the beast, and it shook my most deeply held scientific beliefs. Lacking this experience, it must have seemed to them like the proverbial boy crying wolf.

So, I started to ask cosmologists what would concern them. Again, most gave no answer; it was simply inconceivable to them that something could be fundamentally amiss. Among those who did answer, the most common refrain was “Well, if the CMB did something weird.” They never specified what they meant by this, so I set out to quantify what would be weird.

This was 1998. At that time, we knew the CMB existed (the original detection in the 1960s earning Penzias and Wilson a Nobel prize) and that there were temperature fluctuations on large scales at the level of one part in 100,000 (the long-overdue detection of said fluctuations by the COBE satellite earning Mather and Smoot another Nobel prize). Other experiments were beginning to detect the fluctuations on finer angular scales, but nothing definitive was yet known about the locations and amplitudes of the peaks that were expected in the power spectrum. However, the data were improving rapidly, and an experiment called BOOMERanG was circulating around the polar vortex of Antarctica. Daniel Eisenstein told me in a chance meeting that “The data are in the can.”

This made the issue of quantifying what was weird a pressing one. The best prediction is one that comes before the fact, totally blind to the data. But what was weird?

At the time, there was no flavor of relativistic MOND yet in existence. But we know that MOND is indistinguishable from Newton in the limit of high accelerations, and whatever theory contains MOND in the appropriate limit must also contain General Relativity. So perhaps the accelerations in the early universe when the CMB occurred were high enough that MOND effects did not yet occur. This isn’t necessarily the case, but making this ansatz was the only way to proceed at that time. Then it was just General Relativity with or without dark matter. That’s what was weird: no dark matter. So what difference did that make?

Using the then-standard code CMBFAST, I computed predictions for the power spectrum for two families of models: LCDM and no-CDM. The parameters of LCDM were already well known at that time. There was even an imitation of the Great Debate about it between Turner and Peebles, though it was more consensus than debate. This enabled a proper prediction of what the power spectrum should be.
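CMBFAST itself is no longer maintained, but the same exercise can be sketched today with its successor, CAMB. The snippet below is only an illustration with round, modern-looking parameters, not the 1999 computation; the exact peak ratios depend on the parameter choices.

```python
# Sketch of the LCDM vs. no-CDM comparison using CAMB (the successor to CMBFAST).
# Parameter values are illustrative round numbers, not the ones used in 1999.
import camb
from scipy.signal import find_peaks

def tt_spectrum(ombh2, omch2, H0=70.0, ns=0.96, lmax=1500):
    pars = camb.CAMBparams()
    pars.set_cosmology(H0=H0, ombh2=ombh2, omch2=omch2)
    pars.InitPower.set_params(ns=ns)
    pars.set_for_lmax(lmax, lens_potential_accuracy=0)
    results = camb.get_results(pars)
    # Total TT spectrum, l(l+1)C_l/2pi in muK^2, indexed by multipole l
    return results.get_cmb_power_spectra(pars, lmax=lmax, CMB_unit='muK')['total'][:, 0]

def first_to_second_peak_ratio(cl, lmin=50):
    band = cl[lmin:]             # skip the low-l plateau before the acoustic peaks
    peaks, _ = find_peaks(band)
    return band[peaks[0]] / band[peaks[1]]

lcdm = tt_spectrum(ombh2=0.022, omch2=0.12)   # with cold dark matter
no_cdm = tt_spectrum(ombh2=0.022, omch2=0.0)  # baryons only (use a tiny value if a
                                              # given CAMB version rejects exactly zero)
print(f"A1:2 with CDM:    {first_to_second_peak_ratio(lcdm):.2f}")
print(f"A1:2 without CDM: {first_to_second_peak_ratio(no_cdm):.2f}")
```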

Most of the interest in cosmology then concerned the geometry of the universe. We had convinced ourselves that we had to bring back Lambda, but this made a strong prediction for the location of the first peak – a prediction that was confirmed by BOOMERanG in mid-2000.

The geometry on which most cosmologists were focused was neither here nor there to the problem I had set myself. I had no idea what the geometry of a MOND universe might be, and no way to predict the locations of the peaks in the power spectrum. I had to look for relative differences, and these proved not to be all that weird. The difference between LCDM and no-CDM was, in fact, rather subtle.

The main difference I found between models with and without dark matter was a difference in the amplitude of the second peak relative to the first. As I described last time, baryons act to damp the oscillations, while dark matter acts to drive them. Take away the dark matter and there is only damping, resulting in the second peak getting dragged down. The primary variable controlling the ratio of the first-to-second peak amplitude was the baryon fraction. Without dark matter, the baryon fraction is 1. In LCDM, it was then thought to be in the range 0.05 – 0.15. (The modern value is 0.16.)

This is the prediction I published in 1999:


The red lines in the left plot represent LCDM, the blue lines in the right plot no-CDM. The data that were available at the time I wrote the paper are plotted as the lengthy error bars. The location of the first peak had sorta been localized, but nothing was yet known about the amplitude of the second. Here was a clear, genuinely a priori prediction: for a given amplitude of the first peak, the amplitude of the second was smaller without CDM than with it.

Quantitatively, the ratio of the amplitude of the first to second peak was predicted to be in the range 1.54 – 1.83 for LCDM. This range represents the full range of plausible LCDM parameters as we knew them at the time, which as I noted above, we thought we knew very well. For the case of no-CDM, the predicted range was 2.22 – 2.57. In both cases, the range of variation was dominated by the uncertainty in the baryon density from BBN. While this allowed for a little play, the two hypotheses should be easily distinguishable, since the largest ratio possible in LCDM was clearly less than the smallest possible in no-CDM.

And that is as far as I am willing to write today. This is already a long post, so we’ll return to the results of this test in the future.

A Significant Theoretical Advance

The missing mass problem has been with us for many decades now. Going on a century if you start counting from the work of Oort and Zwicky in the 1930s. Not quite half a century if we date it from the 1970s when most of the relevant scientific community started to take it seriously. Either way, that’s a very long time for a major problem to go unsolved in physics. The quantum revolution that overturned our classical view of physics was lightning fast in comparison – see the discussion of Bohr’s theory in the foundation of quantum mechanics in David Merritt’s new book.

To this day, despite tremendous efforts, we have yet to obtain a confirmed laboratory detection of a viable dark matter particle – or even a hint of persuasive evidence for the physics beyond the Standard Model of Particle Physics (e.g., supersymmetry) that would be required to enable the existence of such particles. We cannot credibly claim (as many of my colleagues insist they can) to know that such invisible mass exists. All we really know is that there is a discrepancy between what we see and what we get: the universe and the galaxies within it cannot be explained by General Relativity and the known stable of Standard Model particles.

If we assume that General Relativity is both correct and sufficient to explain the universe, which seems like a very excellent assumption, then we are indeed obliged to invoke non-baryonic dark matter. The amount of astronomical evidence that points in this direction is overwhelming. That is how we got to where we are today: once we make the obvious, eminently well-motivated assumption, then we are forced along a path in which we become convinced of the reality of the dark matter, not merely as a hypothetical convenience to cosmological calculations, but as an essential part of physical reality.

I think that the assumption that General Relativity is correct is indeed an excellent one. It has repeatedly passed many experimental and observational tests too numerous to elaborate here. However, I have come to doubt the assumption that it suffices to explain the universe. The only data that test it on scales where the missing mass problem arises is the data from which we infer the existence of dark matter. Which we do by assuming that General Relativity holds. The opportunity for circular reasoning is apparent – and frequently indulged.

It should not come as a shock that General Relativity might not be completely sufficient as a theory in all circumstances. This is exactly the motivation for and the working presumption of quantum theories of gravity. That nothing to do with cosmology will be affected along the road to quantum gravity is just another assumption.

I expect that some of my colleagues will struggle to wrap their heads around what I just wrote. I sure did. It was the hardest thing I ever did in science to accept that I might be wrong to be so sure it had to be dark matter – because I was sure it was. As sure of it as any of the folks who remain sure of it now. So imagine my shock when we obtained data that made no sense in terms of dark matter, but had been predicted in advance by a completely different theory, MOND.

When comparing dark matter and MOND, one must weigh all evidence in the balance. Much of the evidence is gratuitously ambiguous, so the conclusion to which one comes depends on how one weighs the more definitive lines of evidence. Some of this points very clearly to MOND, while other evidence prefers non-baryonic dark matter. One of the most important lines of evidence in favor of dark matter is the acoustic power spectrum of the cosmic microwave background (CMB) – the pattern of minute temperature fluctuations in the relic radiation field imprinted on the sky a few hundred thousand years after the Big Bang.

The equations that govern the acoustic power spectrum require General Relativity, but thankfully the small amplitude of the temperature variations permits them to be solved in the limit of linear perturbation theory. So posed, they can be written as a damped and driven oscillator. The power spectrum shows features corresponding to standing waves at the epoch of recombination when the universe transitioned rather abruptly from an opaque plasma to a transparent neutral gas. The edge of a cloud provides an analog: light inside the cloud scatters off the water molecules and doesn’t get very far: the cloud is opaque. Any light that makes it to the edge of the cloud meets no further resistance, and is free to travel to our eyes – which is how we perceive the edge of the cloud. The CMB is the expansion-redshifted edge of the plasma cloud of the early universe.

An easy way to think about a damped and a driven oscillator is a kid being pushed on a swing. The parent pushing the child is a driver of the oscillation. Any resistance – like the child dragging his feet – damps the oscillation. Normal matter (baryons) damps the oscillations – it acts as a net drag force on the photon fluid whose oscillations we observe. If there is nothing going on but General Relativity plus normal baryons, we should see a purely damped pattern of oscillations in which each peak is smaller than the one before it, as seen in the solid line here:

The CMB acoustic power spectrum predicted by General Relativity with no cold dark matter (line) and as observed by the Planck satellite (data points).

As one can see, the case of no Cold Dark Matter (CDM) does well to explain the amplitudes of the first two peaks. Indeed, it was the only hypothesis to successfully predict this aspect of the data in advance of its observation. The small amplitude of the second peak came as a great surprise from the perspective of LCDM. However, without CDM, there is only baryonic damping. Each peak should have a progressively lower amplitude. This is not observed. Instead, the third peak is almost the same amplitude as the second, and clearly higher than expected in the pure damping scenario of no-CDM.

CDM provides a net driving force in the oscillation equations. It acts like the parent pushing the kid. Even though the kid drags his feet, the parent keeps pushing, and the amplitude of the oscillation is maintained. For the third peak at any rate. The baryons are an intransigent child and keep dragging their feet; eventually they win and the power spectrum damps away on progressively finer angular scales (large 𝓁 in the plot).
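The swing analogy can be made quantitative with a toy damped oscillator, integrated with and without a driving term standing in for the CDM forcing. This is purely illustrative, not the actual perturbation equations:

```python
# Toy version of the swing analogy: a damped oscillator with and without a
# resonant driving term (standing in for the CDM forcing). Purely illustrative.
import numpy as np
from scipy.integrate import solve_ivp

def oscillator(t, y, gamma, omega0, drive):
    x, v = y
    force = drive * np.cos(omega0 * t)                  # the parent pushing the swing
    return [v, -2 * gamma * v - omega0**2 * x + force]  # damping: the kid dragging his feet

t = np.linspace(0, 60, 2000)
for drive, label in [(0.0, "damped only (baryons alone)"),
                     (0.5, "damped + driven (CDM-like forcing)")]:
    sol = solve_ivp(oscillator, (t[0], t[-1]), [1.0, 0.0], t_eval=t,
                    args=(0.1, 1.0, drive))
    late_amp = np.max(np.abs(sol.y[0][-300:]))          # amplitude near the end of the run
    print(f"{label}: late-time amplitude ~ {late_amp:.2f}")
```

The damping-only case dies away, while the driven case settles to a sustained amplitude: the same qualitative difference as between the no-CDM and CDM power spectra at the third peak.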

As I wrote in this review, the excess amplitude of the third peak over the no-CDM prediction is the best evidence to my mind in favor of the existence of non-baryonic CDM. Indeed, this observation is routinely cited by many cosmologists to absolutely require dark matter. It is argued that the observed power spectrum is impossible without it. The corollary is that any problem the dark matter picture encounters is a mere puzzle. It cannot be an anomaly because the CMB tells us that CDM has to exist.

Impossible is a high standard. I hope the reader can see the flaw in this line of reasoning. It is the same as above. In order to compute the oscillation power spectrum, we have assumed General Relativity. While not replacing it, the persistent predictive successes of a theory like MOND imply the existence of a more general theory. We do not know that such a theory cannot explain the CMB until we develop said theory and work out its predictions.

That said, it is a tall order. One needs a theory that provides a significant driving term without a large amount of excess invisible mass. Something has to push the swing in a universe full of stuff that only drags its feet. That does seem nigh on impossible. Or so I thought until I heard a talk by Pedro Ferreira where he showed how the scalar field in TeVeS – the relativistic MONDian theory proposed by Bekenstein – might play the same role as CDM. However, he and his collaborators soon showed that the desired effect was indeed impossible, at least in TeVeS: one could not simultaneously fit the third peak and the data preceding the first. This was nevertheless an important theoretical development, as it showed how it was possible, at least in principle, to affect the peak ratios without massive amounts of non-baryonic CDM.

At this juncture, there are two options. One is to seek a theory that might work, and develop it to the point where it can be tested. This is a lot of hard work that is bound to lead one down many blind alleys without promise of ultimate success. The much easier option is to assume that it cannot be done. This is the option adopted by most cosmologists, who have spent the last 15 years arguing that the CMB power spectrum requires the existence of CDM. Some even seem to consider it to be a detection thereof, in which case we might wonder why we bother with all those expensive underground experiments to detect the stuff.

Rather fewer people have invested in the approach that requires hard work. There are a few brave souls who have tried it; these include Constantinos Skordis and Tom Złosnik. Very recently, they have shown that a version of a relativistic MOND theory (which they call RelMOND) does fit the CMB power spectrum. Here is the plot from their paper:


Note that the black line in their plot is the fit of the LCDM model to the Planck power spectrum data. Their theory does the same thing, so it necessarily fits the data as well. Indeed, a good fit appears to follow for a range of parameters. This is important, because it implies that little or no fine-tuning is needed: this is just what happens. That is arguably better than the case for LCDM, in which the fit is very fine-tuned. Indeed, that was a large part of the point of making the measurement, as LCDM requires a very specific set of parameters in order to work. It also leads to tensions with independent measurements of the Hubble constant, the baryon density, and the amplitude of the matter power spectrum at low redshift.

As with any good science result, this one raises a host of questions. It will take time to explore these. But this in itself is a momentous result. Irrespective of whether RelMOND is the right theory or, like TeVeS, just a step on a longer path, it shows that the impossible is in fact possible. The argument that I have heard repeated by cosmologists ad nauseam like a rosary prayer, that dark matter is the only conceivable way to explain the CMB power spectrum, is simply WRONG.

A Philosophical Approach to MOND

A Philosophical Approach to MOND is a new book by David Merritt. This is a major development in both the science of cosmology and astrophysics, on the one hand, and the philosophy and history of science on the other. It should be required reading for anyone interested in any of these topics.

For many years, David Merritt was a professor of astrophysics who specialized in gravitational dynamics, leading a number of breakthroughs in the effects of supermassive black holes in galaxies on the orbits of stars around them. He has since transitioned to the philosophy of science. This may not sound like a great leap, but it is: these are different scholarly fields, each with their own traditions, culture, and required background education. Changing fields like this is a bit like switching boats mid-stream: even a strong swimmer may flounder in the attempt given the many boulders academic disciplines traditionally place in the stream of knowledge to mark their territory. Merritt has managed the feat with remarkable grace, devouring the background reading and coming up to speed in a different discipline to the point of a lucid fluency.

For the most part, practicing scientists have little interaction with philosophers and historians of science. Worse, we tend to have little patience for them. The baseline presumption of many physical scientists is that we know what we’re doing; there is nothing the philosophers can teach us. In the daily practice of what Kuhn called normal science, this is close to true. When instead we are faced with potential paradigm shifts, the philosophy of science is critical, and the absence of training in it on the part of many scientists becomes glaring.

In my experience, most scientists seem to have heard of Popper and Kuhn. If that. Physical scientists will almost always pay lip service to Popper’s ideal of falsifiability, and that’s pretty much the extent of it. Living up to that ideal is another matter. If an idea that is near and dear to their hearts and careers is under threat, the knee-jerk response is more commonly “let’s not get carried away!”

There is more to the philosophy of science than that. The philosophers of science have invested lots of effort in considering both how science works in practice (e.g., Kuhn) and how it should work (Popper, Lakatos, …). The practice and the ideal of science are not always the same thing.

The debate about dark matter and MOND hinges on the philosophy of science in a profound way. I do not think it is possible to make real progress out of our current intellectual morass without a deep examination of what science is and what it should be.

Merritt takes us through the methodology of scientific research programs, spelling out what we’ve learned from past experience (the history of science) and from careful consideration of how science should work (its philosophical basis). For example, all scientists agree that it is important for a scientific theory to have predictive power. But we are disturbingly fuzzy on what that means. I frequently hear my colleagues say things like “my theory predicts that” in reference to some observation, when in fact no such prediction was made in advance. What they usually mean is that it fits well with the theory. This is sometimes true – they could have predicted the observation in advance if they had considered that particular case. But sometimes it is retroactive fitting more than prediction – consistency, perhaps, but it could have gone a number of other ways equally well. Worse, it is sometimes a post facto assertion that is simply false: not only was the prediction not made in advance, but the observation was genuinely surprising at the time it was made. Only in retrospect is it “correctly” “predicted.”

The philosophers have considered these situations. One thing I appreciate is Merritt’s review of the various takes philosophers have on what counts as a prediction. I wish I had known these things when I wrote the recent review in which I took a very restrictive definition to avoid the foible above. The philosophers provide better definitions, of which more than one can be usefully applicable. I’m not going to go through them here: you should read Merritt’s book, and those of the philosophers he cites.

From this philosophical basis, Merritt makes a systematic, dare I say, scientific, analysis of the basic tenets of MOND and MONDian theories, and how they fare with regard to their predictions and observational tests. Along the way, he also considers the same material in the light of the dark matter paradigm. Of comparable import to confirmed predictions are surprising observations: if a new theory predicts that the sun will rise in the morning, that is neither new nor surprising. If instead a theory expects one thing but another is observed, that is surprising, and it counts against that theory even if it can be adjusted to accommodate the new fact. I have seen this happen over and over with dark matter: surprising observations (e.g., the absence of cusps in dark matter halos, the small numbers of dwarf galaxies, downsizing in which big galaxies appear to form earliest) are at first ignored, doubted, debated, then partially explained with some mental gymnastics until each is Known and of course, we knew it all along. Merritt explicitly points out examples of this creeping determinism, in which scientists come to believe they predicted something they merely rationalized post-facto (hence the preeminence of genuinely a priori predictions that can’t be fudged).

Merritt’s book is also replete with examples of scientists failing to take alternatives seriously. This is natural: we have invested an enormous amount of time developing physical science to the point we have now reached; there is an enormous amount of background material that cannot simply be ignored or discarded. All too often, we are confronted with crackpot ideas that do exactly this. This makes us reluctant to consider ideas that sound crazy on first blush, and most of us will rightly display considerable irritation when asked to do so. For reasons both valid and not, MOND skirts this boundary. I certainly didn’t take it seriously myself, nor really consider it at all, until its predictions came true in my own data. It was so far below my radar that at first I did not even recognize that this is what had happened. But I did know I was surprised; what I was seeing did not make sense in terms of dark matter. So, from this perspective, I can see why other scientists are quick to dismiss it. I did so myself, initially. I was wrong to do so, and so are they.

A common failure mode is to ignore MOND entirely: despite dozens of confirmed predictions, it simply remains off the radar for many scientists. They seem never to have given it a chance, so they simply don’t pay attention when it gets something right. This is pure ignorance, which is not a strong foundation from which to render a scientific judgement.

Another common reaction is to acknowledge then dismiss. Merritt provides many examples where eminent scientists do exactly this with a construction like: “MOND correctly predicted X but…” where X is a single item, as if this is the only thing that [they are aware that] it does. Put this way, it is easy to dismiss – a common refrain I hear is “MOND fits rotation curves but nothing else.” This is a long-debunked falsehood that is asserted and repeated until it achieves the status of common knowledge within the echo chamber of scientists who refuse to think outside the dark matter box.

This is where the philosophy of science is crucial to finding our way forward. Merritt’s book illuminates how this is done. If you are reading these words, you owe it to yourself to read his book.

The Hubble Constant from the Baryonic Tully-Fisher Relation

The distance scale is fundamental to cosmology. How big is the universe? is pretty much the first question we ask when we look at the Big Picture.

The primary yardstick we use to describe the scale of the universe is Hubble’s constant: the H0 in

v = H0 D

that relates the recession velocity (redshift) of a galaxy to its distance. More generally, this is the current expansion rate of the universe. Pick up any book on cosmology and you will find a lengthy disquisition on the importance of this fundamental parameter that encapsulates the size, age, critical density, and potential fate of the cosmos. It is the first of the Big Two numbers in cosmology that expresses the still-amazing fact that the entire universe is expanding.
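For orientation, a couple of lines of arithmetic show how H0 sets both recession velocities and the rough age scale of the universe (illustrative numbers only):

```python
# The Hubble constant in v = H0 * D also sets a rough age scale via its inverse.
H0 = 73.0                      # km/s/Mpc, an illustrative local value
Mpc_in_km = 3.0857e19          # kilometres per megaparsec
Gyr_in_s = 3.156e16            # seconds per gigayear

hubble_time = Mpc_in_km / H0 / Gyr_in_s
print(f"1/H0 = {hubble_time:.1f} Gyr")               # ~13.4 Gyr for H0 = 73
print(f"v at D = 100 Mpc: {H0 * 100:.0f} km/s")      # recession velocity from v = H0*D
```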

Quantifying the distance scale is hard. Throughout my career, I have avoided working on it. There are quite enough, er, personalities on the case already.


No need for me to add to the madness.

Not that I couldn’t. The Tully-Fisher relation has long been used as a distance indicator. It played an important role in breaking the stranglehold that H0 = 50 km/s/Mpc had on the minds of cosmologists, including myself. Tully & Fisher (1977) found that it was approximately 80 km/s/Mpc. Their method continues to provide strong constraints to this day: Kourkchi et al. find H0 = 76.0 ± 1.1(stat) ± 2.3(sys) km s-1 Mpc-1. So I’ve been happy to stay out of it.

Until now.


I am motivated in part by the calibration opportunity provided by gas rich galaxies, in part by the fact that tension in independent approaches to constrain the Hubble constant only seems to be getting worse, and in part by a recent conference experience. (Remember when we traveled?) Less than a year ago, I was at a cosmology conference in which I heard an all-too-typical talk that asserted that the Planck H0 = 67.4 ± 0.5 km/s/Mpc had to be correct and everybody who got something different was a stupid-head. I’ve seen this movie before. It is the same community (often the very same people) who once insisted that H0 had to be 50, dammit. They’re every bit as overconfident as before, suffering just as much from confirmation bias (LCDM! LCDM! LCDM!), and seem every bit as likely to be correct this time around.

So, is it true? We have the data, we’ve just refrained from using it in this particular way because other people were on the case. Let’s check.

The big hassle here is not measuring H0 so much as quantifying the uncertainties. That’s the part that’s really hard. So all credit goes to Jim Schombert, who rolled up his proverbial sleeves and did all the hard work. Federico Lelli and I mostly just played the mother-of-all-jerks referees (I’ve had plenty of role models) by asking about every annoying detail. To make a very long story short, none of the items under our control matter at a level we care about, each making < 1 km/s/Mpc difference to the final answer.

In principle, the Baryonic Tully-Fisher relation (BTFR) helps over the usual luminosity-based version by including the gas, which extends application of the relation to lower mass galaxies that can be quite gas rich. Ignoring this component results in a mess that can only be avoided by restricting attention to bright galaxies. But including it introduces an extra parameter. One has to adopt a stellar mass-to-light ratio to put the stars and the gas on the same footing. I always figured that would make things worse – and for a long time, it did. That is no longer the case. So long as we treat the calibration sample that defines the BTFR and the sample used to measure the Hubble constant self-consistently, plausible choices for the mass-to-light ratio return the same answer for H0. It’s all relative – the calibration changes with different choices, but the application to more distant galaxies changes in the same way. Same for the treatment of molecular gas and metallicity. It all comes out in the wash. Our relative distance scale is very precise. Putting an absolute number on it simply requires a lot of calibrating galaxies with accurate, independently measured distances.
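To make the bookkeeping concrete, here is a minimal sketch of that construction – toy numbers and hypothetical function names, not the actual analysis pipeline – assuming a near-IR luminosity for the stars and an HI mass for the gas:

```python
import numpy as np

UPSILON_STAR = 0.5   # assumed stellar mass-to-light ratio at 3.6 microns (Msun/Lsun)
GAS_FACTOR = 1.33    # scales the HI mass up to the total gas mass (helium correction)

def baryonic_mass(L36, M_HI, upsilon=UPSILON_STAR):
    """Baryonic mass = stars + gas, in solar masses."""
    return upsilon * np.asarray(L36) + GAS_FACTOR * np.asarray(M_HI)

def calibrate_btfr(L36, M_HI, v_flat, upsilon=UPSILON_STAR):
    """Fit log Mb = slope * log Vf + intercept using galaxies with known distances."""
    logMb = np.log10(baryonic_mass(L36, M_HI, upsilon))
    slope, intercept = np.polyfit(np.log10(v_flat), logMb, 1)
    return slope, intercept
```

Change the assumed mass-to-light ratio and both the calibration and the masses inferred for the more distant galaxies shift together, which is why the derived H0 is so insensitive to that choice.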

Here is the absolute calibration of the BTFR that we obtain:

The Baryonic Tully-Fisher relation calibrated with 50 galaxies with direct distance determinations from either the Tip of the Red Giant Branch method (23) or Cepheids (27).

In constructing this calibrated BTFR, we have relied on distance measurements made or compiled by the Extragalactic Distance Database, which represents the cumulative efforts of Tully and many others to map out the local universe in great detail. We have also benefited from the work of Ponomareva et al., which provides new calibrator galaxies not already in our SPARC sample. Critically, they also measure the flat velocity from rotation curves. This is a huge improvement in accuracy over the more readily available linewidths commonly employed in Tully-Fisher work, but it is expensive to obtain, so it remains the primary observational limitation on this procedure.

Still, we’re in pretty good shape. We now have 50 galaxies with well measured distances as well as the necessary ingredients to construct the BTFR: extended, resolved rotation curves, HI fluxes to measure the gas mass, and Spitzer near-IR data to estimate the stellar mass. This is a huge sample for which to have all of these data simultaneously. Measuring distances to individual galaxies remains challenging and time-consuming hard work that has been done by others. We are not about to second-guess their results, but we can note that they are sensible and remarkably consistent.

There are two primary methods by which the distances we use have been measured. One is Cepheids – the same type of variable stars that Hubble used to measure the distance to spiral nebulae to demonstrate their extragalactic nature. The other is the tip of the red giant branch (TRGB) method, which takes advantage of the brightest red giants having nearly the same luminosity. The sample is split nearly 50/50: there are 27 galaxies with a Cepheid distance measurement, and 23 with the TRGB. The two methods (different colored points in the figure) give the same calibration, within the errors, as do the two samples (circles vs. diamonds). There have been plenty of mistakes in the distance scale historically, so this consistency is important. There are many places where things could go wrong: differences between ourselves and Ponomareva, differences between Cepheids and the TRGB as distance indicators, mistakes in the application of either method to individual galaxies… so many opportunities to go wrong, and yet everything is consistent.

Having followed the distance scale problem my entire career, I cannot express how deeply impressive it is that all these different measurements paint a consistent picture. This is a credit to a large community of astronomers who have worked diligently on this problem for what seems like aeons. There is a temptation to dismiss distance scale work as having been wrong in the past, so it can be again. Of course that is true, but it is also true that matters have improved considerably. Forty years ago, it was not surprising when a distance indicator turned out to be wrong, and distances changed by a factor of two. That stopped twenty years ago, thanks in large part to the Hubble Space Telescope, a key goal of which had been to nail down the distance scale. That mission seems largely to have been accomplished, with small differences persisting only at the level that one expects from experimental error. One cannot, for example, make a change to the Cepheid calibration without creating a tension with the TRGB data, or vice-versa: both have to change in concert by the same amount in the same direction. That is unlikely to the point of wishful thinking.

Having nailed down the absolute calibration of the BTFR for galaxies with well-measured distances, we can apply it to other galaxies for which we know the redshift but not the distance. There are nearly 100 suitable galaxies available in the SPARC database. Consistency between them and the calibrator galaxies requires

H0 = 75.1 ± 2.3 (stat) ± 1.5 (sys) km/s/Mpc.

This is consistent with the result for the standard luminosity-linewidth version of the Tully-Fisher relation reported by Kourkchi et al. Note also that our statistical (random/experimental) error is larger – because we have a much smaller number of galaxies – but our systematic error is smaller. The method is, in principle, more precise (mostly because rotation curves are more accurate than linewidths), so there is still a lot to be gained by collecting more data.
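Schematically, the application step looks like the sketch below (hypothetical names, not the actual code): the baryonic mass inferred from fluxes scales as the square of the assumed distance, so matching it to the mass the calibrated BTFR predicts from the flat rotation velocity yields a distance, and the recession velocity then gives H0.

```python
import numpy as np

def btfr_distance(v_flat, M_b_at_1Mpc, slope, intercept):
    """Distance (Mpc) at which the flux-based baryonic mass matches the BTFR prediction.
    M_b_at_1Mpc is the baryonic mass the galaxy would have if it were 1 Mpc away;
    fluxes scale as D**2, so M_b(D) = M_b_at_1Mpc * D**2."""
    M_btfr = 10.0 ** (slope * np.log10(v_flat) + intercept)  # mass predicted from Vf
    return np.sqrt(M_btfr / M_b_at_1Mpc)

def hubble_constant(v_recession, distance):
    """H0 in km/s/Mpc from a recession velocity (km/s) and a distance (Mpc)."""
    return v_recession / distance
```

In practice one also has to correct the redshifts for peculiar velocities and propagate the intrinsic scatter of the relation, which is where much of the statistical error budget comes from.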

Our measurement is also consistent with many other “local” measurements of the distance scale, but not with “global” measurements. See the nice discussion by Telescoper and the paper from which it comes. A Hubble constant in the 70s is the answer that we’ve consistently gotten for the past 20 years by a wide variety of distinct methods, including direct measurements that are not dependent on lower rungs of the distance ladder, like gravitational lensing and megamasers. These are repeatable experiments. In contrast, as I’ve pointed out before, it is the “global” CMB-fitted value of the Hubble parameter that has steadily diverged from the concordance region that originally established LCDM.

So, where does this leave us? In the past, it was easy to dismiss a tension of this sort as due to some systematic error, because that happened all the time – in the 20th century. That’s not so true anymore. It looks to me like the tension is real.

 

Two fields divided by a common interest

Britain and America are two nations divided by a common language.

attributed to George Bernard Shaw

Physics and Astronomy are two fields divided by a common interest in how the universe works. There is a considerable amount of overlap between some sub-fields of these subjects, and practically none at all in others. The aims and goals are often in common, but the methods, assumptions, history, and culture are quite distinct. This leads to considerable confusion, as with the English language – scientists with different backgrounds sometimes use the same words to mean rather different things.

A few terms that are commonly used to describe scientists who work on the subjects that I do include astronomer, astrophysicist, and cosmologist. I could be described as any of these. But I also know lots of scientists to whom these words could be applied, yet for whom they would mean something rather different.

A common question I get is “What’s the difference between an astronomer and an astrophysicist?” This is easy to answer from my experience as a long-distance commuter. If I get on a plane, and the person next to me is chatty and asks what I do, if I feel like chatting, I am an astronomer. If I don’t, I’m an astrophysicist. The first answer starts a conversation, the second shuts it down.

Flippant as that anecdote is, it is excruciatingly accurate – both for how people react (commuting between Cleveland and Baltimore for a dozen years provided lots of examples), and for what the difference is: practically none. If I try to offer a more accurate definition, then I am sure to fail to provide a complete answer, as I don’t think there is one. But to make the attempt:

Astronomy is the science of observing the sky, encompassing all elements required to do so. That includes practical matters like the technology of telescopes and their instruments across all wavelengths of the electromagnetic spectrum, and theoretical matters that allow us to interpret what we see up there: what’s a star? a nebula? a galaxy? How does the light emitted by these objects get to us? How do we count photons accurately and interpret what they mean?

Astrophysics is the science of how things in the sky work. What makes a star shine? [Nuclear reactions]. What produces a nebular spectrum? [The atomic physics of incredibly low density interstellar plasma.] What makes a spiral galaxy rotate? [Gravity! Gravity plus, well, you know, something. Or, if you read this blog, you know that we don’t really know.] So astrophysics is the physics of the objects astronomy discovers in the sky. This is a rather broad remit, and covers lots of physics.

With this definition, astrophysics is a subset of astronomy – such a large and essential subset that the terms can and often are used interchangeably. These definitions are so intimately intertwined that the distinction is not obvious even for those of us who publish in the learned journals of the American Astronomical Society: the Astronomical Journal (AJ) and the Astrophysical Journal (ApJ). I am often hard-pressed to distinguish between them, but to attempt it in brief, the AJ is where you publish a paper that says “we observed these objects” and the ApJ is where you write “here is a model to explain these objects.” The opportunity for overlap is obvious: a paper that says “observations of these objects test/refute/corroborate this theory” could appear in either. Nevertheless, there was clearly a sufficient need to establish a separate journal focused on the physics of how things in the sky worked to launch the Astrophysical Journal in 1895 to complement the older Astronomical Journal (dating from 1849).

Cosmology is the study of the entire universe. As a science, it is the subset of astrophysics that encompasses observations that measure the universe as a physical entity: its size, age, expansion rate, and temporal evolution. Examples are sufficiently diverse that practicing scientists who call themselves cosmologists may have rather different ideas about what it encompasses, or whether it even counts as astrophysics in the way defined above.

Indeed, more generally, cosmology is where science, philosophy, and religion collide. People have always asked the big questions – we want to understand the world in which we find ourselves, our place in it, our relation to it, and to its Maker in the religious sense – and we have always made up stories to fill in the gaping void of our ignorance. Stories that become the stuff of myth and legend until they are unquestionable aspects of a misplaced faith that we understand all of this. The science of cosmology is far from immune to myth making, and often times philosophical imperatives have overwhelmed observational facts. The lengthy persistence of SCDM in the absence of any credible evidence that Ωm = 1 is a recent example. Another that comes and goes is the desire for a Phoenix universe – one that expands, recollapses, and is then reborn for another cycle of expansion and contraction that repeats ad infinitum. This is appealing for philosophical reasons – the universe isn’t just some bizarre one-off – but there’s precious little that we know (or perhaps can know) to suggest it is a reality.

This has all happened before, and will all happen again.

Nevertheless, genuine and enormous empirical progress has been made. It is stunning what we know now that we didn’t a century ago. It has only been 90 years since Hubble established that there are galaxies external to the Milky Way. Prior to that, the prevailing cosmology consisted of a single island universe – the Milky Way – that tapered off into an indefinite, empty void. Until Hubble established otherwise, it was widely (though not universally) thought that the spiral nebulae were some kind of gas clouds within the Milky Way. Instead, the universe is filled with millions and billions of galaxies comparable in stature to the Milky Way.

We have sometimes let our progress blind us to the gaping holes that remain in our knowledge. Some of our more imaginative and less grounded colleagues take some of our more fanciful stories to be established fact – where “established” sometimes just means the problem is old and familiar, and hence boring, even if still unsolved. They race ahead to create new stories about entities like multiverses. To me, multiverses are manifestly metaphysical: great fun for late night bull sessions, but not a legitimate branch of physics.

So cosmology encompasses a lot. It can mean very different things to different people, and not all of it is scientific. I am not about to touch on the world-views of popular religions, all of which have some flavor of cosmology. There is controversy enough about these definitions among practicing scientists.

I started as a physicist. I earned an SB in physics from MIT in 1985, and went on to the physics (not the astrophysics) department of Princeton for grad school. I had elected to study physics because I had a burning curiosity about how the world works. It was not specific to astronomy as defined above. Indeed, astronomy seemed to me at the time to be but one of many curiosities, and not necessarily the main one.

There was no clear department of astronomy at MIT. Some people who practiced astrophysics were in the physics department; others in Earth, Atmospheric, and Planetary Science, still others in Mathematics. At the recommendation of my academic advisor Michael Feld, I wound up doing a senior thesis with George W. Clark, a high energy astrophysicist who mostly worked on cosmic rays and X-ray satellites. There was a large high energy astrophysics group at MIT who studied X-ray sources and the physics that produced them – things like neutron stars, black holes, supernova remnants, and the intracluster medium of clusters of galaxies – celestial objects with sufficiently extreme energies to make X-rays. The X-ray group needed to do optical follow-up (OK, there’s an X-ray source at this location on the sky. What’s there?) so they had joined the MDM Observatory. I had expressed a vague interest in orbital dynamics, and Clark had become interested in the structure of elliptical galaxies, motivated by the elegant orbital structures described by Martin Schwarzschild. The astrophysics group did a lot of work on instrumentation, so we had access to a new-fangled CCD. These made (and continue to make) much more sensitive detectors than photographic plates.

Empowered by this then-new technology, we embarked on a campaign to image elliptical galaxies with the MDM 1.3 m telescope. The initial goal was to search for axial twists as the predicted consequence of triaxial structure – Schwarzschild had shown that elliptical galaxies need not be oblate or prolate, but could have three distinct characteristic lengths along their principal axes. What we noticed instead with the sensitive CCD was a wealth of new features in the low surface brightness outskirts of these galaxies. Most elliptical galaxies just fade smoothly into obscurity, but every fourth or fifth case displayed distinct shells and ripples – features that were otherwise hard to spot and had only recently been highlighted by Malin & Carter.

A modern picture (courtesy of Pierre-Alain Duc) of the shell galaxy Arp 227 (NGC 474). Quantifying the surface brightness profiles of the shells in order to constrain theories for their origin became the subject of my senior thesis. I found that they were most consistent with stars on highly elliptical orbits, as expected from the shredded remnants of a cannibalized galaxy. Observations like this contributed to a sea change in the thinking about galaxies as isolated island universes that never interacted to the modern hierarchical view in which galaxy mergers are ubiquitous.

At the time I was doing this work, I was of course reading up on galaxies in general, and came across Mike Disney’s arguments as to how low surface brightness galaxies could be ubiquitous and yet missed by many surveys. This resonated with my new observing experience. Look hard enough, and you would find something new that had never before been seen. This proved to be true, and remains true to this day.

I went on only two observing runs my senior year. The weather was bad for the first one, clearing only the last night during which I collected all the useful data. The second run came too late to contribute to my thesis. But I was enchanted by the observatory as a remote laboratory, perched in the solitude of the rugged mountains, themselves alone in an empty desert of subtly magnificent beauty. And it got dark at night. You could actually see the stars. More stars than can be imagined by those confined to the light pollution of a city.

It hadn’t occurred to me to apply to an astronomy graduate program. I continued on to Princeton, where I was assigned to work in the atomic physics lab of Will Happer. There I mostly measured the efficiency of various buffer gases in moderating spin exchange between sodium and xenon. This resulted in my first published paper.

In retrospect, this is kinda cool. As an alkali, the atomic structure of sodium is basically that of a noble gas with a spare electron it’s eager to give away in a chemical reaction. Xenon is a noble gas, chemically inert as it already has nicely complete atomic shells; it wants neither to give nor receive electrons from other elements. Put the two together in a vapor, and they can form weak van der Waals molecules in which they share the unwanted valence electron like a hot potato. The nifty thing is that one can spin-polarize the electron by optical pumping with a laser. As it happens, the wave function of the electron has a lot of overlap with the nucleus of the xenon (one of the allowed states has no angular momentum). Thanks to this overlap, the spin polarization imparted to the electron can be transferred to the xenon nucleus. In this way, it is possible to create large amounts of spin-polarized xenon nuclei. This greatly enhances the signal of MRI, and has found an application in medical imaging: a patient can breathe in a chemically inert [SAFE], spin polarized noble gas, making visible all the little passageways of the lungs that are otherwise invisible to an MRI. I contributed very little to making this possible, but it is probably the closest I’ll ever come to doing anything practical.

The same technology could, in principle, be applied to make dark matter detection experiments phenomenally more sensitive to spin-dependent interactions. Giant tanks of xenon have already become one of the leading ways to search for WIMP dark matter, gobbling up a significant fraction of the world supply of this rare noble gas. Spin polarizing the xenon on the scales of tons rather than grams is a considerable engineering challenge.

Now, in that last sentence, I lapsed into a bit of physics arrogance. We understand the process. Making it work is “just” a matter of engineering. In general, there is a lot of hard work involved in that “just,” and a lot of times it is a practical impossibility. That’s probably the case here, as the polarization decays away quickly – much more quickly than one could purify and pump tons of the stuff into a vat maintained at a temperature near absolute zero.

At the time, I did not appreciate the meaning of what I was doing. I did not like working in Happer’s lab. The windowless confines, kept dark but for the sickly orange glow of a sodium D laser, were not a positive environment to be in day after day after day. More importantly, the science did not call to my heart. I began to dream of a remote lab on a scenic mountain top.

I also found the culture in the physics department at Princeton to be toxic. Nothing mattered but to be smarter than the next guy (and it was practically all guys). There was no agreed measure for this, and for the most part people weren’t so brazen as to compare test scores. So the thing to do was Be Arrogant. Everybody walked around like they were too frickin’ smart to be bothered to talk to anyone else, or even see them under their upturned noses. It was weird – everybody there was smart, but no human could possibly be as smart as these people thought they were. Well, not everybody, of course – Jim Peebles is impossibly intelligent, sane, and even nice (perhaps he is an alien, or at least a Canadian) – but for most of Princeton arrogance was a defining characteristic that seeped unpleasantly into every interaction.

It was, in considerable part, arrogance that drove me away from physics. I was appalled by it. One of the best displays was put on by David Gross in a colloquium that marked the take-over of theoretical physics by string theory. The dude was talking confidently in bold positivist terms about predictions that were twenty orders of magnitude in energy beyond any conceivable experimental test. That, to me, wasn’t physics.

More than thirty years on, I can take cold comfort that my youthful intuition was correct. String theory has conspicuously failed to provide the vaunted “theory of everything” that was promised. Instead, we have vague “landscapes” of 10^500 possible theories. We just want one. 10^500 is not progress. It’s getting hopelessly lost. That’s what happens when brilliant ideologues are encouraged to wander about in their hyperactive imaginations without experimental guidance. You don’t get physics, you get metaphysics. If you think that sounds harsh, note that Gross himself takes exactly this issue with multiverses, saying the notion “smells of angels” and worrying that a generation of physicists will be misled down a garden path – exactly the way he misled a generation with string theory.

So I left Princeton, and switched to a field where progress could be made. I chose to go to the University of Michigan, because I knew it had access to the MDM telescopes (one of the M’s stood for Michigan, the other MIT, with the D for Dartmouth) and because I was getting married. My wife is an historian, and we needed a university that was good in both our fields.

When I got to Michigan, I was ready to do research. I wanted to do more on shell galaxies, and low surface brightness galaxies in general. I had had enough coursework, I reckoned; I was ready to DO science. So I was somewhat taken aback that they wanted me to do two more years of graduate coursework in astronomy.

Some of the physics arrogance had inevitably been incorporated into my outlook. To a physicist, all other fields are trivial. They are just particular realizations of some subset of physics. Chemistry is just applied atomic physics. Biology barely even counts as science, and those parts that do could be derived from physics, in principle. As mere subsets of physics, any other field can and will be picked up trivially.

After two years of graduate coursework in astronomy, I had the epiphany that the field was not trivial. There were excellent reasons, both practical and historical, why it was a separate field. I had been wrong to presume otherwise.

Modern physicists are not afflicted by this epiphany. That bad attitude I was guilty of persists and is remarkably widespread. I am frequently confronted by young physicists eager to mansplain my own field to me, who casually assume that I am ignorant of subjects that I wrote papers on before they started reading the literature, and who equate a disagreement with their interpretation on any subject with ignorance on my part. This is one place the fields diverge enormously. In physics, if it appears in a textbook, it must be true. In astronomy, we recognize that we’ve been wrong about the universe so many times, we’ve learned to be tolerant of interpretations that initially sound absurd. Today’s absurdity may be tomorrow’s obvious fact. Physicists don’t share this history, and often fail to distinguish interpretation from fact, much less cope with the possibility that a single set of facts may admit multiple interpretations.

Cosmology has often been a leader in being wrong, and consequently enjoyed a shady reputation in both physics and astronomy for much of the 20th century. When I started on the faculty at the University of Maryland in 1998, there was no graduate course in the subject. This seemed to me to be an obvious gap to fill, so I developed one. Some of the senior astronomy faculty expressed concern as to whether this could be a rigorous 3 credit graduate course, and sent a neutral representative to discuss the issue with me. He was satisfied. As would be any cosmologist – I was teaching LCDM before most other cosmologists had admitted it was a thing.

At that time, 1998, my wife was also a new faculty member at John Carroll University. They held a welcome picnic, which I attended as the spouse. So I strike up a conversation with another random spouse who is also standing around looking similarly out of place. Ask him what he does. “I’m a physicist.” Ah! common ground – what do you work on? “Cosmology and dark matter.” I was flabbergasted. How did I not know this person? It was Glenn Starkman, and this was my first indication that sometime in the preceding decade, cosmology had become an acceptable field in physics and not a suspect curiosity best left to woolly-minded astronomers.

This was my first clue that there were two entirely separate groups of professional scientists who self-identified as cosmologists. One from the astronomy tradition, one from physics. These groups use the same words to mean the same things – sometimes. There is a common language. But like British English and American English, sometimes different things are meant by the same words.

“Dark matter” is a good example. When I say dark matter, I mean the vast diversity of observational evidence for a discrepancy between measurable probes of gravity (orbital speeds, gravitational lensing, equilibrium hydrostatic temperatures, etc.) and what is predicted by the gravity of the observed baryonic material – the stars and gas we can see. When a physicist says “dark matter,” he seems usually to mean the vast array of theoretical hypotheses for what new particle the dark matter might be.

To give a recent example, a colleague who is a world-renowned expert on dark matter, and an observational astronomer in a physics department dominated by particle cosmologists, noted that their chairperson had advocated a particular hiring plan because “we have no one who works on dark matter.” This came across as incredibly disrespectful, which it is. But it is also simply clueless. It took some talking to work through, but what we think he meant was that they had no one who worked on laboratory experiments to detect dark matter. That’s a valid thing to do, which astronomers don’t deny. But it is a severely limited way to think about it.

To date, the evidence for dark matter is 100% astronomical in nature. That’s all of it. Despite enormous effort and progress, laboratory experiments provide 0%. Zero point zero zero zero. And before some fool points to the cosmic microwave background, that is not a laboratory experiment. It is astronomy as defined above: information gleaned from observation of the sky. That it is done with photons from the mm and microwave part of the spectrum instead of the optical part of the spectrum doesn’t make it fundamentally different: it is still an observation of the sky.

And yet, apparently the observational work that my colleague did was unappreciated by his own department head, who I know to fancy himself an expert on the subject. Yet the existence of a complementary expert in his own department never even registered with him. Even though, as chair, he would be responsible for reviewing the contributions of the faculty in his department on an annual basis.

To many physicists we astronomers are simply invisible. What could we possibly teach them about cosmology or dark matter? That we’ve been doing it for a lot longer is irrelevant. Only what they [re]invent themselves is valid, because astronomy is a subservient subfield populated by people who weren’t smart enough to become particle physicists. Because particle physicists are the smartest people in the world. Just ask one. He’ll tell you.

To give just one personal example of many: a few years ago, after I had published a paper in the premiere physics journal, I had a particle physics colleague ask, in apparent sincerity, “Are you an astrophysicist?” I managed to refrain from shouting YES YOU CLUELESS DUNCE! Only been doing astrophysics for my entire career!

As near as I can work out, his erroneous definition of astrophysicist involved having a Ph.D. in physics. That’s a good basis to start learning astrophysics, but it doesn’t actually qualify. Kris Davidson noted a similar sociology among his particle physics colleagues: “They simply declare themselves to be astrophysicists.” Well, I can tell you – having made that same mistake personally – it ain’t that simple. I’m pleased that so many physicists are finally figuring out what I did in the 1980s, and welcome their interest in astrophysics and cosmology. But they need to actually learn the subject, not just assume they’ll pick it up in a snap without actually doing so.

 

A personal recollection of how we learned to stop worrying and love the Lambda

There is a tendency when teaching science to oversimplify its history for the sake of getting on with the science. Knowing how it came to be isn’t necessary to learn it. But to do science requires a proper understanding of the process by which it came to be.

The story taught to cosmology students seems to have become: we didn’t believe in the cosmological constant (Λ), then in 1998 the Type Ia supernovae (SN) monitoring campaigns detected accelerated expansion, then all of a sudden we did believe in Λ. The actual history was, of course, rather more involved – to the point where this oversimplification verges on disingenuous. There were many observational indications of Λ that were essential in paving the way.

Modern cosmology starts in the early 20th century with the recognition that the universe should be expanding or contracting – a theoretical inevitability of General Relativity that Einstein initially tried to dodge by inventing the cosmological constant – and is expanding in fact, as observationally established by Hubble and Slipher and many others since. The Big Bang was largely considered settled truth after the discovery of the existence of the cosmic microwave background (CMB) in 1964.

The CMB held a puzzle, as it quickly was shown to be too smooth. The early universe was both isotropic and homogeneous. Too homogeneous. We couldn’t detect the density variations that could grow into galaxies and other immense structures. Though such density variations are now well measured as temperature fluctuations that are statistically well described by the acoustic power spectrum, the starting point was that these fluctuations were a disappointing no-show. We should have been able to see them much sooner, unless something really weird was going on…

That something weird was non-baryonic cold dark matter (CDM). For structure to grow, it needed the helping hand of the gravity of some unseen substance. Normal matter did not suffice. The most elegant cosmology, the Einstein-de Sitter universe, had a mass density Ωm = 1. But the measured abundances of the light elements were only consistent with the calculations of big bang nucleosynthesis if normal matter amounted to only 5% of Ωm = 1. This, plus the need to grow structure, led to the weird but seemingly unavoidable inference that the universe must be full of invisible dark matter. This dark matter needed to be some slow moving, massive particle that neither interacts with light nor resides within the menagerie of particles present in the Standard Model of Particle Physics.

CDM and early universe Inflation were established in the 1980s. Inflation gave a mechanism that drove the mass density to exactly one (elegant!), and CDM gave us hope for enough mass to get to that value. Together, they gave us the Standard CDM (SCDM) paradigm with Ωm = 1.000 and H0 = 50 km/s/Mpc.

I was there when SCDM failed.

It is hard to overstate the fervor with which the SCDM paradigm was believed. Inflation required that the mass density be exactly one; Ωm < 1 was inconceivable. For an Einstein-de Sitter universe to be old enough to contain the oldest stars, the Hubble constant had to be the lower of the two values (50 or 100) commonly discussed at that time. That meant that H0 > 50 was Right Out. We didn’t even discuss Λ. Λ was Unmentionable. Unclean.

SCDM was Known, Khaleesi.

Λ had attained unmentionable status in part because of its origin as Einstein’s greatest blunder, and in part through its association with the debunked Steady State model. But serious mention of it creeps back into the literature by 1990. The first time I personally heard Λ mentioned as a serious scientific possibility was by Yoshii at a conference in 1993. Yoshii based his argument on a classic cosmological test, N(m) – the number of galaxies as a function of how faint they appeared. The deeper you look, the more you see, in a way that depends on the intrinsic luminosity of galaxies, and how they fill space. Look deep enough, and you begin to trace the geometry of the cosmos.

At this time, one of the serious problems confronting the field was the faint blue galaxies problem. There were so many faint galaxies on the sky, it was incredibly difficult to explain them all. Yoshii made a simple argument. To get so many galaxies, we needed a big volume. The only way to do that in the context of the Robertson-Walker metric that describes the geometry of the universe is if we have a large cosmological constant, Λ. He was arguing for ΛCDM five years before the SN results.

Lambda? We don’t need no stinking Lambda!

Yoshii was shouted down. NO! Galaxies evolve! We don’t need no stinking Λ! In retrospect, Yoshii & Peterson (1995) looks like a good detection of Λ. Perhaps Yoshii & Peterson also deserve a Nobel prize?
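To see the sense of Yoshii’s argument, here is a rough sketch with purely illustrative parameters: the comoving volume out to a given redshift is substantially larger in a flat, Λ-dominated universe than in Einstein-de Sitter, so the same population of galaxies yields more faint sources on the sky.

```python
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458  # speed of light, km/s
H0 = 70.0            # illustrative Hubble constant, km/s/Mpc

def comoving_volume(z_max, omega_m, omega_l):
    """Comoving volume (Mpc^3) out to z_max, assuming a flat FLRW model."""
    integrand = lambda z: 1.0 / (H0 * np.sqrt(omega_m * (1 + z)**3 + omega_l))
    d_c = C_KM_S * quad(integrand, 0.0, z_max)[0]  # comoving distance, Mpc
    return 4.0 / 3.0 * np.pi * d_c**3

eds  = comoving_volume(1.0, omega_m=1.0, omega_l=0.0)  # Einstein-de Sitter
lcdm = comoving_volume(1.0, omega_m=0.3, omega_l=0.7)  # flat, Lambda-dominated
print(lcdm / eds)  # a bit more than twice the volume out to z ~ 1
```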

Indeed, there were many hints that Λ (or at least low Ωm) was needed, e.g., the baryon catastrophe in clusters, the power spectrum of IRAS galaxies, the early appearance of bound structures, the statistics of gravitational lenses, and so on. Certainly by the mid-90s it was clear that we were not going to make it to Ωm = 1. Inflation was threatened – it requires Ωm = 1 – or at least a flat geometry: Ωm + ΩΛ = 1.

SCDM was in crisis.

A very influential 1995 paper by Ostriker & Steinhardt did a lot to launch ΛCDM. I was impressed by the breadth of data Ostriker & Steinhardt discussed, all of which demanded low Ωm. I thought the case for Λ was less compelling, as it hinged on the age problem in a way that might also have been solved, at that time, by simply having an open universe (low Ωm with no Λ). This would ruin Inflation, but I wasn’t bothered by that. I expect they were. Regardless, they definitely made that case for ΛCDM three years before the supernovae results. Their arguments were accepted by almost everyone who was paying attention, including myself. I heard Ostriker give a talk around this time during which he was asked “what cosmology are you assuming?” to which he replied “the right one.” Called the “concordance” cosmology by Ostriker & Steinhardt, ΛCDM had already achieved the status of most-favored cosmology by the mid-90s.

A simplified version of the diagram of Ostriker & Steinhardt (1995) illustrating just a few of the constraints they discussed. Direct measurements of the expansion rate, mass density, and ages of the oldest stars excluded SCDM, instead converging on a narrow window – what we now call ΛCDM.

Ostriker & Steinhardt neglected to mention an important prediction of Λ: not only should the universe expand, but that expansion rate should accelerate! In 1995, that sounded completely absurd. People had looked for such an effect, and claimed not to see it. So I wrote a brief note pointing out the predicted acceleration of the expansion rate. I meant it in a bad way: how crazy would it be if the expansion of the universe was accelerating?! This was an obvious and inevitable consequence of ΛCDM that was largely being swept under the rug at that time.

I mean[t], surely we could live with Ωm < 1 but no Λ. Can’t we all just get along? Not really, as it turned out. I remember Mike Turner pushing the SN people very hard in Aspen in 1997 to Admit Λ. He had an obvious bias: as an Inflationary cosmologist, he had spent the previous decade castigating observers for repeatedly finding Ωm < 1. That’s too little mass, you fools! Inflation demands Ωm = 1.000! Look harder!

By 1997, Turner had, like many cosmologists, finally wrapped his head around the fact that we weren’t going to find enough mass for Ωm = 1. This was a huge problem for Inflation. The only possible solution, albeit an ugly one, was if Λ made up the difference. So there he was at Aspen, pressuring the people who observed supernovae to Admit Λ. One, in particular, was Richard Ellis, a great and accomplished astronomer who had led the charge in shouting down Yoshii. They didn’t yet have enough data to Admit Λ. Not.Yet.

By 1998, there were many more high redshift SNIa. Enough to see Λ. This time, after the long series of results only partially described above, we were intellectually prepared to accept it – unlike in 1993. Had the SN experiments been conducted five years earlier, and obtained exactly the same result, they would not have been awarded the Nobel prize. They would instead have been dismissed as a trick of astrophysics: the universe evolves, metallicity was lower at earlier times, that made SN then different from now, they evolve and so cannot be used as standard candles. This sounds silly now, as we’ve figured out how to calibrate for intrinsic variations in the luminosities of Type Ia SN, but that is absolutely how we would have reacted in 1993, and no amount of improvements in the method would have convinced us. This is exactly what we did with faint galaxy counts: galaxies evolve; you can’t hope to understand that well enough to constrain cosmology. Do you ever hear them cited as evidence for Λ?

Great as the supernovae experiments to measure the metric genuinely were, they were not a discovery so much as a confirmation of what cosmologists had already decided to believe. There was no singular discovery that changed the way we all thought. There was a steady drip, drip, drip of results pointing towards Λ all through the ’90s – the age problem in which the oldest stars appeared to be older than the universe in which they reside, the early appearance of massive clusters and galaxies, the power spectrum of galaxies from redshift surveys that preceded Sloan, the statistics of gravitational lenses, and the repeated measurement of 1/4 < Ωm < 1/3 in a large variety of independent ways – just to name a few. By the mid-90’s, SCDM was dead. We just refused to bury it until we could accept ΛCDM as a replacement. That was what the Type Ia SN results really provided: a fresh and dramatic reason to accept the accelerated expansion that we’d already come to terms with privately but had kept hidden in the closet.

Note that the acoustic power spectrum of temperature fluctuations in the cosmic microwave background (as opposed to the mere existence of the highly uniform CMB) plays no role in this history. That’s because temperature fluctuations hadn’t yet been measured beyond their rudimentary detection by COBE. COBE demonstrated that temperature fluctuations did indeed exist (finally!) as they must, but precious little beyond that. Eventually, after the settling of much dust, COBE was recognized as one of many reasons why Ωm ≠ 1, but it was neither the most clear nor most convincing reason at that time. Now, in the 21st century, the acoustic power spectrum provides a great way to constrain what all the parameters of ΛCDM have to be, but it was a bit player in its development. The water there was carried by traditional observational cosmology using general purpose optical telescopes in a great variety of different ways, combined with a deep astrophysical understanding of how stars, galaxies, quasars and the whole menagerie of objects found in the sky work. All the vast knowledge incorporated in textbooks like those by Harrison, by Peebles, and by Peacock – knowledge that often seems to be lacking in scientists trained in the post-WMAP era.

Despite being a late arrival, the CMB power spectrum measured in 2000 by Boomerang and 2003 by WMAP did one important new thing to corroborate the ΛCDM picture. The supernovae data didn’t detect accelerated expansion so much as exclude the deceleration we had nominally expected. The data were also roughly consistent with a coasting universe (neither accelerating nor decelerating); the case for acceleration only became clear when we assumed that the geometry of the universe was flat (Ωm + ΩΛ = 1). That didn’t have to work out, so it was a great success of the paradigm when the location of the first peak of the power spectrum appeared in exactly the right place for a flat FLRW geometry.
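For the record, the deceleration parameter makes the arithmetic explicit. For matter plus Λ,

q_0 = \frac{1}{2}\Omega_m - \Omega_\Lambda \;\;\xrightarrow{\;\Omega_m + \Omega_\Lambda \,=\, 1\;}\;\; q_0 = \frac{3}{2}\Omega_m - 1 \approx -0.55 \quad (\Omega_m \approx 0.3),

so imposing flatness turns the measured low mass density into acceleration, whereas a coasting universe has q0 = 0 – a boundary the supernovae data alone could not cleanly exclude. The location of the first acoustic peak is what independently corroborated the flatness assumption.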

The consistency of these data have given ΛCDM an air of invincibility among cosmologists. But a modern reconstruction of the Ostriker & Steinhardt diagram leaves zero room remaining – hence the tension between H0 = 73 measured directly and H0 = 67 from multiparameter CMB fits.

Constraints from the acoustic power spectrum of the CMB overplotted on the direct measurements from the plot above. Initially in great consistency with those measurements, the best fit CMB values have steadily wandered away from the most-favored region of parameter space that established ΛCDM in the first place. This is most apparent in the tension with H0.

In cosmology, we are accustomed to having to find our way through apparently conflicting data. The difference between an expansion rate of 67 and 73 seems trivial given that the field was long riven – in living memory – by the dispute between 50 and 100. This gives rise to the expectation that the current difference is just a matter of some subtle systematic error somewhere. That may well be correct. But it is also conceivable that FLRW is inadequate to describe the universe, and we have been driven to the objectively bizarre parameters of ΛCDM because it happens to be the best approximation that can be obtained to what is really going on when we insist on approximating it with FLRW.

Though a logical possibility, that last sentence will likely drive many cosmologists to reach for their torches and pitchforks. Before killing the messenger, we should remember that we once endowed SCDM with the same absolute certainty we now attribute to ΛCDM. I was there, 3,000 internet years ago, when SCDM failed. There is nothing so sacred in ΛCDM that it can’t suffer the same fate, as has every single cosmology ever devised by humanity.

Today, we still lack definitive knowledge of either dark matter or dark energy. These add up to 95% of the mass-energy of the universe according to ΛCDM. These dark materials must exist.

It is Known, Khaleesi.

The next cosmic frontier: 21cm absorption at high redshift

There are two basic approaches to cosmology: start at redshift zero and work outwards in space, or start at the beginning of time and work forward. The latter approach is generally favored by theorists, as much of the physics of the early universe follows a “clean” thermal progression, cooling adiabatically as it expands. The former approach is more typical of observers who start with what we know locally and work outwards in the great tradition of Hubble, Sandage, Tully, and the entire community of extragalactic observers that established the paradigm of the expanding universe and measured its scale. This work had established our current concordance cosmology, ΛCDM, by the mid-90s.*

Both approaches have taught us an enormous amount. Working forward in time, we understand the nucleosynthesis of the light elements in the first few minutes, followed after a few hundred thousand years by the epoch of recombination when the universe transitioned from an ionized plasma to a neutral gas, bequeathing us the cosmic microwave background (CMB) at the phenomenally high redshift of z=1090. Working outwards in redshift, large surveys like Sloan have provided a detailed map of the “local” cosmos, and narrower but much deeper surveys provide a good picture out to z = 1 (when the universe was half its current size, and roughly half its current age) and beyond, with the most distant objects now known above redshift 7, and maybe even at z > 11. JWST will provide a good view of the earliest (z ~ 10?) galaxies when it launches.

This is wonderful progress, but there is a gap over the range 10 < z < 1000. Not only is it hard to observe objects so distant that z > 10, but at some point they shouldn’t exist. It takes time to form stars and galaxies and the supermassive black holes that fuel quasars, especially when starting from the smooth initial condition seen in the CMB. So how do we probe redshifts z > 10?

It turns out that the universe provides a way. As photons from the CMB traverse the neutral intergalactic medium, they are subject to being absorbed by hydrogen atoms – particularly by the 21cm spin-flip transition. Long anticipated, this signal has recently been detected by the EDGES experiment. I find it amazing that the atomic physics of the early universe allows for this window of observation, and that clever scientists have figured out a way to detect this subtle signal.

So what is going on? First, a mental picture. In the image below, an observer at the left looks out to progressively higher redshift towards the right. The history of the universe unfolds from right to left.

An observer’s view of the history of the universe. Nearby, at low redshift, we see mostly empty space sprinkled with galaxies. At some high redshift (z ~ 20?), the first stars formed, flooding the previously dark universe with UV photons that reionize the gas of the intergalactic medium. The backdrop of the CMB provides the ultimate limit to electromagnetic observations as it marks the boundary (at z = 1090) between a mostly transparent and completely opaque universe.

Pritchard & Loeb give a thorough and lucid account of the expected sequence of events. As the early universe expands, it cools. Initially, the thermal photon bath that we now observe as the CMB has enough energy to keep atoms ionized. The mean free path that a photon can travel before interacting with a charged particle in this early plasma is very short: the early universe is opaque like the interior of a thick cloud. At z = 1090, the temperature drops to the point that photons can no longer break protons and electrons apart. This epoch of recombination marks the transition from an opaque plasma to a transparent universe of neutral hydrogen and helium gas. The path length of photons becomes very long; those that we see as the CMB have traversed the length of the cosmos mostly unperturbed.
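The condition for that transition is governed by the Saha equation, quoted here in its standard hydrogen-only, equilibrium form (helium and the non-equilibrium details that codes like RECFAST handle are neglected), which relates the ionized fraction x_e to the temperature and hydrogen density:

\frac{x_e^2}{1-x_e} = \frac{1}{n_{\rm H}} \left(\frac{m_e k_B T}{2\pi\hbar^2}\right)^{3/2} e^{-13.6\,\mathrm{eV}/k_B T}

Because there are over a billion photons for every baryon, the exponential has to become tiny before neutral atoms win out, which is why recombination happens near 3000 K rather than at the 13.6 eV (roughly 160,000 K) one might naively guess.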

Immediately after recombination follows the dark ages. Sources of light have yet to appear. There is just neutral gas expanding into the future. This gas is mostly but not completely transparent. As CMB photons propagate through it, they are subject to absorption by the spin-flip transition of hydrogen, a subtle but, in principle, detectable effect: one should see redshifted absorption across the dark ages.

After some time – perhaps a few hundred million years? – the gas has clumped up enough to start to form the first structures. This first population of stars ends the dark ages and ushers in cosmic dawn. The photons they release into the vast intergalactic medium (IGM) of neutral gas interact with it and heat it up, ultimately reionizing the entire universe. After this time the IGM is again a plasma, but one so thin (thanks to the expansion of the universe) that it remains transparent. Galaxies assemble and begin the long evolution characterized by the billions of years lived by the stars they contain.

This progression leads to the expectation of 21cm absorption twice: once during the dark ages, and again at cosmic dawn. There are three temperatures we need to keep track of to see how this happens: the radiation temperature Tγ, the kinetic temperature of the gas, Tk, and the spin temperature, TS. The radiation temperature is that of the CMB, and scales as (1+z). The gas temperature is what you normally think of as a temperature, and scales approximately as (1+z)². The spin temperature describes the occupation of the quantum levels involved in the 21cm hyperfine transition. If that makes no sense to you, don’t worry: all that matters is that absorption can occur when the spin temperature is less than the radiation temperature. In general, it is bounded by Tk < TS < Tγ.

The radiation temperature and gas temperature both cool as the universe expands. Initially, the gas remains coupled to the radiation, and these temperatures remain identical until decoupling around z ~ 200. After this, the gas cools faster than the radiation. The radiation temperature is extraordinarily well measured by CMB observations, and is simply Tγ = (2.725 K)(1+z). The gas temperature is more complicated, requiring the numerical solution of the Saha equation for a hydrogen-helium gas. Clever people have written codes to do this, like the widely-used RECFAST. In this way, one can build a table of how both temperatures depend on redshift in any cosmology one cares to specify.

This may sound complicated if it is the first time you’ve encountered it, but the physics is wonderfully simple. It’s just the thermal physics of the expanding universe, and the atomic physics of a simple gas composed of hydrogen and helium in known amounts. Different cosmologies specify different expansion histories, but these have only a modest (and calculable) effect on the gas temperature.
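As a concrete (and deliberately crude) illustration of those scalings – a sketch only, with a sharp decoupling at z ≈ 200 that somewhat overstates the late-time cooling compared to a proper RECFAST calculation:

```python
T_CMB0 = 2.725  # CMB temperature today, Kelvin
Z_DEC = 200.0   # approximate redshift of thermal decoupling, idealized as sharp

def T_radiation(z):
    """Radiation (CMB) temperature: exact (1+z) scaling."""
    return T_CMB0 * (1.0 + z)

def T_gas(z):
    """Gas kinetic temperature: locked to the radiation before decoupling,
    then cooling adiabatically as (1+z)^2. A crude stand-in for RECFAST."""
    if z >= Z_DEC:
        return T_radiation(z)
    return T_CMB0 * (1.0 + Z_DEC) * ((1.0 + z) / (1.0 + Z_DEC)) ** 2

# Absorption is possible wherever the spin temperature can be pulled below the
# radiation temperature, i.e. wherever T_gas(z) < T_radiation(z).
for z in (100, 50, 17):
    print(z, T_radiation(z), T_gas(z))
```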

Wonderfully, the atomic physics of the 21cm transition is such that it couples to both the radiation and gas temperatures in a way that matters in the early universe. It didn’t have to be that way – most transitions don’t. Perhaps this is fodder for people who worry that the physics of our universe is fine-tuned.

There are two ways in which the spin temperature couples to that of the gas. During the dark ages, the coupling is governed simply by atomic collisions. By cosmic dawn collisions have become rare, but the appearance of the first stars provides UV radiation that drives the Wouthuysen-Field effect. Consequently, we expect to see two absorption troughs: one around z ~ 20 at cosmic dawn, and another at still higher redshift (z ~ 100) during the dark ages.

Observation of this signal has the potential to revolutionize cosmology like detailed observations of the CMB did. The CMB is a snapshot of the universe during the narrow window of recombination at z = 1090. In principle, one can make the same sort of observation with the 21cm line, but at each and every redshift where absorption occurs: z = 16, 17, 18, 19 during cosmic dawn and again at z = 50, 100, 150 during the dark ages, with whatever frequency resolution you can muster. It will be like having the CMB over and over and over again, each redshift providing a snapshot of the universe at a different slice in time.

The information density available from the 21cm signal is in principle quite large. Before we can make use of any of this information, we have to detect it first. Therein lies the rub. This is an incredibly weak signal – we have to be able to detect that the CMB is a little dimmer than it would have been – and we have to do it in the face of much stronger foreground signals from the interstellar medium of our Galaxy and from man-made radio interference here on Earth. Fortunately, though much brighter than the signal we seek, these foregrounds have a different frequency dependence, so it should be possible to sort out, in principle.

Saying a thing can be done and doing it are two different things. This is already a long post, so I will refrain from raving about the technical challenges. Let’s just say it’s Real Hard.

Many experimentalists take that as a challenge, and there are a good number of groups working hard to detect the cosmic 21cm signal. EDGES appears to have done it, reporting the detection of the signal at cosmic dawn in February. Here some weasel words are necessary, as the foreground subtraction is a huge challenge, and we always hope to see independent confirmation of a new signal like this. Those words of caution noted, I have to add that I’ve had the chance to read up on their methods, and I’m really impressed. Unlike the BICEP claim to detect primordial gravitational waves that proved to be bogus after being rushed to press release before refereeing, the EDGES team have done all manner of conceivable cross-checks on their instrumentation and analysis. Nor did they rush to publish, despite the importance of the result. In short, I get exactly the opposite vibe from BICEP, whose foreground subtraction was obviously wrong as soon as I laid eyes on the science paper. If EDGES proves to be wrong, it isn’t for want of doing things right. In the meantime, I think we’re obliged to take their result seriously, and not just hope it goes away (which seems to be the first reaction to the impossible).

Here is what EDGES saw at cosmic dawn:

Fig. 2 from the EDGES detection paper. The dip, detected repeatedly in different instrumental configurations, shows a decrease in brightness temperature at radio frequencies, as expected from the 21cm line absorbing some of the radiation from the CMB.

The unbelievable aspect of the EDGES observation is that it is too strong. Feeble as this signal is (a telescope brightness decrement of half a degree Kelvin), after subtracting foregrounds a thousand times stronger, it is twice as much as is possible in ΛCDM.

I made a quick evaluation of this, and saw that the observed signal could be achieved if the baryon fraction of the universe was high – basically, if cold dark matter did not exist. I have now had the time to make a more careful calculation, and publish some further predictions. The basic result from before stands: the absorption should be stronger without dark matter than with it.

The reason for this is simple. A universe full of dark matter decelerates rapidly at early times, before the acceleration of the cosmological constant kicks in. Without dark matter, the expansion more nearly coasts. Consequently, the universe is relatively larger over the range 10 < z < 1000, and the CMB photons have to traverse a larger path length to get here. They have to go about twice as far through the same density of hydrogen absorbers. It’s like putting on a second pair of sunglasses.
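Here is a minimal numerical sketch of that geometric point, with illustrative parameters (Ωm ≈ 0.3 with CDM versus Ωm ≈ 0.05 for baryons alone; not the specific model of the paper):

```python
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458  # speed of light, km/s
H0 = 70.0            # illustrative Hubble constant, km/s/Mpc
OMEGA_R = 9e-5       # approximate radiation density today (photons + neutrinos)

def comoving_path(z1, z2, omega_m):
    """Comoving distance (Mpc) a photon traverses between z1 and z2,
    assuming a flat FLRW model with omega_lambda = 1 - omega_m - OMEGA_R."""
    integrand = lambda z: 1.0 / (
        H0 * np.sqrt(omega_m * (1 + z)**3 + OMEGA_R * (1 + z)**4
                     + (1.0 - omega_m - OMEGA_R)))
    return C_KM_S * quad(integrand, z1, z2)[0]

with_cdm = comoving_path(10, 1090, omega_m=0.3)   # baryons + cold dark matter
no_cdm   = comoving_path(10, 1090, omega_m=0.05)  # baryons only
print(no_cdm / with_cdm)  # a bit more than 2 for these illustrative numbers
```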

Quantitatively, the predicted absorption, both with dark matter and without, looks like:

The predicted 21cm absorption with dark matter (red broken line) and without (blue line). Also shown (in grey) is the signal observed by EDGES.

The predicted absorption is consistent with the EDGES observation, within the errors, if there is no dark matter. More importantly, ΛCDM is not consistent with the data, at greater than 95% confidence. At cosmic dawn, I show the maximum possible signal. It could be weaker, depending on the spectra of the UV radiation emitted by the first stars. But it can’t be stronger. Taken at face value, the EDGES result is impossible in ΛCDM. If the observation is corroborated by independent experiments, ΛCDM as we know it will be falsified.

There have already been many papers trying to avoid this obvious conclusion. If we insist on retaining ΛCDM, the only way to modulate the strength of the signal is to alter the ratio of the radiation temperature to the gas temperature. Either we make the radiation “hotter,” or we make the gas cooler. If we allow ourselves this freedom, we can fit any arbitrary signal strength. This is ad hoc in the way that gives ad hoc a bad name.
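
To make that concrete, here is the standard approximate expression for the sky-averaged 21cm brightness temperature, as quoted in 21cm reviews, coded up with illustrative Planck-like parameters. With the cosmology fixed, every factor except (1 − Tγ/TS) is pinned down, so deepening the dip means making the radiation hotter or the gas (and hence the spin temperature TS) colder:

```python
# Standard approximate expression for the sky-averaged 21cm brightness
# temperature relative to the CMB, in mK:
#   dTb ~ 27 * xHI * (1 - Tg/Ts) * (Ob h^2 / 0.023) * sqrt((0.15/(Om h^2)) * (1+z)/10)
# Cosmological parameters below are illustrative Planck-like values.
import numpy as np

def dTb_mK(z, Ts, Tgamma, xHI=1.0, Ob_h2=0.022, Om_h2=0.14):
    """Approximate 21cm brightness temperature in mK (negative = absorption)."""
    return (27.0 * xHI * (1.0 - Tgamma / Ts)
            * (Ob_h2 / 0.023)
            * np.sqrt((0.15 / Om_h2) * (1.0 + z) / 10.0))

z = 17.0
Tgamma = 2.725 * (1.0 + z)   # CMB temperature at z = 17, about 49 K

# Spin temperature locked to the adiabatically cooled gas (~7 K): the LCDM limit.
print("Ts = 7 K   -> %.0f mK" % dTb_mK(z, Ts=7.0, Tgamma=Tgamma))
# Artificially colder gas (or a hotter radio background) gives an EDGES-like depth.
print("Ts = 3.5 K -> %.0f mK" % dTb_mK(z, Ts=3.5, Tgamma=Tgamma))
```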

We do not have this freedom – not really. The radiation temperature is measured in the CMB with great accuracy. Altering this would mess up the genuine success of ΛCDM in fitting the CMB. One could postulate an additional source, something that appears after recombination but before cosmic dawn to emit enough radio power throughout the cosmos to add to the radio brightness that is being absorbed. There is zero reason to expect such sources (what part of 'cosmic dawn' was ambiguous?) and no good way to make them at the right time. If they are primordial (as people love to imagine but are loath to provide viable models for) then they're also present at recombination: anything powerful enough to have the necessary effect will likely screw up the CMB.

Instead of magically increasing the radiation temperature, we might decrease the gas temperature. This seems no more plausible. The evolution of the gas temperature is a straightforward numerical calculation that has been checked by several independent codes. It has to be right at the time of recombination, or again, we mess up the CMB. The suggestions that I have heard seem mostly to invoke interactions between the gas and dark matter that offload some of the thermal energy of the gas into the invisible sink of the dark matter. Given how shy dark matter has been about interacting with normal matter in the laboratory, it seems pretty rich to imagine that it is eager to do so at high redshift. Even advocates of this scenario recognize its many difficulties.
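
As a point of reference, the limiting behaviour of that standard thermal history is easy to sketch. This is a crude instantaneous-decoupling approximation, not a substitute for the real coupled calculation:

```python
# Limiting behaviour of the standard thermal history after recombination, with
# no exotic gas-dark matter interactions: the CMB cools as (1+z), while the gas,
# once Compton coupling to residual electrons fails (around z ~ 150), cools
# adiabatically as (1+z)^2.  The instantaneous-decoupling switch below is crude;
# full calculations give a somewhat warmer gas (about 7 K at z ~ 17).
import numpy as np

Z_DEC = 150.0   # approximate thermal decoupling redshift

def T_cmb(z):
    return 2.725 * (1.0 + z)

def T_gas(z):
    z = np.asarray(z, dtype=float)
    adiabatic = T_cmb(Z_DEC) * ((1.0 + z) / (1.0 + Z_DEC)) ** 2
    return np.where(z > Z_DEC, T_cmb(z), adiabatic)

for z in (150, 50, 17):
    print("z = %3d:  T_cmb = %6.1f K,  T_gas = %5.1f K"
          % (z, T_cmb(z), float(T_gas(z))))
```

Any mechanism that cools the gas further than this has to do so after recombination but before cosmic dawn, without leaving fingerprints in the CMB.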

For those who are interested, I cite a number of the scientific papers that attempt these explanations in my new paper. They all seem like earnest attempts to come to terms with what is apparently impossible. Many of these ideas also strike me as a form of magical thinking that stems from ΛCDM groupthink. After all, ΛCDM is so well established, any unexpected signal must be a sign of exciting new physics (on top of the new physics of dark matter and dark energy) rather than an underlying problem with ΛCDM itself.

The more natural interpretation is that the expansion history of the universe deviates from that predicted by ΛCDM. Simply taking away the dark matter gives a result consistent with the data. Though it did not occur to me to make this specific prediction a priori for an experiment that did not yet exist, all the necessary calculations had been done 15 years ago.

Using the same model, I make a genuine a priori prediction for the dark ages. For the specific NoCDM model I built in 2004, the 21cm absorption in the dark ages should again be about twice as strong as expected in ΛCDM. This seems fairly generic, but I know the model is not complete, so I wouldn’t be upset if it were not bang on.

I would be upset if ΛCDM were not bang on. The only thing that drives the signal in the dark ages is atomic scattering. We understand this really well. ΛCDM is now so well constrained by Planck that, if right, the 21cm absorption during the dark ages must follow the red line in the inset in the figure. The amount of uncertainty is not much greater than the thickness of the line. If ΛCDM fails this test, it would be a clear falsification, and a sign that we need to try something completely different.

Unfortunately, detecting the 21cm absorption signal during the dark ages is even harder than it is at cosmic dawn. At these redshifts (z ~ 100), the 21cm line (1420 MHz on your radio dial) is redshifted below the ~30 MHz cutoff imposed by the Earth's ionosphere. Frequencies this low cannot be observed from the ground. Worse, we have made the Earth itself a bright source of contaminating radio frequency interference.
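
For scale, the observed frequency of the redshifted line is just 1420.4 MHz divided by (1 + z):

```python
# Observed frequency of the redshifted 21cm line: nu_obs = 1420.4 MHz / (1 + z).
for z in (17, 50, 100):
    print("z = %3d -> %5.1f MHz" % (z, 1420.4 / (1 + z)))
# z = 17 lands near 79 MHz (the EDGES band); z = 100 lands near 14 MHz, well
# below the ~30 MHz ionospheric cutoff, hence the appeal of observing from space.
```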

Undeterred, there are multiple proposals to measure this signal by placing an antenna in space – in particular, on the far side of the moon, so that the moon shades the instrument from terrestrial radio interference. This is a great idea. The mere detection of the 21cm signal from the dark ages would be an accomplishment on par with the original detection of the CMB. It appears that it might also provide a decisive new way of testing our cosmological model.

There are further tests involving the shape of the 21cm signal, its power spectrum (analogous to the power spectrum of the CMB), how structure grows in the early ages of the universe, and how massive the neutrino is. But that’s enough for now.

Most likely beer. Or a cosmo. That’d be appropriate. I make a good pomegranate cosmo.


*Note that a variety of astronomical observations had established the concordance cosmology before Type Ia supernovae detected cosmic acceleration and well-resolved observations of the CMB found a flat cosmic geometry.