Dwarf Satellite Galaxies and Low Surface Brightness Galaxies in the Field. I.

The Milky Way and its nearest giant neighbor Andromeda (M31) are surrounded by a swarm of dwarf satellite galaxies. Aside from relatively large beasties like the Large Magellanic Cloud or M32, the majority of these are the so-called dwarf spheroidals. There are several dozen examples known around each giant host, like the Fornax dwarf pictured above.

Dwarf Spheroidal (dSph) galaxies are ellipsoidal blobs devoid of gas that typically contain a million stars, give or take an order of magnitude. Unlike globular clusters, which may have a similar star count, dSphs are diffuse, with characteristic sizes of hundreds of parsecs (vs. a few pc for globulars). This makes them among the lowest surface brightness systems known.

This subject has a long history, and has become a major industry in recent years. In addition to the “classical” dwarfs that have been known for decades, there have also been many comparatively recent discoveries, often of what have come to be called “ultrafaint” dwarfs. These are basically dSphs with luminosities less than 100,000 suns, sometimes containing only a few hundred stars. New discoveries are still being made, and there is reason to hope that the LSST will discover many more. Summed up, the known dwarf satellites are proverbial drops in the bucket compared to their giant hosts, which contain hundreds of billions of stars. Dwarfs could rain in for a Hubble time and not perturb the mass budget of the Milky Way.

Nevertheless, tiny dwarf spheroidals are excellent tests of theories like CDM and MOND. Going back to the beginning, in the early ’80s, Milgrom was already engaged in a discussion about the predictions of his then-new theory (before it was even published) with colleagues at the IAS, where he had developed the idea during a sabbatical visit. They were understandably skeptical, preferring – as many still do – to believe that some unseen mass was the more conservative hypothesis. Dwarf spheroidals came up even then, as their very low surface brightness meant low acceleration in MOND. This in turn meant large mass discrepancies. If you could measure their dynamics, they would have large mass-to-light ratios – larger than could be explained by stars conventionally, and larger than the discrepancies already observed in bright galaxies like Andromeda.

This prediction of Milgrom’s – there from the very beginning – is important because of how things change (or don’t). At that time, Scott Tremaine summed up the contrasting expectation of the conventional dark matter picture:

“There is no reason to expect that dwarfs will have more dark matter than bright galaxies.” *

This was certainly the picture I had in my head when I first became interested in low surface brightness (LSB) galaxies in the mid-80s. At that time I was ignorant of MOND; my interest was piqued by the argument of Disney that there could be a lot of as-yet undiscovered LSB galaxies out there, combined with my first observing experiences with the then-newfangled CCD cameras which seemed to have a proclivity for making clear otherwise hard-to-see LSB features. At the time, I was interested in finding LSB galaxies. My interest in what made them rotate came later.

The first indication, to my knowledge, that dSph galaxies might have large mass discrepancies was provided by Marc Aaronson in 1983. This tentative discovery was hugely important, but the velocity dispersion of Draco (one of the “classical” dwarfs) was based on only 3 stars, so was hardly definitive. Nevertheless, by the end of the ’90s, it was clear that large mass discrepancies were a defining characteristic of dSphs. Their conventionally computed M/L went up systematically as their luminosity declined. This was not what we had expected in the dark matter picture, but was, at least qualitatively, in agreement with MOND.

My own interests had focused more on LSB galaxies in the field than on dwarf satellites like Draco. Greg Bothun and Jim Schombert had identified enough of these to construct a long list of LSB galaxies that served as targets for my Ph.D. thesis. Unlike the pressure-supported ellipsoidal blobs of stars that are the dSphs, the field LSBs we studied were gas rich, rotationally supported disks – mostly late type galaxies (Sd, Sm, & Irregulars). Regardless of composition, gas or stars, low surface density means that MOND predicts low acceleration. This need not be true conventionally, as the dark matter can do whatever the heck it wants. Though I was blissfully unaware of it at the time, we had constructed the perfect sample for testing MOND.

Having studied the properties of our sample of LSB galaxies, I developed strong ideas about their formation and evolution. Everything we had learned – their blue colors, large gas fractions, and low star formation rates – suggested that they evolved slowly compared to higher surface brightness galaxies. Star formation gradually sputtered along, having a hard time gathering enough material to make stars in their low density interstellar media. Perhaps they even formed late, an idea I took a shine to in the early ’90s. This made two predictions: field LSB galaxies should be less strongly clustered than bright galaxies, and should spin slower at a given mass.

The first prediction follows because the collapse time of dark matter halos correlates with their larger scale environment. Dense things collapse first and tend to live in dense environments. If LSBs were low surface density because they collapsed late, it followed that they should live in less dense environments.

I didn’t know how to test this prediction. Fortunately, fellow postdoc and office mate in Cambridge at the time, Houjun Mo, did. It came true. The LSB galaxies I had been studying were clustered like other galaxies, but not as strongly. This was exactly what I expected, and I felt sure we were on to something. All that remained was to confirm the second prediction.

At the time, we did not have a clear idea of what dark matter halos should be like. NFW halos were still in the future. So it seemed reasonable that late forming halos should have lower densities (lower concentrations in the modern terminology). More importantly, the sum of dark and luminous density was certainly less. Dynamics follow from the distribution of mass as Velocity² ∝ Mass/Radius. For a given mass, low surface brightness galaxies had a larger radius, by construction. Even if the dark matter didn’t play along, the reduction in the concentration of the luminous mass should lower the rotation velocity.
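The Newtonian expectation can be sketched with a back-of-the-envelope calculation. The masses and radii below are purely illustrative, not fits to any particular galaxy:

```python
# Newtonian scaling: V^2 ~ G*M/R, so at fixed mass a more diffuse
# (larger-radius) galaxy should rotate more slowly.
G = 4.301e-6  # Newton's constant in kpc (km/s)^2 / Msun

def v_circ(mass_msun, radius_kpc):
    """Characteristic circular speed from V^2 = G*M/R, in km/s."""
    return (G * mass_msun / radius_kpc) ** 0.5

M = 1e10                   # same luminous mass for both galaxies
v_hsb = v_circ(M, 3.0)     # compact, high surface brightness disk
v_lsb = v_circ(M, 12.0)    # diffuse LSB disk: 4x the radius

# Quadrupling the radius at fixed mass halves the expected speed:
print(round(v_hsb), round(v_lsb))
```

This is exactly the shift off the Tully-Fisher relation that the LSB sample should have shown, and didn’t.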

Indeed, the standard explanation of the Tully-Fisher relation was just this. Aaronson, Huchra, & Mould had argued that galaxies obeyed the Tully-Fisher relation because they all had essentially the same surface brightness (Freeman’s law) thereby taking variation in the radius out of the equation: galaxies of the same mass all had the same radius. (If you are a young astronomer who has never heard of Freeman’s law, you’re welcome.) With our LSB galaxies, we had a sample that, by definition, violated Freeman’s law. They had large radii for a given mass. Consequently, they should have lower rotation velocities.

Up to that point, I had not taken much interest in rotation curves. In contrast, colleagues at the University of Groningen were all about rotation curves. Working with Thijs van der Hulst, Erwin de Blok, and Martin Zwaan, we set out to quantify where LSB galaxies fell in relation to the Tully-Fisher relation. I confidently predicted that they would shift off of it – an expectation shared by many at the time. They did not.

The Tully-Fisher relation: disk mass vs. flat rotation speed (circa 1996). Galaxies are binned by surface brightness with the highest surface brightness galaxies marked red and the lowest blue. The lines show the expected shift following the argument of Aaronson et al. Contrary to this expectation, galaxies of all surface brightnesses follow the same Tully-Fisher relation.

I was flummoxed. My prediction was wrong. That of Aaronson et al. was wrong. Poking about the literature, everyone who had made a clear prediction in the conventional context was wrong. It made no sense.

I spent months banging my head against the wall. One quick and easy solution was to blame the dark matter. Maybe the rotation velocity was set entirely by the dark matter, and the distribution of luminous mass didn’t come into it. Surely that’s what the flat rotation velocity was telling us? All about the dark matter halo?

Problem is, we measure the velocity where the luminous mass still matters. In galaxies like the Milky Way, it matters quite a lot. It does not work to imagine that the flat rotation velocity is set by some property of the dark matter halo alone. What matters to what we measure is the combination of luminous and dark mass. The luminous mass is important in high surface brightness galaxies, and progressively less so in lower surface brightness galaxies. That should leave some kind of mark on the Tully-Fisher relation, but it doesn’t.

Residuals from the Tully-Fisher relation as a function of size at a given mass. Compact galaxies are to the left, diffuse ones to the right. The red dashed line is what Newton predicts: more compact galaxies should rotate faster at a given mass. Fundamental physics? Tully-Fisher don’t care. Tully-Fisher don’t give a sh*t.

I worked long and hard to understand this in terms of dark matter. Every time I thought I had found the solution, I realized that it was a tautology. Somewhere along the line, I had made an assumption that guaranteed that I got the answer I wanted. It was a hopeless fine-tuning problem. The only way to satisfy the data was to have the dark matter contribution scale up as that of the luminous mass scaled down. The more stretched out the light, the more compact the dark – in exact balance to maintain zero shift in Tully-Fisher.

This made no sense at all. Over twenty years on, I have yet to hear a satisfactory conventional explanation. Most workers seem to assert, in effect, that “dark matter does it” and move along. Perhaps they are wise to do so.

Working on the thing can drive you mad.

As I was struggling with this issue, I happened to hear a talk by Milgrom. I almost didn’t go. “Modified gravity” was in the title, and I remember thinking, “why waste my time listening to that nonsense?” Nevertheless, against my better judgement, I went. Not knowing that anyone in the audience worked on either LSB galaxies or Tully-Fisher, Milgrom proceeded to derive the MOND prediction:

“The asymptotic circular velocity is determined only by the total mass of the galaxy: Vf⁴ = a₀GM.”

In a few lines, he derived rather trivially what I had been struggling to understand for months. The lack of surface brightness dependence in Tully-Fisher was entirely natural in MOND. It falls right out of the modified force law, and had been explicitly predicted over a decade before I struggled with the problem.
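The prediction is simple enough to evaluate in a few lines. A minimal sketch, assuming the standard value a₀ ≈ 1.2×10⁻¹⁰ m/s² for Milgrom's acceleration constant:

```python
# MOND prediction for the flat rotation speed: Vf^4 = a0 * G * M.
# No dependence on radius or surface brightness appears at all.
G = 6.674e-11     # m^3 kg^-1 s^-2
a0 = 1.2e-10      # Milgrom's constant, m/s^2
MSUN = 1.989e30   # kg

def vf_mond(mass_msun):
    """Asymptotic flat rotation speed in km/s from Vf^4 = a0*G*M."""
    return (a0 * G * mass_msun * MSUN) ** 0.25 / 1e3

# A Milky-Way-like baryonic mass gives a familiar rotation speed:
print(round(vf_mond(1e11)))  # ~200 km/s
# Ten times less mass lowers Vf by only 10^(1/4) ~ 1.78:
print(round(vf_mond(1e10)))
```

Note the fourth root: mass must change by four orders of magnitude to change Vf by one, which is why the relation is so tight across such a wide range of galaxies.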

I scraped my jaw off the floor, determined to examine this crazy theory more closely. By the time I got back to my office, cognitive dissonance had already started to set in. Couldn’t be true. I had more pressing projects to complete, so I didn’t think about it again for many moons.

When I did, I decided I should start by reading the original MOND papers. I was delighted to find a long list of predictions, many of them specifically to do with surface brightness. We had just collected fresh data on LSB galaxies, which provided a new window on the low acceleration regime. I had the data to finally falsify this stupid theory.

Or so I thought. As I went through the list of predictions, my assumption that MOND had to be wrong was challenged by each item. It was barely an afternoon’s work: check, check, check. Everything I had struggled for months to understand in terms of dark matter tumbled straight out of MOND.

I was faced with a choice. I knew this would be an unpopular result. I could walk away and simply pretend I had never run across it. That’s certainly how it had been up until then: I had been blissfully unaware of MOND and its perniciously successful predictions. No need to admit otherwise.

Had I realized just how unpopular it would prove to be, maybe that would have been the wiser course. But even contemplating such a course felt criminal. I was put in mind of Paul Gerhardt’s admonition for intellectual honesty:

“When a man lies, he murders some part of the world.”

Ignoring what I had learned seemed tantamount to just that. So many predictions coming true couldn’t be an accident. There was a deep clue here; ignoring it wasn’t going to bring us closer to the truth. Actively denying it would be an act of wanton vandalism against the scientific method.

Still, I tried. I looked long and hard for reasons not to report what I had found. Surely there must be some reason this could not be so?

Indeed, the literature provided many papers that claimed to falsify MOND. To my shock, few withstood critical examination. Commonly a straw man representing MOND was falsified, not MOND itself. At a deeper level, it was implicitly assumed that any problem for MOND was an automatic victory for dark matter. This did not obviously follow, so I started re-doing the analyses for both dark matter and MOND. More often than not, I found either that the problems for MOND were greatly exaggerated, or that the genuinely problematic cases were a problem for both theories. Dark matter has more flexibility to explain outliers, but outliers happen in astronomy. All too often the temptation was to refuse to see the forest for a few trees.

The first MOND analysis of the classical dwarf spheroidals provides a good example. Completed only a few years before I encountered the problem, these were low surface brightness systems that were deep in the MOND regime. These were gas poor, pressure supported dSph galaxies, unlike my gas rich, rotating LSB galaxies, but the critical feature was low surface brightness. This was the most directly comparable result. Better yet, the study had been made by two brilliant scientists (Ortwin Gerhard & David Spergel) whom I admire enormously. Surely this work would explain how my result was a mere curiosity.

Indeed, reading their abstract, it was clear that MOND did not work for the dwarf spheroidals. Whew: LSB systems where it doesn’t work. All I had to do was figure out why, so I read the paper.

As I read beyond the abstract, the answer became less and less clear. The results were all over the map. Two dwarfs (Sculptor and Carina) seemed unobjectionable in MOND. Two dwarfs (Draco and Ursa Minor) had mass-to-light ratios that were too high for stars, even in MOND. That is, there still appeared to be a need for dark matter even after MOND had been applied. On the flip side, Fornax had a mass-to-light ratio that was too low for the old stellar populations assumed to dominate dwarf spheroidals. Results all over the map are par for the course in astronomy, especially for a pioneering attempt like this. What were the uncertainties?

Milgrom wrote a rebuttal. By then, there were measured velocity dispersions for two more dwarfs. Of these seven dwarfs, he found that

“within just the quoted errors on the velocity dispersions and the luminosities, the MOND M/L values for all seven dwarfs are perfectly consistent with stellar values, with no need for dark matter.”

Well, he would say that, wouldn’t he? I determined to repeat the analysis and error propagation.

Mass-to-light ratios determined with MOND for eight dwarf spheroidals (named, as published in McGaugh & de Blok 1998). The various symbols refer to different determinations. Mine are the solid circles. The dashed lines show the plausible range for stellar populations.

The net result: they were both right. M/L was still too high for Draco and Ursa Minor, and still too low for Fornax. But this was only significant at the 2σ level, if that – hardly enough to condemn a theory. Carina, Leo I, Leo II, Sculptor, and Sextans all had fairly reasonable mass-to-light ratios. The voting is different now. Instead of going 2 for 5 as Gerhard & Spergel found, MOND was now 5 for 8. One could choose to obsess over the outliers, or choose to see the more positive pattern; either spin could be put on the result, but it was clearly more positive than the first attempt had indicated.

The mass estimator in MOND scales as the fourth power of velocity (or velocity dispersion in the case of isolated dSphs), so the too-high M*/L of Draco and Ursa Minor didn’t disturb me too much. A small overestimation of the velocity dispersion would lead to a large overestimation of the mass-to-light ratio. Just about every systematic uncertainty one can think of pushes in this direction, so it would be surprising if such an overestimate didn’t happen once in a while.
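The fourth-power sensitivity is worth making concrete. A quick illustration (the percentages here are my own examples, not the measured errors for any particular dwarf):

```python
# Because M ~ sigma^4 in the MOND estimator, modest errors in the
# velocity dispersion sigma inflate into large errors in M/L.
def mass_overestimate(sigma_err_fraction):
    """Factor by which M (hence M/L) is overestimated when sigma
    is overestimated by the given fractional amount."""
    return (1.0 + sigma_err_fraction) ** 4

# A 20% overestimate of sigma roughly doubles the inferred M/L:
print(round(mass_overestimate(0.20), 2))  # 2.07
# Even a 10% error inflates M/L by nearly half:
print(round(mass_overestimate(0.10), 2))  # 1.46
```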

Given this, I was more concerned about the low M*/L of Fornax. That was weird.

Up until that point (1998), we had been assuming that the stars in dSphs were all old, like those in globular clusters. That corresponds to a high M*/L, maybe 3 in solar units in the V-band. Shortly after this time, people started to look closely at the stars in the classical dwarfs with the Hubble. Lo and behold, the stars in Fornax were surprisingly young. That means a low M*/L, 1 or less. In retrospect, MOND was trying to tell us that: it returned a low M*/L for Fornax because the stars there are young. So what was taken to be a failing of the theory was actually a predictive success.

Hmm.

And Gee. This is a long post. There is a lot more to tell, but enough for now.


*I have a long memory, but it is not perfect. I doubt I have the exact wording right, but this does accurately capture the sentiment from the early ’80s when I was an undergraduate at MIT and Scott Tremaine was on the faculty there.

The next cosmic frontier: 21cm absorption at high redshift

There are two basic approaches to cosmology: start at redshift zero and work outwards in space, or start at the beginning of time and work forward. The latter approach is generally favored by theorists, as much of the physics of the early universe follows a “clean” thermal progression, cooling adiabatically as it expands. The former approach is more typical of observers who start with what we know locally and work outwards in the great tradition of Hubble, Sandage, Tully, and the entire community of extragalactic observers that established the paradigm of the expanding universe and measured its scale. This work had established our current concordance cosmology, ΛCDM, by the mid-90s.*

Both approaches have taught us an enormous amount. Working forward in time, we understand the nucleosynthesis of the light elements in the first few minutes, followed after a few hundred thousand years by the epoch of recombination when the universe transitioned from an ionized plasma to a neutral gas, bequeathing us the cosmic microwave background (CMB) at the phenomenally high redshift of z=1090. Working outwards in redshift, large surveys like Sloan have provided a detailed map of the “local” cosmos, and narrower but much deeper surveys provide a good picture out to z = 1 (when the universe was half its current size, and roughly half its current age) and beyond, with the most distant objects now known above redshift 7, and maybe even at z > 11. JWST will provide a good view of the earliest (z ~ 10?) galaxies when it launches.

This is wonderful progress, but there is a gap at 10 < z < 1000. Not only is it hard to observe objects so distant that z > 10, but at some point they shouldn’t exist. It takes time to form stars and galaxies and the supermassive black holes that fuel quasars, especially when starting from the smooth initial condition seen in the CMB. So how do we probe redshifts z > 10?

It turns out that the universe provides a way. As photons from the CMB traverse the neutral intergalactic medium, they are subject to being absorbed by hydrogen atoms – particularly by the 21cm spin-flip transition. Long anticipated, this signal has recently been detected by the EDGES experiment. I find it amazing that the atomic physics of the early universe allows for this window of observation, and that clever scientists have figured out a way to detect this subtle signal.

So what is going on? First, a mental picture. In the image below, an observer at the left looks out to progressively higher redshift towards the right. The history of the universe unfolds from right to left.

An observer’s view of the history of the universe. Nearby, at low redshift, we see mostly empty space sprinkled with galaxies. At some high redshift (z ~ 20?), the first stars formed, flooding the previously dark universe with UV photons that reionize the gas of the intergalactic medium. The backdrop of the CMB provides the ultimate limit to electromagnetic observations as it marks the boundary (at z = 1090) between a mostly transparent and completely opaque universe.

Pritchard & Loeb give a thorough and lucid account of the expected sequence of events. As the early universe expands, it cools. Initially, the thermal photon bath that we now observe as the CMB has enough energy to keep atoms ionized. The mean free path that a photon can travel before interacting with a charged particle in this early plasma is very short: the early universe is opaque like the interior of a thick cloud. At z = 1090, the temperature drops to the point that photons can no longer break protons and electrons apart. This epoch of recombination marks the transition from an opaque plasma to a transparent universe of neutral hydrogen and helium gas. The path length of photons becomes very long; those that we see as the CMB have traversed the length of the cosmos mostly unperturbed.

Immediately after recombination follows the dark ages. Sources of light have yet to appear. There is just neutral gas expanding into the future. This gas is mostly but not completely transparent. As CMB photons propagate through it, they are subject to absorption by the spin-flip transition of hydrogen, a subtle but, in principle, detectable effect: one should see redshifted absorption across the dark ages.

After some time – perhaps a few hundred million years? – the gas has had enough time to clump up enough to start to form the first structures. This first population of stars ends the dark ages and ushers in cosmic dawn. The photons they release into the vast intergalactic medium (IGM) of neutral gas interact with it and heat it up, ultimately reionizing the entire universe. After this time the IGM is again a plasma, but one so thin (thanks to the expansion of the universe) that it remains transparent. Galaxies assemble and begin the long evolution characterized by the billions of years lived by the stars they contain.

This progression leads to the expectation of 21cm absorption twice: once during the dark ages, and again at cosmic dawn. There are three temperatures we need to keep track of to see how this happens: the radiation temperature Tγ, the kinetic temperature of the gas, Tk, and the spin temperature, TS. The radiation temperature is that of the CMB, and scales as (1+z). The gas temperature is what you normally think of as a temperature, and scales approximately as (1+z)². The spin temperature describes the occupation of the quantum levels involved in the 21cm hyperfine transition. If that makes no sense to you, don’t worry: all that matters is that absorption can occur when the spin temperature is less than the radiation temperature. In general, it is bounded by Tk < TS < Tγ.

The radiation temperature and gas temperature both cool as the universe expands. Initially, the gas remains coupled to the radiation, and these temperatures remain identical until decoupling around z ~ 200. After this, the gas cools faster than the radiation. The radiation temperature is extraordinarily well measured by CMB observations, and is simply Tγ = (2.725 K)(1+z). The gas temperature is more complicated, requiring the numerical solution of the Saha equation for a hydrogen-helium gas. Clever people have written codes to do this, like the widely-used RECFAST. In this way, one can build a table of how both temperatures depend on redshift in any cosmology one cares to specify.
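The two scalings can be sketched in a few lines. This is only a rough cartoon, not a substitute for a real recombination code like RECFAST: the true gas temperature evolves smoothly through decoupling rather than switching abruptly at z = 200 as assumed here:

```python
# Cartoon of the two temperatures of the early universe.
T_CMB0 = 2.725  # K, present-day CMB temperature
Z_DEC = 200     # approximate redshift of thermal decoupling (assumed)

def t_radiation(z):
    """CMB radiation temperature: T = 2.725 K * (1+z)."""
    return T_CMB0 * (1.0 + z)

def t_gas(z):
    """Gas kinetic temperature: locked to the CMB above z ~ 200,
    then cooling adiabatically as (1+z)^2 below it."""
    if z >= Z_DEC:
        return t_radiation(z)
    return t_radiation(Z_DEC) * ((1.0 + z) / (1.0 + Z_DEC)) ** 2

# After decoupling the gas falls below the radiation temperature,
# opening the window for 21cm absorption (possible when TS < Tgamma):
for z in (500, 100, 17):
    print(z, round(t_radiation(z), 1), round(t_gas(z), 1))
```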

This may sound complicated if it is the first time you’ve encountered it, but the physics is wonderfully simple. It’s just the thermal physics of the expanding universe, and the atomic physics of a simple gas composed of hydrogen and helium in known amounts. Different cosmologies specify different expansion histories, but these have only a modest (and calculable) effect on the gas temperature.

Wonderfully, the atomic physics of the 21cm transition is such that it couples to both the radiation and gas temperatures in a way that matters in the early universe. It didn’t have to be that way – most transitions don’t. Perhaps this is fodder for people who worry that the physics of our universe is fine-tuned.

There are two ways in which the spin temperature couples to that of the gas. During the dark ages, the coupling is governed simply by atomic collisions. By cosmic dawn collisions have become rare, but the appearance of the first stars provides UV radiation that drives the Wouthuysen-Field effect. Consequently, we expect to see two absorption troughs: one around z ~ 20 at cosmic dawn, and another at still higher redshift (z ~ 100) during the dark ages.

Observation of this signal has the potential to revolutionize cosmology like detailed observations of the CMB did. The CMB is a snapshot of the universe during the narrow window of recombination at z = 1090. In principle, one can make the same sort of observation with the 21cm line, but at each and every redshift where absorption occurs: z = 16, 17, 18, 19 during cosmic dawn and again at z = 50, 100, 150 during the dark ages, with whatever frequency resolution you can muster. It will be like having the CMB over and over and over again, each redshift providing a snapshot of the universe at a different slice in time.

The information density available from the 21cm signal is in principle quite large. Before we can make use of any of this information, we have to detect it first. Therein lies the rub. This is an incredibly weak signal – we have to be able to detect that the CMB is a little dimmer than it would have been – and we have to do it in the face of much stronger foreground signals from the interstellar medium of our Galaxy and from man-made radio interference here on Earth. Fortunately, though much brighter than the signal we seek, these foregrounds have a different frequency dependence, so it should be possible to sort out, in principle.

Saying a thing can be done and doing it are two different things. This is already a long post, so I will refrain from raving about the technical challenges. Let’s just say it’s Real Hard.

Many experimentalists take that as a challenge, and there are a good number of groups working hard to detect the cosmic 21cm signal. EDGES appears to have done it, reporting the detection of the signal at cosmic dawn in February. Here some weasel words are necessary, as the foreground subtraction is a huge challenge, and we always hope to see independent confirmation of a new signal like this. Those words of caution noted, I have to add that I’ve had the chance to read up on their methods, and I’m really impressed. Unlike the BICEP claim to detect primordial gravitational waves that proved to be bogus after being rushed to press release before refereeing, the EDGES team have done all manner of conceivable cross-checks on their instrumentation and analysis. Nor did they rush to publish, despite the importance of the result. In short, I get exactly the opposite vibe from BICEP, whose foreground subtraction was obviously wrong as soon as I laid eyes on the science paper. If EDGES proves to be wrong, it isn’t for want of doing things right. In the meantime, I think we’re obliged to take their result seriously, and not just hope it goes away (which seems to be the first reaction to the impossible).

Here is what EDGES saw at cosmic dawn:

Fig. 2 from the EDGES detection paper. The dip, detected repeatedly in different instrumental configurations, shows a decrease in brightness temperature at radio frequencies, as expected from the 21cm absorbing some of the radiation from the CMB.

The unbelievable aspect of the EDGES observation is that it is too strong. Feeble as this signal is (a telescope brightness decrement of half a degree Kelvin), after subtracting foregrounds a thousand times stronger, it is twice as much as is possible in ΛCDM.

I made a quick evaluation of this, and saw that the observed signal could be achieved if the baryon fraction of the universe was high – basically, if cold dark matter did not exist. I have now had the time to make a more careful calculation, and publish some further predictions. The basic result from before stands: the absorption should be stronger without dark matter than with it.

The reason for this is simple. A universe full of dark matter decelerates rapidly at early times, before the acceleration of the cosmological constant kicks in. Without dark matter, the expansion more nearly coasts. Consequently, the universe is relatively larger over 10 < z < 1000, and the CMB photons have to traverse a larger path length to get here. They have to go about twice as far through the same density of hydrogen absorbers. It’s like putting on a second pair of sunglasses.
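The path-length argument can be checked numerically. This is a sketch under illustrative parameters of my own choosing (Ωm ≈ 0.31 for ΛCDM; baryons only, Ωb ≈ 0.05, plus curvature, as a stand-in for a no-CDM universe; it is not the specific NoCDM model of the paper):

```python
# Dimensionless comoving path length traversed by CMB photons
# between z = 10 and z = 1000, with and without cold dark matter.
def E_lcdm(z):
    """Expansion rate H(z)/H0 for an assumed flat LCDM cosmology."""
    return (0.31 * (1 + z) ** 3 + 0.69) ** 0.5

def E_nocdm(z):
    """Baryons plus curvature only; no dark matter (assumed toy model)."""
    return (0.05 * (1 + z) ** 3 + 0.95 * (1 + z) ** 2) ** 0.5

def path(E, z1=10.0, z2=1000.0, n=100_000):
    """Integral of dz/E(z) by the trapezoid rule: proportional to
    the comoving distance covered between z1 and z2."""
    dz = (z2 - z1) / n
    total = 0.5 * (1 / E(z1) + 1 / E(z2))
    for i in range(1, n):
        total += 1 / E(z1 + i * dz)
    return total * dz

ratio = path(E_nocdm) / path(E_lcdm)
print(round(ratio, 2))  # ~2: photons travel about twice as far
```

With these assumed parameters the ratio comes out close to 2, consistent with the doubled absorption depth described in the text.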

Quantitatively, the predicted absorption, both with dark matter and without, looks like:

The predicted 21cm absorption with dark matter (red broken line) and without (blue line). Also shown (in grey) is the signal observed by EDGES.

The predicted absorption is consistent with the EDGES observation, within the errors, if there is no dark matter. More importantly, ΛCDM is not consistent with the data, at greater than 95% confidence. At cosmic dawn, I show the maximum possible signal. It could be weaker, depending on the spectra of the UV radiation emitted by the first stars. But it can’t be stronger. Taken at face value, the EDGES result is impossible in ΛCDM. If the observation is corroborated by independent experiments, ΛCDM as we know it will be falsified.

There have already been many papers trying to avoid this obvious conclusion. If we insist on retaining ΛCDM, the only way to modulate the strength of the signal is to alter the ratio of the radiation temperature to the gas temperature. Either we make the radiation “hotter,” or we make the gas cooler. If we allow ourselves this freedom, we can fit any arbitrary signal strength. This is ad hoc in the way that gives ad hoc a bad name.

We do not have this freedom – not really. The radiation temperature is measured in the CMB with great accuracy. Altering this would mess up the genuine success of ΛCDM in fitting the CMB. One could postulate an additional source, something that appears after recombination but before cosmic dawn to emit enough radio power throughout the cosmos to add to the radio brightness that is being absorbed. There is zero reason to expect such sources (what part of `cosmic dawn’ was ambiguous?) and no good way to make them at the right time. If they are primordial (as people love to imagine but are loath to provide viable models for) then they’re also present at recombination: anything powerful enough to have the necessary effect will likely screw up the CMB.

Instead of magically increasing the radiation temperature, we might decrease the gas temperature. This seems no more plausible. The evolution of the gas temperature is a straightforward numerical calculation that has been checked by several independent codes. It has to be right at the time of recombination, or again, we mess up the CMB. The suggestions that I have heard seem mostly to invoke interactions between the gas and dark matter that offload some of the thermal energy of the gas into the invisible sink of the dark matter. Given how shy dark matter has been about interacting with normal matter in the laboratory, it seems pretty rich to imagine that it is eager to do so at high redshift. Even advocates of this scenario recognize its many difficulties.

For those who are interested, I cite a number of the scientific papers that attempt these explanations in my new paper. They all seem like earnest attempts to come to terms with what is apparently impossible. Many of these ideas also strike me as a form of magical thinking that stems from ΛCDM groupthink. After all, ΛCDM is so well established that any unexpected signal must be a sign of exciting new physics (on top of the new physics of dark matter and dark energy) rather than an underlying problem with ΛCDM itself.

The more natural interpretation is that the expansion history of the universe deviates from that predicted by ΛCDM. Simply taking away the dark matter gives a result consistent with the data. Though it did not occur to me to make this specific prediction a priori for an experiment that did not yet exist, all the necessary calculations had been done 15 years ago.

Using the same model, I make a genuine a priori prediction for the dark ages. For the specific NoCDM model I built in 2004, the 21cm absorption in the dark ages should again be about twice as strong as expected in ΛCDM. This seems fairly generic, but I know the model is not complete, so I wouldn’t be upset if it were not bang on.

I would be upset if ΛCDM were not bang on. The only thing that drives the signal in the dark ages is atomic scattering. We understand this really well. ΛCDM is now so well constrained by Planck that, if right, the 21cm absorption during the dark ages must follow the red line in the inset in the figure. The amount of uncertainty is not much greater than the thickness of the line. If ΛCDM fails this test, it would be a clear falsification, and a sign that we need to try something completely different.

Unfortunately, detecting the 21cm absorption signal during the dark ages is even harder than it is at cosmic dawn. At these redshifts (z ~ 100), the 21cm line (1420 MHz on your radio dial) is shifted beyond the ionospheric cutoff of the Earth’s atmosphere at 30 MHz. Frequencies this low cannot be observed from the ground. Worse, we have made the Earth itself a bright foreground contaminant of radio frequency interference.
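The arithmetic behind the ionospheric cutoff is simple redshifting. A minimal sketch, assuming only ν_obs = ν_rest/(1+z), with z ~ 17 for the cosmic dawn absorption and z ~ 100 for the dark ages:

```python
# Observed frequency of the redshifted 21cm line: nu_obs = nu_rest / (1 + z).
NU_REST_MHZ = 1420.4  # rest frequency of the hydrogen spin-flip line

def observed_freq_mhz(z):
    """Frequency (MHz) at which 21cm emission from redshift z arrives today."""
    return NU_REST_MHZ / (1.0 + z)

# Cosmic dawn (z ~ 17, where EDGES observed) vs. the dark ages (z ~ 100):
print(observed_freq_mhz(17))   # ~79 MHz: observable from the ground
print(observed_freq_mhz(100))  # ~14 MHz: below the ~30 MHz ionospheric cutoff
```

The z ~ 100 signal arrives at roughly 14 MHz, safely below the 30 MHz cutoff, which is why a lunar far-side antenna is the natural venue.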

Undeterred, there are multiple proposals to measure this signal by placing an antenna in space – in particular, on the far side of the moon, so that the moon shades the instrument from terrestrial radio interference. This is a great idea. The mere detection of the 21cm signal from the dark ages would be an accomplishment on par with the original detection of the CMB. It appears that it might also provide a decisive new way of testing our cosmological model.

There are further tests involving the shape of the 21cm signal, its power spectrum (analogous to the power spectrum of the CMB), how structure grows in the early ages of the universe, and how massive the neutrino is. But that’s enough for now.


Most likely beer. Or a cosmo. That’d be appropriate. I make a good pomegranate cosmo.


*Note that a variety of astronomical observations had established the concordance cosmology before Type Ia supernovae detected cosmic acceleration and well-resolved observations of the CMB found a flat cosmic geometry.

The dwarf galaxy NGC1052-DF2


A recently discovered dwarf galaxy designated NGC1052-DF2 has been in the news lately. Apparently a satellite of the giant elliptical NGC 1052, DF2 (as I’ll call it from here on out) is remarkable for having a surprisingly low velocity dispersion for a galaxy of its type. These results were reported in Nature last week by van Dokkum et al., and have caused a bit of a stir.

It is common for giant galaxies to have some dwarf satellite galaxies. As can be seen from the image published by van Dokkum et al., there are a number of galaxies in the neighborhood of NGC 1052. Whether these are associated physically into a group of galaxies or are chance projections on the sky depends on the distance to each galaxy.

NGC1052-DF2
Image of field containing DF2 from van Dokkum et al.

NGC 1052 is listed by the NASA Extragalactic Database (NED) as having a recession velocity of 1510 km/s and a distance of 20.6 Mpc. The next nearest big beastie is NGC 1042, at 1371 km/s. The 139 km/s difference is comparable to the 115 km/s at which Andromeda approaches the Milky Way, so one could imagine that this is a group similar to the Local Group. Except that NED gives a distance of 7.8 Mpc for NGC 1042, so apparently it is a foreground object seen in projection.

Van Dokkum et al. assume DF2 and NGC 1052 are both about 20 Mpc distant. They offer two independent estimates of the distance, one consistent with the distance to NGC 1052 and the other more consistent with the distance to NGC 1042. Rather than wring our hands over this, I will trust their judgement and simply note, as they do, that the nearer distance would change many of their conclusions. The redshift is 1803 km/s, larger than either of the giants. It could still be a satellite of NGC 1052, as ~300 km/s is not unreasonable for an orbital velocity.

So why the big fuss? Unlike most galaxies in the universe, DF2 appears not to require dark matter. This is inferred from the measured velocity dispersion of ten globular clusters, which is 8.4 km/s. That’s fast to you and me, but rather sluggish on the scale of galaxies. Spread over a few kiloparsecs, that adds up to a dynamical mass about equal to what we expect for the stars, leaving little room for the otherwise ubiquitous dark matter.
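To see why 8.4 km/s over a few kpc leaves little room for dark matter, here is a rough sketch of the bookkeeping. I use a Wolf et al. (2010)-style tracer mass estimator, M(< r_1/2) ≈ 3σ²r_1/2/G, purely as an illustration (it is not necessarily the exact estimator used by van Dokkum et al.):

```python
# Order-of-magnitude dynamical mass for DF2 from its velocity dispersion.
G    = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
MSUN = 1.989e30    # solar mass, kg
KPC  = 3.086e19    # kiloparsec, m

sigma  = 8.4e3       # measured velocity dispersion of the globular clusters, m/s
r_half = 2.2 * KPC   # half-light radius of DF2

# Wolf et al.-style estimator: M(< r_1/2) ~ 3 sigma^2 r_1/2 / G
M_dyn = 3 * sigma**2 * r_half / G / MSUN
print(f"{M_dyn:.1e} Msun")  # ~1e8 Msun
```

That is about 10⁸ M☉, comparable to the mass expected from the stars alone, so there is essentially no room left for a dominant dark halo.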

This is important. If the universe is composed of dark matter, it should on occasion be possible to segregate the dark from the light. Tidal interactions between galaxies can in principle do this, so a galaxy devoid of dark matter would be good evidence that this happened. It would also be evidence against a modified gravity interpretation of the missing mass problem, because the force law is always on: you can’t strip it from the luminous matter the way you can dark matter. So ironically, the occasional galaxy lacking dark matter would constitute evidence that dark matter does indeed exist!

DF2 appears to be such a case. But how weird is it? Morphologically, it resembles the dwarf spheroidal satellite galaxies of the Local Group. I have a handy compilation of those (from Lelli et al.), so we can compute the mass-to-light ratio for all of these beasties in the same fashion, shown in the figure below. It is customary to refer quantities to the radius that contains half of the total light, which is 2.2 kpc for DF2.

dwarfMLdyn
The dynamical mass-to-light ratio for Local Group dwarf Spheroidal galaxies measured within their half-light radii, as a function of luminosity (left) and average surface brightness within the half-light radius (right). DF2 is the blue cross with low M/L. The other blue cross is Crater 2, a satellite of the Milky Way discovered after the compilation of Local Group dwarfs was made. The dotted line shows M/L = 2, which is a good guess for the stellar mass-to-light ratio. That DF2 sits on this line implies that stars are the only mass that’s there.

Perhaps the most obvious respect in which DF2 is a bit unusual relative to the dwarfs of the Local Group is that it is big and bright. Most nearby dwarfs have half-light radii well below 1 kpc. After DF2, the next most luminous dwarf is Fornax, which is a factor of 5 lower in luminosity.

DF2 is called an ultradiffuse galaxy (UDG), which is apparently newspeak for low surface brightness (LSB) galaxy. I’ve been working on LSB galaxies my entire career. While DF2 is indeed low surface brightness – the stars are spread thin – I wouldn’t call it ultra diffuse. It is actually one of the higher surface brightness objects of this type. Crater 2 and And XIX (the leftmost points in the right panel) are ultradiffuse.

Astronomers love vague terminology, and as a result often reinvent terms that already exist. Dwarf, LSB, and UDG have all been used interchangeably and with considerable slop. I was sufficiently put out by this that I tried to define some categories in the mid-90s. This didn’t catch on, but by my definition, DF2 is VLSB – very LSB, but only by a little; it is much closer to regular LSB than to extremely LSB (ELSB). Crater 2 and And XIX, now they’re ELSB, being more diffuse than DF2 by 2 orders of magnitude.

SBdefinitiontable
Surface brightness categories from McGaugh (1996).

Whatever you call it, DF2 is low surface brightness, and LSB galaxies are always dark matter dominated. Always, at least among disk galaxies: here is the analogous figure for galaxies that rotate:

MLdynDisk
Dynamical mass-to-light ratios for rotationally supported disk galaxies, analogous to the plot above for pressure supported dwarfs. The lower the surface brightness, the higher the mass discrepancy. The correlation with luminosity is secondary, a result of the correlation between luminosity and surface brightness. From McGaugh (2014).

Pressure supported dwarfs generally evince large mass discrepancies as well. So in this regard, DF2 is indeed very unusual. So what gives?

Perhaps DF2 formed that way, without dark matter. This is anathema to everything we know about galaxy formation in ΛCDM cosmology. Dark halos have to form first, with baryons following.

Perhaps DF2 suffered one or more tidal interactions with NGC 1052. Sub-halos in simulations are often seen to be on highly radial orbits; perhaps DF2 has had its dark matter halo stripped away by repeated close passages. Since the stars reside deep in the center of the subhalo, they’re the last thing to be stripped away. So perhaps we’ve caught this one at that special time when the dark matter has been removed but the stars still remain.

This is improbable, but ought to happen once in a while. The bigger problem I see is that one cannot simply remove the dark matter halo like yanking a tablecloth and leaving the plates. The stars must respond to the change in the gravitational potential; they too must diffuse away. That might be a good way to make the galaxy diffuse, ultimately perhaps even ultradiffuse, but the observed motions are then not representative of an equilibrium situation. This is critical to the mass estimate, which must perforce assume an equilibrium in which the gravitational potential well of the galaxy is balanced against the kinetic motion of its contents. Yank away the dark matter halo, and the assumption underlying the mass estimate gets yanked with it. While such a situation may arise, it makes it very difficult to interpret the velocities: all tests are off. This is doubly true in MOND, in which dwarfs are even more susceptible to disruption.

onedoesnotyank

Then there are the data themselves. Blaming the data should be avoided, but it does happen once in a while that some observation is misleading. In this case, I am made queasy by the fact that the velocity dispersion is estimated from only ten tracers. I’ve seen plenty of cases where the velocity dispersion changes in important ways when more data are obtained, even starting from more than 10 tracers. Andromeda II comes to mind as an example. Indeed, several people have pointed out that if we did the same exercise with Fornax, using its globular clusters as the velocity tracers, we’d get a similar answer to what we find in DF2. But we also have measurements of many hundreds of stars in Fornax, so we know that answer is wrong. Perhaps the same thing is happening with DF2? The fact that DF2 is an outlier from everything else we know empirically suggests caution.

Throwing caution and fact-checking to the wind, many people have been predictably eager to cite DF2 as a falsification of MOND. Van Dokkum et al. point out that the velocity dispersion predicted for this object by MOND is 20 km/s, more than a factor of two above their measured value. They make the MOND prediction for the case of an isolated object. DF2 is not isolated, so one must consider the external field effect (EFE).

The criterion by which to judge isolation in MOND is whether the acceleration due to the mutual self-gravity of the stars is less than the acceleration from an external source, in this case the host NGC 1052. Following the method outlined by McGaugh & Milgrom, and based on the stellar mass (adopting M/L = 2, as both we and van Dokkum assume), I estimate the internal acceleration of DF2 to be g_in = 0.15 a0. Here a0 is the critical acceleration scale in MOND, 1.2 × 10⁻¹⁰ m/s². Using this number and treating DF2 as isolated, I get the same 20 km/s van Dokkum et al. estimate.

Estimating the external field is more challenging. It depends on the mass of NGC 1052 and the separation between it and DF2. The projected separation at the assumed distance is 80 kpc. That is well within the range over which the EFE is commonly observed to matter in the Local Group. It could be a bit further granted some distance along the line of sight, but if this becomes too large then the distance by association with NGC 1052 has to be questioned, and all bets are off. The mass of NGC 1052 is also rather uncertain, or at least I have heard wildly different values quoted in discussions about this object. Here I adopt 10¹¹ M☉ as estimated by SLUGGS. To get the acceleration, I estimate the asymptotic rotation velocity we’d expect in MOND, V⁴ = a0GM. This gives 200 km/s, which is conservative relative to the ~300 km/s quoted by van Dokkum et al. At a distance of 80 kpc, the corresponding external acceleration is g_ex = 0.14 a0. This is very uncertain, but taken at face value it is indistinguishable from the internal acceleration. Consequently, it cannot be ignored: the calculation published by van Dokkum et al. is not the correct prediction for MOND.
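These numbers can be reproduced with a short sketch. I assume a stellar mass of 2 × 10⁸ M☉ for DF2 (i.e., M/L = 2 times a luminosity of ~10⁸ L☉ – my assumption for illustration) and use the standard isolated MOND dispersion formula σ = (4 a0 G M/81)^1/4 together with V⁴ = a0GM for the host:

```python
# Back-of-envelope MOND numbers for DF2 and its host NGC 1052.
G    = 6.674e-11   # m^3 kg^-1 s^-2
A0   = 1.2e-10     # MOND critical acceleration scale, m/s^2
MSUN = 1.989e30    # kg
KPC  = 3.086e19    # m

M_star = 2e8 * MSUN    # assumed stellar mass of DF2 (M/L = 2)
M_host = 1e11 * MSUN   # adopted mass of NGC 1052 (SLUGGS estimate)
R_sep  = 80 * KPC      # projected separation from the host

# Isolated MOND velocity dispersion: sigma = (4 a0 G M / 81)^(1/4)
sigma_iso = (4.0 * A0 * G * M_star / 81.0) ** 0.25

# External field from the host: V^4 = a0 G M, then g_ex = V^2 / R
V_host = (A0 * G * M_host) ** 0.25
g_ex   = V_host**2 / R_sep

print(sigma_iso / 1e3)  # ~20 km/s: the isolated prediction
print(V_host / 1e3)     # ~200 km/s
print(g_ex / A0)        # ~0.14 a0, comparable to g_in ~ 0.15 a0

# With g_ex comparable to g_in, the total acceleration roughly doubles,
# so the predicted dispersion drops by about sqrt(2):
print(sigma_iso / 2**0.5 / 1e3)  # ~14 km/s
```

The point of the sketch is not precision (the host mass and separation are both uncertain) but that the external field is not negligible, so the isolated 20 km/s is not the right MOND prediction.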

The velocity dispersion estimator in MOND differs when gex < gin and gex > gin (see equations 2 and 3 of McGaugh & Milgrom). Strictly speaking, these apply in the limits where one or the other field dominates. When they are comparable, the math gets more involved (see equation 59 of Famaey & McGaugh). The input data are too uncertain to warrant an elaborate calculation for a blog, so I note simply that the amplitude of the mass discrepancy in MOND depends on how deep in the MOND regime a system is. That is, how far below the critical acceleration scale it is. The lower the acceleration, the larger the discrepancy. This is why LSB galaxies appear to be dark matter dominated; their low surface densities result in low accelerations.

For DF2, the absolute magnitude of the acceleration is approximately doubled by the presence of the external field. It is not as deep in the MOND regime as assumed in the isolated case, so the mass discrepancy is smaller, decreasing the MOND-predicted velocity dispersion by roughly the square root of 2. For a factor of 2 range in the stellar mass-to-light ratio (as in McGaugh & Milgrom), this crude MOND prediction becomes

σ = 14 ± 4 km/s.

Like any erstwhile theorist, I reserve the right to modify this prediction granted more elaborate calculations, or new input data, especially given the uncertainties in the distance and mass of the host. Indeed, we should consider the possibility of tidal disruption, which can happen in MOND more readily than with dark matter. Indeed, at one point I came very close to declaring MOND dead because the velocity dispersions of the ultrafaint dwarf galaxies were off, only realizing late in the day that MOND actually predicts that these things should be getting tidally disrupted (as is also expected, albeit somewhat differently, in ΛCDM), so that the velocity dispersions might not reflect the equilibrium expectation.

In DF2, the external field almost certainly matters. Barring wild errors of the sort discussed or unforeseen, I find it hard to envision the MONDian velocity dispersion falling outside the range 10 – 18 km/s. This is not as high as the 20 km/s predicted by van Dokkum et al. for an isolated object, nor as small as they measure for DF2 (8.4 km/s). They quote a 90% confidence upper limit of 10 km/s, which is marginally consistent with the lower end of the prediction (corresponding to M/L = 1). So we cannot exclude MOND based on these data.

That said, the agreement is marginal. Still, 90% is not very high confidence by scientific standards. Based on experience with such data, this likely overstates how well we know the velocity dispersion of DF2. Put another way, I am 90% confident that when better data are obtained, the measured velocity dispersion will increase above the 10 km/s threshold.

More generally, experience has taught me three things:

  1. In matters of particle physics, do not bet against the Standard Model.
  2. In matters cosmological, do not bet against ΛCDM.
  3. In matters of galaxy dynamics, do not bet against MOND.

The astute reader will realize that these three assertions are mutually exclusive. The dark matter of ΛCDM is a bet that there are new particles beyond the Standard Model. MOND is a bet that what we call dark matter is really the manifestation of physics beyond General Relativity, on which cosmology is based. Which is all to say, there is still some interesting physics to be discovered.

Degenerating problemshift: a wedged paradigm in great tightness


Reading Merritt’s paper on the philosophy of cosmology, I was struck by a particular quote from Lakatos:

A research programme is said to be progressing as long as its theoretical growth anticipates its empirical growth, that is as long as it keeps predicting novel facts with some success (“progressive problemshift”); it is stagnating if its theoretical growth lags behind its empirical growth, that is as long as it gives only post-hoc explanations either of chance discoveries or of facts anticipated by, and discovered in, a rival programme (“degenerating problemshift”) (Lakatos, 1971, pp. 104–105).

The recent history of modern cosmology is rife with post-hoc explanations of unanticipated facts. The cusp-core problem and the missing satellites problem are prominent examples. These are explained after the fact by invoking feedback, a vague catch-all that many people agree solves these problems even though none of them agree on how it actually works.

FeedbackCartoonSilkMamon
Cartoon of the feedback explanation for the difference between the galaxy luminosity function (blue line) and the halo mass function (red line). From Silk & Mamon (2012).

There are plenty of other problems. To name just a few: satellite planes (unanticipated correlations in phase space), the emptiness of voids, and the early formation of structure  (see section 4 of Famaey & McGaugh for a longer list and section 6 of Silk & Mamon for a positive spin on our list). Each problem is dealt with in a piecemeal fashion, often by invoking solutions that contradict each other while buggering the principle of parsimony.

It goes like this. A new observation is made that does not align with the concordance cosmology. Hands are wrung. Debate is had. Serious concern is expressed. A solution is put forward. Sometimes it is reasonable, sometimes it is not. In either case it is rapidly accepted so long as it saves the paradigm and prevents the need for serious thought. (“Oh, feedback does that.”) The observation is no longer considered a problem through familiarity and exhaustion of patience with the debate, regardless of how [un]satisfactory the proffered solution is. The details of the solution are generally forgotten (if ever learned). When the next problem appears the process repeats, with the new solution often contradicting the now-forgotten solution to the previous problem.

This has been going on for so long that many junior scientists now seem to think this is how science is supposed to work. It is all they’ve experienced. And despite our claims to be interested in fundamental issues, most of us are impatient with re-examining issues that were thought to be settled. All it takes is one bold assertion that everything is OK, and the problem is perceived to be solved whether it actually is or not.

“Is there any more?”

That is the process we apply to little problems. The Big Problems remain the post hoc elements of dark matter and dark energy. These are things we made up to explain unanticipated phenomena. That we need to invoke them immediately casts the paradigm into what Lakatos called degenerating problemshift. Once we’re there, it is hard to see how to get out, given our propensity to overindulge in the honey that is the infinity of free parameters in dark matter models.

Note that there is another aspect to what Lakatos said about facts anticipated by, and discovered in, a rival programme. Two examples spring immediately to mind: the Baryonic Tully-Fisher Relation and the Radial Acceleration Relation. These are predictions of MOND that were unanticipated in the conventional dark matter picture. Perhaps we can come up with post hoc explanations for them, but that is exactly what Lakatos would describe as degenerating problemshift. The rival programme beat us to it.

In my experience, this is a good description of what is going on. The field of dark matter has stagnated. Experimenters look harder and harder for the same thing, repeating the same experiments in hope of a different result. Theorists turn knobs on elaborate models, gifting themselves new free parameters every time they get stuck.

On the flip side, MOND keeps predicting novel facts with some success, so it remains in the stage of progressive problemshift. Unfortunately, MOND remains incomplete as a theory, and doesn’t address many basic issues in cosmology. This is a different kind of unsatisfactory.

In the mean time, I’m still waiting to hear a satisfactory answer to the question I’ve been posing for over two decades now. Why does MOND get any predictions right? It has had many a priori predictions come true. Why does this happen? It shouldn’t. Ever.

Neutrinos got mass!


In 1984, I heard Hans Bethe give a talk in which he suggested the dark matter might be neutrinos. This sounded outlandish – from what I had just been taught about the Standard Model, neutrinos were massless. Worse, I had been given the clear impression that it would screw everything up if they did have mass. This was the pervasive attitude, even though the solar neutrino problem was known at the time. This did not compute! so many of us were inclined to ignore it. But, I thought, in the unlikely event it turned out that neutrinos did have mass, surely that would be the answer to the dark matter problem.

Flash forward a few decades, and sure enough, neutrinos do have mass. Oscillations between flavors of neutrinos have been observed in both solar and atmospheric neutrinos. This implies non-zero mass eigenstates. We don’t yet know the absolute value of the neutrino mass, but the oscillations do constrain the separation between mass states (Δm²₂₁ = 7.53×10⁻⁵ eV² for solar neutrinos, and Δm²₃₁ = 2.44×10⁻³ eV² for atmospheric neutrinos).

Though the absolute values of the neutrino mass eigenstates are not yet known, there are upper limits. These don’t allow enough mass to explain the cosmological missing mass problem. The relic density of neutrinos is

Ωνh² = ∑mν/(93.5 eV)

In order to make up the dark matter density (Ω ≈ 1/4), we need ∑mν ≈ 12 eV. The experimental upper limit on the electron neutrino mass is mν < 2 eV. There are three neutrino mass eigenstates, and the difference in mass between them is tiny, so ∑mν < 6 eV. Neutrinos could conceivably add up to more mass than baryons, but they cannot add up to be the dark matter.
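The numbers in this argument follow directly from the relic density formula above. A minimal check, assuming h ≈ 0.7 for the Hubble parameter:

```python
# Relic neutrino density: Omega_nu h^2 = sum(m_nu) / 93.5 eV
h = 0.7  # assumed dimensionless Hubble parameter

def omega_nu(sum_m_ev):
    """Neutrino density parameter for a given sum of masses (eV)."""
    return sum_m_ev / 93.5 / h**2

# Sum of masses needed to supply the full dark matter density, Omega ~ 1/4:
needed = 0.25 * h**2 * 93.5
print(needed)         # ~11.5 eV: the "sum ~ 12 eV" quoted in the text

# What the lab limit allows (m < 2 eV per state, three nearly degenerate states):
print(omega_nu(6.0))  # ~0.13: potentially more than baryons, far short of dark matter
```

So even saturating the experimental limit, neutrinos fall short of the dark matter density by about a factor of two, and realistic masses fall short by far more.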

In recent years, I have started to hear the assertion that we have already detected dark matter, with neutrinos given as the example. They are particles with mass that only interact with us through the weak nuclear force and gravity. In this respect, they are like WIMPs.

Here the equivalence ends. Neutrinos are Standard Model particles that have been known for decades. WIMPs are hypothetical particles that reside in a hypothetical supersymmetric sector beyond the Standard Model. Conflating the two to imply that WIMPs are just as natural as neutrinos is a false equivalency.

That said, massive neutrinos might be one of the few ways in which hierarchical cosmogony, as we currently understand it, is falsifiable. Whatever the dark matter is, we need it to be dynamically cold. This property is necessary for it to clump into dark matter halos that seed galaxy formation. Too much hot (relativistic) dark matter (neutrinos) suppresses structure formation. A nascent dark matter halo is nary a speed bump to a neutrino moving near the speed of light: if those fast neutrinos carry too much mass, they erase structure before it can form.

One of the great successes of ΛCDM is its explanation of structure formation: the growth of large scale structure from the small fluctuations in the density field at early times. This is usually quantified by the power spectrum – in the CMB at z > 1000 and from the spatial distribution of galaxies at z = 0. This all works well provided the dominant dark mass is dynamically cold, and there isn’t too much hot dark matter fighting it.

t16_galaxy_power_spectrum
The power spectrum from the CMB (low frequency/large scales) and the galaxy distribution (high frequency/”small” scales). Adapted from Whittle.

How much is too much? The power spectrum puts strong limits on the amount of hot dark matter that is tolerable. The upper limit is ∑mν < 0.12 eV. This is an order of magnitude stronger than direct experimental constraints.

Usually, it is assumed that the experimental limit will eventually come down to the structure formation limit. That does seem likely, but it is also conceivable that the neutrino mass has some intermediate value, say mν ≈ 1 eV. Such a result, were it to be obtained experimentally, would falsify the current CDM cosmogony.

Such a result seems unlikely, of course. Shooting for a narrow window such as the gap between the current cosmological and experimental limits is like drawing to an inside straight. It can happen, but it is unwise to bet the farm on it.

It should be noted that a circa 1 eV neutrino would have some desirable properties in a MONDian universe. MOND can form large scale structure, much like CDM, but it does so faster. This is good for clearing out the voids and getting structure in place early, but it tends to overproduce structure by z = 0. An admixture of neutrinos might help with that. A neutrino with an appreciable mass would also help with the residual mass discrepancy MOND suffers in clusters of galaxies.

If experiments measure a neutrino mass in excess of the cosmological limit, it would be powerful motivation to consider MOND-like theories as a driver of structure formation. If instead the neutrino does prove to be tiny, ΛCDM will have survived another test. That wouldn’t falsify MOND (or really have any bearing on it), but it would remove one potential “out” for the galaxy cluster problem.

Tiny though they be, neutrinos got mass! And it matters!

LCDM has met the enemy, and it is itself


David Merritt recently published the article “Cosmology and convention” in Studies in History and Philosophy of Science. This article is remarkable in many respects. For starters, it is rare that a practicing scientist reads a paper on the philosophy of science, much less publishes one in a philosophy journal.

I was initially loathe to start reading this article, frankly for fear of boredom: me reading about cosmology and the philosophy of science is like coals to Newcastle. I could not have been more wrong. It is a genuine page turner that should be read by everyone interested in cosmology.

I have struggled for a long time with whether dark matter constitutes a falsifiable scientific hypothesis. It straddles the border: specific dark matter candidates (e.g., WIMPs) are confirmable – a laboratory detection is both possible and plausible – but the concept of dark matter can never be excluded. If we fail to find WIMPs in the region of mass–cross-section parameter space where we expected them, we can change the prediction. This moving of the goal post has already happened repeatedly.

wimplimits2017
The cross-section vs. mass parameter space for WIMPs. The original, “natural” weak interaction cross-section (10⁻³⁹ cm²) was excluded long ago, as were early attempts to map out the theoretically expected parameter space (upper pink region). Later predictions drifted to progressively lower cross-sections. These evaded experimental limits at the time, and confident predictions were made that the dark matter would be found. More recent data show otherwise: the gray region is excluded by PandaX (2016). [This plot was generated with the help of DMTools hosted at Brown.]

I do not find it encouraging that the goal posts keep moving. This raises the question: how far can we go? Arbitrarily low cross-sections can be extracted from theory if we work at it hard enough. How hard should we work? That is, what criteria do we set whereby we decide the WIMP hypothesis is mistaken?

There has to be some criterion by which we would consider the WIMP hypothesis to be falsified. Without such a criterion, it does not satisfy the strictest definition of a scientific hypothesis. If at some point we fail to find WIMPs and are dissatisfied with the theoretical fine-tuning required to keep them hidden, we are free to invent some other dark matter candidate. No WIMPs? Must be axions. Not axions? Would you believe light dark matter? [Worst. Name. Ever.] And so on, ad infinitum. The concept of dark matter is not falsifiable, even if specific dark matter candidates are subject to being made to seem very unlikely (e.g., brown dwarfs).

Faced with this situation, we can consult the philosophy of science. Merritt discusses how many of the essential tenets of modern cosmology follow from what Popper would term “conventionalist stratagems” – ways to dodge serious consideration that a treasured theory is threatened. I find this a compelling terminology, as it formalizes an attitude I have witnessed among scientists, especially cosmologists, many times. It was put more colloquially by J.K. Galbraith:

“Faced with the choice between changing one’s mind and proving that there is no need to do so, almost everybody gets busy on the proof.”

Boiled down (Keuth 2005), the conventionalist strategems Popper identifies are

  1. ad hoc hypotheses
  2. modification of ostensive definitions
  3. doubting the reliability of the experimenter
  4. doubting the acumen of the theorist

These are stratagems to be avoided according to Popper. At the least they are pitfalls to be aware of, but as Merritt discusses, modern cosmology has marched down exactly this path, doing each of these in turn.

The ad hoc hypotheses of ΛCDM are of course Λ and CDM. Faced with the observation of a metric that cannot be reconciled with the prior expectation of a decelerating expansion rate, we re-invoke Einstein’s greatest blunder, Λ. We even generalize the notion and give it a fancy new name, dark energy, which has the convenient property that it can fit any observed set of monotonic distance-redshift pairs. Faced with an excess of gravitational attraction over what can be explained by normal matter, we invoke non-baryonic dark matter: some novel form of mass that has no place in the standard model of particle physics, has yet to show any hint of itself in the laboratory, and cannot be decisively excluded by experiment.

We didn’t accept these ad hoc add-ons easily or overnight. Persuasive astronomical evidence drove us there, but all these data really show is that something dire is wrong: General Relativity plus known standard model particles cannot explain the universe. Λ and CDM are more a first guess than a final answer. They’ve been around long enough that they have become familiar, almost beyond doubt. Nevertheless, they remain unproven ad hoc hypotheses.

The sentiment that is often asserted is that cosmology works so well that dark matter and dark energy must exist. But a more conservative statement would be that our present understanding of cosmology is correct if and only if these dark entities exist. The onus is on us to detect dark matter particles in the laboratory.

That’s just the first conventionalist stratagem. I could give many examples of violations of the other three, just from my own experience. That would make for a very long post indeed.

Instead, you should go read Merritt’s paper. There are too many things there to discuss in a single post; you’re better off going to the source. Be prepared for some cognitive dissonance.


Crater 2: the Bullet Cluster of LCDM


Recently I have been complaining about the low standards to which science has sunk. It has become normal to be surprised by an observation, express doubt about the data, blame the observers, slowly let it sink in, bicker and argue for a while, construct an unsatisfactory model that sort-of, kind-of explains the surprising data but not really, call it natural, then pretend like that’s what we expected all along. This has been going on for so long that younger scientists might be forgiven if they think this is how science is supposed to work. It is not.

At the root of the scientific method is hypothesis testing through prediction and subsequent observation. Ideally, the prediction comes before the experiment. The highest standard is a prediction made before the fact in ignorance of the ultimate result. This is incontrovertibly superior to post-hoc fits and hand-waving explanations: it is how we’re supposed to avoid playing favorites.

I predicted the velocity dispersion of Crater 2 in advance of the observation, for both ΛCDM and MOND. The prediction for MOND is reasonably straightforward. That for ΛCDM is fraught. There is no agreed method by which to do this, and it may be that the real prediction is that this sort of thing is not possible to predict.

The reason it is difficult to predict the velocity dispersions of specific, individual dwarf satellite galaxies in ΛCDM is that the stellar mass-halo mass relation must be strongly non-linear to reconcile the steep mass function of dark matter sub-halos with their small observed numbers. This is closely related to the M*-Mhalo relation found by abundance matching. The consequence is that the luminosity of dwarf satellites can change a lot for tiny changes in halo mass.

Fig. 11 from Tollerud et al. (2011, ApJ, 726, 108). The width of the bands illustrates the minimal scatter expected between dark halo and measurable properties. A dwarf of a given luminosity could reside in dark halos differing by two decades in mass, with a corresponding effect on the velocity dispersion.

Long story short, the nominal expectation for ΛCDM is a lot of scatter. Photometrically identical dwarfs can live in halos with very different velocity dispersions. The trend between mass, luminosity, and velocity dispersion is so weak that it might barely be perceptible. The photometric data should not be predictive of the velocity dispersion.

It is hard to get even a ballpark answer that doesn’t make reference to other measurements. Empirically, there is some correlation between size and velocity dispersion. This “predicts” σ = 17 km/s. That is not a true theoretical prediction; it is just the application of data to anticipate other data.

Abundance matching relations provide a highly uncertain estimate. The first time I tried to do this, I got unphysical answers (σ = 0.1 km/s, which is less than the stars alone would cause without dark matter – about 0.5 km/s). The application of abundance matching requires extrapolation of fits to data at high mass to very low mass. Extrapolating the M*-Mhalo relation over many decades in mass is very sensitive to the low mass slope of the fitted relation, so it depends on which one you pick.
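To see how sensitive this extrapolation is, here is a toy inversion of an abundance-matching relation. This is a sketch, not any published fit: the parameter values are only loosely Moster et al. (2013)-like, and the stellar mass for a Crater 2-like dwarf is an assumed round number.

```python
# Toy abundance-matching inversion. Parameter values are illustrative
# (loosely Moster et al. 2013-like); the stellar mass is an assumed
# round value for a Crater 2-like dwarf, not a measurement.
M1 = 10 ** 11.59   # characteristic halo mass [Msun]
N = 0.0351         # normalization of the M*/Mhalo ratio

def halo_mass(mstar, beta):
    """Invert the low-mass limit M* ~ 2*N*M1*(Mh/M1)**(1+beta)."""
    return M1 * (mstar / (2 * N * M1)) ** (1.0 / (1.0 + beta))

mstar = 3e5  # assumed stellar mass [Msun]
mh_shallow = halo_mass(mstar, beta=1.0)  # shallow low-mass slope
mh_steep = halo_mass(mstar, beta=2.5)    # steep low-mass slope

# The implied velocity scales roughly as Mh**(1/3), so a ~10x
# swing in halo mass is a ~2x swing in the predicted dispersion.
print(mh_steep / mh_shallow)             # roughly an order of magnitude
print((mh_steep / mh_shallow) ** (1/3))  # factor of ~2 in velocity
```

The same stellar mass maps to halo masses an order of magnitude apart depending only on the assumed low-mass slope, which is exactly why the answer "depends on which one you pick."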


Since my first pick did not work, let’s go with the value suggested to me by James Bullock: σ = 11 km/s. That is the mid-value (the blue lines in the figure above); the true value could easily scatter higher or lower. Very hard to predict with any precision. But given the luminosity and size of Crater 2, we expect numbers like 11 or 17 km/s.

The measured velocity dispersion is σ = 2.7 ± 0.3 km/s.

This is incredibly low. Shockingly so, considering the enormous size of the system (1 kpc half light radius). The NFW halos predicted by ΛCDM don’t do that.

To illustrate how far off this is, I have adapted this figure from Boylan-Kolchin et al. (2012).

Fig. 1 of MNRAS, 422, 1203 illustrating the “too big to fail” problem: observed dwarfs have lower velocity dispersions than sub-halos that must exist and should host similar or even more luminous dwarfs that apparently do not exist. I have had to extend the range of the original graph to lower velocities in order to include Crater 2.

Basically, NFW halos, including the sub-halos imagined to host dwarf satellite galaxies, have rotation curves that rise rapidly and stay high in proportion to the cube root of the halo mass. This property makes it very challenging to explain a low velocity at a large radius: exactly the properties observed in Crater 2.
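The cube-root scaling is easy to see in a back-of-the-envelope virial calculation. This sketch uses only the virial relation V = (GM/R)^1/2 with R set by a mean overdensity of 200 ρ_crit; the values of H0 = 70 km/s/Mpc and the overdensity are assumed round numbers, and NFW profile details are ignored.

```python
import math

# Virial scaling sketch: V = sqrt(G*M/R) with R set by a mean
# overdensity of 200*rho_crit, so V scales exactly as M**(1/3).
# H0 = 70 km/s/Mpc and the overdensity 200 are assumed round values.
G = 6.674e-11                 # m^3 kg^-1 s^-2
MSUN = 1.989e30               # kg
H0 = 70e3 / 3.086e22          # s^-1
rho_crit = 3 * H0**2 / (8 * math.pi * G)

def v_circ_kms(m_halo_msun):
    """Circular velocity at the virial radius of a 200*rho_crit halo."""
    m = m_halo_msun * MSUN
    r = (3 * m / (800 * math.pi * rho_crit)) ** (1.0 / 3.0)
    return math.sqrt(G * m / r) / 1e3

v_big = v_circ_kms(1e10)   # ~30 km/s: a typical dwarf-hosting halo
v_small = v_circ_kms(1e7)  # exactly 10x smaller, ~3 km/s
```

On this scaling, a halo slow enough to match a ~3 km/s system sits near 10⁷ M☉, which is the absurdly low regime discussed below.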

Let’s not fail to appreciate how extremely wrong this is. The original version of the graph above stopped at 5 km/s. It didn’t extend to lower values because they were absurd. There was no reason to imagine that this would be possible. Indeed, the point of their paper was that the observed dwarf velocity dispersions were already too low. To get to lower velocity, you need an absurdly low mass sub-halo – around 10⁷ M☉. In contrast, the usual inference of masses for sub-halos containing dwarfs of similar luminosity is around 10⁹ M☉ to 10¹⁰ M☉. So the low observed velocity dispersion – especially at such a large radius – seems nigh on impossible.

More generally, there is no way in ΛCDM to predict the velocity dispersions of particular individual dwarfs. There is too much intrinsic scatter in the highly non-linear relation between luminosity and halo mass. Given the photometry, all we can say is “somewhere in this ballpark.” Making an object-specific prediction is impossible.

Except that it is possible. I did it. In advance.

The predicted velocity dispersion is σ = 2.1 +0.9/-0.6 km/s.

I’m an equal opportunity scientist. In addition to ΛCDM, I also considered MOND. The successful prediction is that of MOND. (The quoted uncertainty reflects the uncertainty in the stellar mass-to-light ratio.) The difference is that MOND makes a specific prediction for every individual object. And it comes true. Again.

MOND is a funny theory. The amplitude of the mass discrepancy it induces depends on how low the acceleration of a system is. If Crater 2 were off by itself in the middle of intergalactic space, MOND would predict it should have a velocity dispersion of about 4 km/s.

But Crater 2 is not isolated. It is close enough to the Milky Way that there is an additional, external acceleration imposed by the Milky Way. The net result is that the acceleration isn’t quite as low as it would be were Crater 2 all by its lonesome. Consequently, the predicted velocity dispersion is a measly 2 km/s. As observed.
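The isolated case above can be sketched with the deep-MOND estimator for an isotropic system, σ⁴ = (4/81) G M a₀ (Milgrom 1994-style). The stellar mass used here is an assumed round value, not the measured photometry; the EFE correction that brings ~4 km/s down to ~2 km/s is not included in this isolated-case sketch.

```python
# Deep-MOND velocity dispersion for an isolated, isotropic system:
#   sigma**4 = (4/81) * G * M * a0   (Milgrom 1994-style estimator).
# The stellar mass below is an assumed round value (~3e5 Msun).
G = 6.674e-11      # m^3 kg^-1 s^-2
MSUN = 1.989e30    # kg
A0 = 1.2e-10       # m/s^2, the MOND acceleration scale

def sigma_iso_kms(m_msun):
    """Isolated deep-MOND dispersion in km/s."""
    return ((4.0 / 81.0) * G * m_msun * MSUN * A0) ** 0.25 / 1e3

sigma = sigma_iso_kms(3e5)   # close to the ~4 km/s quoted above
```

Note that σ depends only on the fourth root of the mass, which is why the MOND prediction is so insensitive to the uncertain stellar mass-to-light ratio.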

In MOND, this is called the External Field Effect (EFE). Theoretically, the EFE is rather disturbing, as it breaks the Strong Equivalence Principle. In particular, Local Position Invariance in gravitational experiments is violated: the velocity dispersion of a dwarf satellite depends on whether it is isolated from its host or not. Weak equivalence (the universality of free fall) and the Einstein Equivalence Principle (which excludes gravitational experiments) may still hold.

We identified several pairs of photometrically identical dwarfs around Andromeda. Some are subject to the EFE while others are not. We see the predicted effect of the EFE: isolated dwarfs have higher velocity dispersions than their twins afflicted by the EFE.

If it is just a matter of sub-halo mass, the current location of the dwarf should not matter. The velocity dispersion certainly should not depend on the bizarre MOND criterion for whether a dwarf is affected by the EFE or not. It isn’t a simple distance-dependency. It depends on the ratio of internal to external acceleration. A relatively dense dwarf might still behave as an isolated system close to its host, while a really diffuse one might be affected by the EFE even when very remote.
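That criterion can be sketched numerically by comparing the internal acceleration g_int ≈ GM/r² to the external field g_ext ≈ V²_MW/D. All input values below are assumed round numbers for illustration, not measurements.

```python
# EFE criterion sketch: a dwarf is EFE-dominated when the external
# field from its host exceeds its own internal acceleration.
# All inputs are assumed round values for illustration.
G = 6.674e-11      # m^3 kg^-1 s^-2
MSUN = 1.989e30    # kg
KPC = 3.086e19     # m
V_MW = 220e3       # m/s, flat rotation speed of the Milky Way

def accelerations(m_msun, r_kpc, d_kpc):
    """Internal (GM/r^2) and external (V^2/D) accelerations in m/s^2."""
    g_int = G * m_msun * MSUN / (r_kpc * KPC) ** 2
    g_ext = V_MW**2 / (d_kpc * KPC)
    return g_int, g_ext

# A very diffuse dwarf (Crater 2-like): remote, yet EFE-dominated.
gi, ge = accelerations(m_msun=3e5, r_kpc=1.0, d_kpc=120.0)
print(ge > gi)    # True: the external field wins despite D = 120 kpc

# A denser dwarf of the same mass, much closer in: quasi-isolated.
gi2, ge2 = accelerations(m_msun=3e5, r_kpc=0.03, d_kpc=50.0)
print(gi2 > ge2)  # True: internal acceleration wins despite D = 50 kpc
```

This is the sense in which the criterion is not a simple distance-dependency: what matters is the ratio of the two accelerations, not how far the dwarf happens to be from its host.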

When Crater 2 was first discovered, I ground through the math and tweeted the prediction. I didn’t want to write a paper for just one object. However, I eventually did so because I realized that Crater 2 is important as an extreme example of a dwarf so diffuse that it is affected by the EFE despite being very remote (120 kpc from the Milky Way). This is not easy to reproduce any other way. Indeed, MOND with the EFE is the only way that I am aware of whereby it is possible to predict, in advance, the velocity dispersion of this particular dwarf.

If I put my ΛCDM hat back on, it gives me pause that any method can make this prediction. As discussed above, this shouldn’t be possible. There is too much intrinsic scatter in the halo mass-luminosity relation.

If we cook up an explanation for the radial acceleration relation, we still can’t make this prediction. The RAR fit we obtained empirically predicts 4 km/s. This is indistinguishable from MOND for isolated objects. But the RAR itself is just an empirical law – it provides no reason to expect deviations, nor how to predict them. MOND does both, does it right, and has done so before, repeatedly. In contrast, the acceleration of Crater 2 is below the minimum allowed in ΛCDM according to Navarro et al.

For these reasons I consider Crater 2 to be the bullet cluster of ΛCDM. Just as the bullet cluster seems like a straight-up contradiction to MOND, so too does Crater 2 for ΛCDM. It is something ΛCDM really can’t do. The difference is that you can just look at the bullet cluster. With Crater 2 you actually have to understand MOND as well as ΛCDM, and think it through.

So what can we do to save ΛCDM?

Whatever it takes, per usual.

One possibility is that Crater II may represent the “bright” tip of the extremely low surface brightness “stealth” fossils predicted by Bovill & Ricotti. Their predictions are encouraging for getting the size and surface brightness in the right ballpark. But I see no reason in this context to expect such a low velocity dispersion. They anticipate dispersions consistent with the ΛCDM discussion above, and correspondingly high mass-to-light ratios that are greater than observed for Crater 2 (M/L ≈ 10⁴ rather than ~50).

A plausible suggestion I heard was from James Bullock. While noting that reionization should preclude the existence of galaxies in halos below 5 km/s, as we need for Crater 2, he suggested that tidal stripping could reduce an initially larger sub-halo to this point. I am dubious about this, as my impression from the simulations of Penarrubia was that the outer regions of the sub-halo were stripped first while leaving the inner regions (where the NFW cusp predicts high velocity dispersions) largely intact until near complete dissolution. In this context, it is important to bear in mind that the low velocity dispersion of Crater 2 is observed at large radii (1 kpc, not tens of pc). Still, I can imagine ways in which this might be made to work in this particular case, depending on its orbit. Tony Sohn has an HST program to measure the proper motion; this should constrain whether the object has ever passed close enough to the center of the Milky Way to have been tidally disrupted.

Joss Bland-Hawthorn pointed out to me that he has made simulations suggesting that a halo with a mass as low as 10⁷ M☉ could make stars before reionization and retain them. This contradicts much of the conventional wisdom outlined above because those simulations find a much lower (and, in my opinion, more realistic) efficiency for supernova feedback than assumed in most other simulations. If this is correct (as it may well be!) then it might explain Crater 2, but it would wreck all the feedback-based explanations given for all sorts of other things in ΛCDM, like the missing satellite problem and the cusp-core problem. We can’t have it both ways.

Without super-efficient supernova feedback, the Local Group would be filled with a million billion ultrafaint dwarf galaxies!

I’m sure people will come up with other clever ideas. These will inevitably be ad hoc suggestions cooked up in response to a previously inconceivable situation. This will ring hollow to me until we explain why MOND can predict anything right at all.

In the case of Crater 2, it isn’t just a matter of retrospectively explaining the radial acceleration relation. One also has to explain why exceptions to the RAR occur following the very specific, bizarre, and unique EFE formulation of MOND. If I could do that, I would have done so a long time ago.

No matter what we come up with, the best we can hope to do is a post facto explanation of something that MOND predicted correctly in advance. Can that be satisfactory?