A lengthy personal experience with experimental searches for WIMPs

This post is adapted from a web page I wrote in 2008, before starting this blog. It covers ground that I guess is now historic: what was known about WIMPs from their beginnings in the 1980s, and the experimental searches for them. In part, I was just trying to keep track of experimental limits, with updates added as noted since the first writing. This is motivated now by some troll on Twitter trying to gaslight people into believing there were no predictions for WIMPs prior to the discovery of the Higgs boson. Contrary to this assertion, the field had already gone through many generations of predictions, with the theorists moving the goal posts every time a prediction was excluded. I have colleagues involved in WIMP searches who have left that field in disgust at having the goal posts moved on them: what good are the experimental searches if, every time they reach the promised land, they’re simply told the promised land is over the next horizon? You experimentalists just keep your noses to the grindstone, and don’t bother the Big Brains with any inconvenient questions!

We were already very far down this path in 2008 – so far down it that I called it the express elevator to hell, since the predicted interaction cross-section kept decreasing to evade experimental limits. Since that time, theorists have added sideways moves in mass to their evasion tactics, with some advocating for “light” dark matter (less massive than the 2 GeV Lee-Weinberg limit for the minimum WIMP mass) while others advocate for undetectably high mass WIMPzillas (because there’s a lot of unexplored if unexpected parameter space at high mass to roam around in before hitting the unitarity bound. Theorists love to go free range.)

These evasion tactics had become ridiculous well before the Higgs was discovered in 2012. Many people don’t seem to have memories that long, so let’s review. Text in normal font was written in 2008; later additions are italicized.

Seeking WIMPs in all the wrong places

This article has been updated many times since it was first written in 2008, at which time we were already many years down the path it describes.

The Need for Dark Matter
Extragalactic systems like spiral galaxies and clusters of galaxies exhibit mass discrepancies. The application of Newton’s Law of Gravity to the observed stars and gas fails to explain the rapid observed motions. This leads to the inference that some form of invisible mass – dark matter – dominates the dynamics of the universe.

WIMPs
If asked what the dark matter is, most scientists working in the field will respond honestly that we have no idea. There are many possible candidates. Some, like MACHOs (Massive Compact Halo Objects, perhaps brown dwarfs) have essentially been ruled out. However, in our heart of hearts there is a huge odds-on favorite: the WIMP.

WIMP stands for Weakly Interacting Massive Particle. This is an entire class of new fundamental particles that emerge from supersymmetry. Supersymmetry (SUSY) is a theoretical notion by which known elementary particles have supersymmetric partner particles. This notion is not part of the highly successful Standard Model of particle physics, but could extend it, provided that the Higgs boson exists. In the so-called Minimal Supersymmetric Standard Model (MSSM), which was hypothesized to solve the hierarchy problem (i.e., why the mass of the Higgs, and with it the scale of the weak force, is so much smaller than the Planck scale), the lightest stable supersymmetric particle is the neutralino. This is the WIMP that presumably makes up the dark matter.

2020 update: the Higgs does indeed exist. Unfortunately, it is too normal. That is, it fits perfectly well with the Standard Model without any need for SUSY. Indeed, it is so normal that the MSSM is pretty much excluded. One can persist with more complicated theories (as always), but to date SUSY has flunked every experimental test, including the “golden test” of the decay of the Bs meson. Never heard of the golden test? The theorists were all about it until SUSY flunked it; now they never seem to mention it.

Cosmology, meet particle physics
There is a confluence of history in the development of previously distinct fields. The need for cosmological dark matter became clear in the 1980s, the same time that MSSM was hypothesized to solve the hierarchy problem in particle physics. Moreover, it was quickly realized that the cosmological dark matter could not be normal (“baryonic”) matter. New fundamental particles therefore seemed a natural fit.

The cosmic craving for CDM
There are two cosmological reasons why we need non-baryonic cold dark matter (CDM):

  1. The measured density of gravitating mass appears to considerably exceed that in normal matter as constrained by Big Bang Nucleosynthesis (BBN): Ω_m ≈ 6 Ω_b, so Ω_non-baryonic ≈ 5 Ω_baryonic (rough numbers below the list).
  2. Gravity is too weak to grow the presently observed structures (e.g., galaxies, clusters, filaments) from the smooth initial condition observed in the cosmic microwave background (CMB) unless something speeds up the process. Extra mass will do this, but it must not interact with the photons of the CMB the way ordinary matter does.
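
To put rough numbers on the first point (the values below are the now-standard ones, quoted here only to illustrate the factor of six; they were less precisely known in 2008):

$$\Omega_m \approx 0.31, \qquad \Omega_b \approx 0.05 \quad\Longrightarrow\quad \frac{\Omega_m}{\Omega_b} \approx 6, \qquad \Omega_{\mathrm{CDM}} = \Omega_m - \Omega_b \approx 5\,\Omega_b .$$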

By itself, either of these arguments is strong. Together, they were compelling enough to launch the CDM paradigm. (Like most scientists of my generation, I certainly bought into it.)

From the astronomical perspective, all that is required is that the dark matter be non-baryonic and dynamically cold. Non-baryonic so that it does not participate in Big Bang Nucleosynthesis or interact with photons (a good way to remain invisible!), and dynamically cold (i.e., slow moving, not relativistic) so that it can clump and form gravitationally bound structures. Many things might satisfy these astronomical requirements. For example, supermassive black holes fit the bill, though they would somehow have to form in the first second of creation in order not to impact BBN.

The WIMP Miracle
From a particle physics perspective, the early universe was a high energy place where energy and mass could switch from one form to the other freely, as enshrined in Einstein’s E = mc². Pairs of particles and their antiparticles could come and go. However, as the universe expands, it cools. As it cools, it loses the energy necessary to create particle pairs. When this happens for a particular particle depends on its mass – the more mass, the more energy is required, and the earlier that particle-antiparticle pair “freezes out.” After freeze-out, the remaining particle-antiparticle pairs can mutually annihilate, leaving only energy. To avoid this fate, there must either be some asymmetry (apparently there was about one extra proton for every billion proton-antiproton pairs – an asymmetry on which our existence depends, even if we don’t yet understand it) or the “cross section” – the probability for interacting – must be so low that particles and their antiparticles go their separate ways without meeting often enough to annihilate completely. This process leaves some relic density that depends on the properties of the particles.

If one asks what relic density is necessary to make up the cosmic dark matter, the cross section that comes out is about that of the weak nuclear force. A particle that interacts through the weak force but not the electromagnetic force will have about the right relic density. Moreover, it won’t interfere with BBN or the CMB. The WIMPs hypothesized by supersymmetry fit the bill for cosmologists’ CDM. This coincidence of scales – the relic density and the weak force interaction scale – is sometimes referred to as the “WIMP miracle” and was part of the motivation to adopt the WIMP as the leading candidate for cosmological dark matter.
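
To see the arithmetic behind the miracle, here is a minimal numerical sketch. It assumes the standard textbook freeze-out approximation, Ωh² ≈ 3 × 10⁻²⁷ cm³ s⁻¹ / ⟨σv⟩, rather than an actual solution of the Boltzmann equation; the weak-scale cross section plugged in is a round illustrative number.

```python
# The "WIMP miracle" in one line of arithmetic: a thermal relic's density
# scales inversely with its annihilation cross section <sigma v>.

def relic_density(sigma_v):
    """Omega * h^2 for a thermal relic, given <sigma v> in cm^3/s,
    using the standard freeze-out approximation (not a Boltzmann solve)."""
    return 3e-27 / sigma_v

weak_scale_sigma_v = 3e-26  # a typical weak-scale annihilation rate [cm^3/s]
print(f"Omega h^2 ~ {relic_density(weak_scale_sigma_v):.1f}")
# ~0.1, remarkably close to the measured Omega_CDM h^2 ~ 0.12 -- no tuning.
```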

WIMP detection experiments
WIMPs as CDM is a well posed scientific hypothesis subject to experimental verification. From astronomical measurements, we know how much we need in the solar neighborhood – about 0.3 GeV c⁻² cm⁻³. (That means there are a few hundred WIMPs passing through your body at any given moment, depending on the exact mass of the particle.) From particle physics, we know the weak interaction cross section, so can calculate the probability of a WIMP interacting with normal matter. In this respect, WIMPs are very much like neutrinos – they can pass right through solid matter because they do not experience the electromagnetic interactions that make ordinary matter solid. But once in a very rare while, they may come close enough to an atomic nucleus to interact with it via the weak force. This is the signature that can be sought experimentally.
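
To unpack the parenthetical above, a back-of-the-envelope count, assuming a hypothetical 100 GeV WIMP and a rough estimate of the volume of a human body (both illustrative numbers, not measurements):

```python
rho = 0.3            # local dark matter density [GeV c^-2 cm^-3], as quoted above
m_wimp = 100.0       # assumed WIMP mass [GeV c^-2] -- a hypothetical value
body_volume = 7e4    # ~70 kg of mostly water [cm^3] -- a rough estimate

n = rho / m_wimp     # number density of WIMPs [cm^-3]
print(f"{n:.0e} WIMPs/cm^3 -> about {n * body_volume:.0f} inside you right now")
# ~200 for a 100 GeV WIMP; ten times more if the WIMP is ten times lighter.
```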

There is a Nobel Prize waiting for whoever discovers the dark matter, so there are now many experiments seeking to do so. Generically, these use very pure samples of some element (like Germanium or Argon or Xenon) to act as targets for the WIMPs making up the dark matter component of our Milky Way Galaxy. The sensitivity required is phenomenal, and many mundane background events (cosmic rays, natural radioactivity, clumsy colleagues dropping beer cans) that might mimic WIMPs must be screened out. For this reason, there is a strong desire to perform these experiments in deep mine shafts where the apparatus can be shielded from the cosmic rays that bombard our planet and other practical nuisances.

The technology development involved in the hunt for WIMPs is amazing. The experimentalists have accomplished phenomenal things in the hunt for dark matter. That they have so far failed to detect it should give pause to any thinking person acquainted with the history, techniques, and successes of particle physics. This failure is both a surprise and a disappointment to those who understand modern cosmology. It should not come as a surprise to anyone familiar with the dynamical evidence for – and against – dark matter.

Searches for WIMPs are proceeding apace. The sensitivity of these experiments is increasing at an accelerating rate. They already provide important constraints – see the figure:


Searching for WIMPs

This 2008 graph shows the status of searches for Weakly Interacting Massive Particles (WIMPs). The abscissa is the mass of the putative WIMP particle. For reference, the proton has a mass of about one in these units. The ordinate is a measure of the probability for WIMPs to interact with normal matter. Not much! The shaded regions represent theoretical expectations for WIMPs. The light red region is the original (Ellis et al.) forecast. The blue and green regions are more recent predictions (Trotta et al. 2008). The lines are representative experimental limits. The region above each line is excluded – if WIMPs had existed in that range of mass and interaction probability, they would have been detected already. The top line (from CDMS in 2004) excluded much of the original prediction. More recent work (colored lines, circa 2008) now approach the currently expected region.

April 2011 update: XENON100 sees nada. Note how the “expected” region continues to retreat downward in cross section as experiments exclude the previous sweet spots in this parameter. This is the express elevator to hell (see below).

September 2011 update: CRESST-II claims a detection. Unfortunately, their positive result violates limits imposed by several other experiments, including XENON100. Somebody is doing their false event rejection wrong.

July 2012 update: XENON100 still seeing squat. Note that the “head” of the most probable (blue) region in the figure above is now excluded.
It is interesting to compare the time sequence of their results: first | run 8 | run 10.

November 2013 update: LUX sees nothing and excludes the various claims for detections of light dark matter (see inset). This exclusion of light dark matter appears to be highly significant, as the very recent prediction was for about a dozen detections per month, which should have added up to an easy detection rather than the observed absence of events in excess of the expected background. Note also that the new exclusion boundary cuts deeply into the region predicted for traditional heavy (~ 100 GeV) WIMPs by Buchmueller et al. as depicted by XENON100. The Buchmueller et al. “prediction” is already a downscaling from the bulk of probability predicted by Trotta et al. (2008 – the blue region in the figure above). This perpetual adjustment of the expectation for the WIMP cross-section is precisely the dodgy moving of the goal posts that prompted me to first write this web page years ago.

May 2014: “Crunch time” for dark matter comes and goes.

July 2016 update: PandaX sees nada.

August 2016 update: LUX continues to see nada. The minimum of their exclusion line now reaches the bottom axis of the 2009 plot (above, the one with the now-excluded blue blob). The “predicted” WIMP (gray area in the plot within this section) appears to have migrated to higher mass in addition to the downward migration of the cross-section. I guess this is the sideways turbolift to evil-Kirk universe hell.


Indeed, the experiments have perhaps been too successful. The original region of cross section-mass parameter space in which WIMPs were expected to reside was excluded years ago. Not easily dissuaded, theorists waved their hands, invoked the Majorana see-saw mechanism, and reduced the interaction probability to safely non-detectable levels. This is the vertical separation of the reddish and blue-green regions in the figure.

To quote a particle physicist, “The most appealing possibility – a weak scale dark matter particle interacting with matter via Z-boson exchange – leads to the cross section of order 10⁻³⁹ cm² which was excluded back in the ’80s by the first round of dark matter experiments. There exists another natural possibility for WIMP dark matter: a particle interacting via Higgs boson exchange. This would lead to the cross section in the 10⁻⁴²–10⁻⁴⁶ cm² ballpark (depending on the Higgs mass and on the coupling of dark matter to the Higgs).”

From this 2011 Resonaances post

Though set back and discouraged by this theoretical sleight of hand (the WIMP “miracle” is now more of a vague coincidence, like seeing an old flame in Grand Central Station but failing to say anything because (a) s/he is way over on another platform and (b) on reflection, you’re not really sure it was him or her after all), experimentalists have been gaining ground on the newly predicted region. If all goes as planned, most of the plausible parameter space will have been explored in a few more years. (I have heard it asserted that “we’ll know what the dark matter is in 5 years” every 5 years for the past two decades. Make that three decades now.)

The express elevator to hell

We’re on an express elevator to hell – going down!

There is a slight problem with the current predictions for WIMPs. While there is a clear focus point where WIMPs most probably reside (the blue blob in the figure), there is also a long tail to low interaction cross section. If we fail to detect WIMPs when experimental sensitivity encompasses the blob, the presumption will be that we’re just unlucky and WIMPs happen to live in the low-probability tail that is not yet excluded. (Low probability regions tend to seem more reasonable as higher probability regions are rejected and we forget about them.) This is the express elevator to hell. No matter how much time, money, and effort we invest in further experimentation, the answer will always be right around the corner. This process can go on forever.

Is dark matter a falsifiable hypothesis?

The existence of dark matter is an inference, not an experimental fact. Individual candidates for the dark matter can be tested and falsified. For example, it was once reasonable to imagine that innumerable brown dwarfs could be the dark matter. That is no longer true – were there that many brown dwarfs out there, we would have seen them directly by now. The brown dwarf hypothesis has been falsified. WIMPs are falsifiable dark matter candidates – provided we don’t continually revise their interaction probability. If we keep doing this, the hypothesis ceases to have predictive power and is no longer subject to falsification.

The concept of dark matter is not falsifiable. If we exclude one candidate, we are free to make up another one. After WIMPs, the next obvious candidate is axions. Should those be falsified, we invent something else. (Particle physicists love to do this. The literature is littered with half-baked dark matter candidates invented for dubious reasons, often to explain phenomena with obvious astrophysical causes. The ludicrous uproar over the ATIC and PAMELA cosmic ray experiments is a good example.) (Circa 2008, there was a lot of enthusiasm that certain signals detected by cosmic ray experiments were caused by dark matter. These have gone away.)


September 2011 update: Fermi confirms the PAMELA positron excess. Too well for it to be dark matter: there is no upper threshold energy corresponding to the WIMP mass. Apparently these cosmic rays are astrophysical in origin, which comes as no surprise to high energy astrophysicists.

April 2013 update: AMS makes claims to detect dark matter that are so laughably absurd they do not warrant commentary.

September 2016 update: There is no update. People seem to have given up on claiming that there is any sign of dark matter in cosmic rays. There have been claims of dark matter causing signatures in gamma ray data and separately in X-ray data. These never looked credible, and they went away on a time scale so short that, on one occasion, an entire session of a 2014 conference had been planned to discuss a gamma ray signal at 126 GeV as dark matter. I asked the organizers a few months in advance if that was even going to be a thing by the time we met. It wasn’t: every speaker scheduled for that session gave some completely unrelated talk.

November 2019 update: Xenon1T sees no sign of WIMPs. (There is some hint of an excess of electron recoils. These are completely the wrong energy scale to be the signal that this experiment was designed to detect.)

WIMP prediction and limits. The shaded region marks the prediction of Trotta et al. (2008) for the WIMP mass and interaction cross-section. The lighter shade depicts the 95% confidence region, the darker shade the 68% region, and the cross the best fit. The heavy line shows the 90% confidence exclusion limit from the Xenon1T experiment. Everything above the line is excluded, ruling out virtually all the parameter space in which WIMPs had been predicted to reside.

2020 comment: I was present at a meeting in 2009 when the predictions of Trotta et al. (above, in gray, and higher up, in blue and green) were new and fresh. I was, at that point, already feeling like we’d been led down this garden path more than one too many times. So I explicitly asked about the long tail to low cross-section. I was assured that the probability in that tail was < 2%; we would surely detect the WIMP somewhere around the favored value (the X in the gray figure). We did not. Essentially all of that predicted parameter space has been excluded, with only a tiny fraction of the 2% tail extending below current limits. Worse, the top border of the Trotta et al. prediction was based on the knowledge that the parameter space at higher cross section – where the WIMP was originally predicted to reside – had already been experimentally excluded. So the gray region understates the range of parameter space over which WIMPs were reasonably expected to exist. I’m sure there are people who would like to pretend that the right “prediction” for the WIMP is at still lower cross section. That would be an example of how those who are ignorant (or in denial) of history are doomed to repeat it.

I predict that none of the big, expensive WIMP experiments will ever find what they’re looking for. It is past time to admit that the lack of detections is because WIMPs don’t exist. I could be proven wrong by the simple expedient of obtaining a credible WIMP detection. I’m sure there are many bright, ambitious scientists who will take up that challenge. To them I say: after you’ve spent your career at the bottom of a mine shaft with no result to show for it, look up at the sky and remember that I tried to warn you.


A Significant Theoretical Advance

The missing mass problem has been with us many decades now. Going on a century, if you start counting from the work of Oort and Zwicky in the 1930s. Not quite half a century, if we date it from the 1970s, when most of the relevant scientific community started to take it seriously. Either way, that’s a very long time for a major problem to go unsolved in physics. The quantum revolution that overturned our classical view of physics was lightning fast in comparison – see the discussion of Bohr’s theory in the foundation of quantum mechanics in David Merritt’s new book.

To this day, despite tremendous efforts, we have yet to obtain a confirmed laboratory detection of a viable dark matter particle – or even a hint of persuasive evidence for the physics beyond the Standard Model of Particle Physics (e.g., supersymmetry) that would be required to enable the existence of such particles. We cannot credibly claim (as many of my colleagues insist they can) to know that such invisible mass exists. All we really know is that there is a discrepancy between what we see and what we get: the universe and the galaxies within it cannot be explained by General Relativity and the known stable of Standard Model particles.

If we assume that General Relativity is both correct and sufficient to explain the universe, which seems like an excellent assumption, then we are indeed obliged to invoke non-baryonic dark matter. The amount of astronomical evidence that points in this direction is overwhelming. That is how we got to where we are today: once we make the obvious, eminently well-motivated assumption, then we are forced along a path in which we become convinced of the reality of the dark matter, not merely as a hypothetical convenience to cosmological calculations, but as an essential part of physical reality.

I think that the assumption that General Relativity is correct is indeed an excellent one. It has repeatedly passed many experimental and observational tests too numerous to elaborate here. However, I have come to doubt the assumption that it suffices to explain the universe. The only data that test it on the scales where the missing mass problem arises are the very data from which we infer the existence of dark matter. Which we do by assuming that General Relativity holds. The opportunity for circular reasoning is apparent – and frequently indulged.

It should not come as a shock that General Relativity might not be completely sufficient as a theory in all circumstances. This is exactly the motivation for and the working presumption of quantum theories of gravity. That nothing to do with cosmology will be affected along the road to quantum gravity is just another assumption.

I expect that some of my colleagues will struggle to wrap their heads around what I just wrote. I sure did. It was the hardest thing I ever did in science to accept that I might be wrong to be so sure it had to be dark matter – because I was sure it was. As sure of it as any of the folks who remain sure of it now. So imagine my shock when we obtained data that made no sense in terms of dark matter, but had been predicted in advance by a completely different theory, MOND.

When comparing dark matter and MOND, one must weigh all evidence in the balance. Much of the evidence is gratuitously ambiguous, so the conclusion to which one comes depends on how one weighs the more definitive lines of evidence. Some of this points very clearly to MOND, while other evidence prefers non-baryonic dark matter. One of the most important lines of evidence in favor of dark matter is the acoustic power spectrum of the cosmic microwave background (CMB) – the pattern of minute temperature fluctuations in the relic radiation field imprinted on the sky a few hundred thousand years after the Big Bang.

The equations that govern the acoustic power spectrum require General Relativity, but thankfully the small amplitude of the temperature variations permits them to be solved in the limit of linear perturbation theory. So posed, they can be written as a damped and driven oscillator. The power spectrum displays peaks corresponding to standing waves at the epoch of recombination, when the universe transitioned rather abruptly from an opaque plasma to a transparent neutral gas. The edge of a cloud provides an analog: light inside the cloud scatters off the water droplets and doesn’t get very far: the cloud is opaque. Any light that makes it to the edge of the cloud meets no further resistance, and is free to travel to our eyes – which is how we perceive the edge of the cloud. The CMB is the expansion-redshifted edge of the plasma cloud of the early universe.

An easy way to think about a damped and a driven oscillator is a kid being pushed on a swing. The parent pushing the child is a driver of the oscillation. Any resistance – like the child dragging his feet – damps the oscillation. Normal matter (baryons) damps the oscillations – it acts as a net drag force on the photon fluid whose oscillations we observe. If there is nothing going on but General Relativity plus normal baryons, we should see a purely damped pattern of oscillations in which each peak is smaller than the one before it, as seen in the solid line here:

The CMB acoustic power spectrum predicted by General Relativity with no cold dark matter (line) and as observed by the Planck satellite (data points).

As one can see, the case of no Cold Dark Matter (CDM) does well to explain the amplitudes of the first two peaks. Indeed, it was the only hypothesis to successfully predict this aspect of the data in advance of its observation. The small amplitude of the second peak came as a great surprise from the perspective of LCDM. However, without CDM, there is only baryonic damping. Each peak should have a progressively lower amplitude. This is not observed. Instead, the third peak is almost the same amplitude as the second, and clearly higher than expected in the pure damping scenario of no-CDM.

CDM provides a net driving force in the oscillation equations. It acts like the parent pushing the kid. Even though the kid drags his feet, the parent keeps pushing, and the amplitude of the oscillation is maintained. For the third peak at any rate. The baryons are an intransigent child and keep dragging their feet; eventually they win and the power spectrum damps away on progressively finer angular scales (large 𝓁 in the plot).
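
The swing analogy can be made quantitative with a toy oscillator. The sketch below is a cartoon, not the real linearized Einstein-Boltzmann system: the damping coefficient stands in for baryon drag, the periodic forcing for the driving by CDM potential wells, and all the numbers are invented for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

def oscillator(t, y, damping, drive):
    # x'' + damping*x' + x = drive*cos(t): unit natural frequency, with
    # resonant forcing so the "parent" can sustain the oscillation.
    x, v = y
    return [v, -x - damping * v + drive * np.cos(t)]

t = np.linspace(0.0, 40.0, 2000)
for label, drive in [("baryons only (pure damping)", 0.0),
                     ("with a CDM-like driver", 0.3)]:
    sol = solve_ivp(oscillator, (t[0], t[-1]), [1.0, 0.0],
                    t_eval=t, args=(0.2, drive))
    late_amplitude = np.abs(sol.y[0][-500:]).max()
    print(f"{label}: late-time amplitude ~ {late_amplitude:.2f}")
# Without the driver the amplitude damps away; with it, the oscillation
# is maintained -- the analog of the third peak holding its height.
```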

As I wrote in this review, the excess amplitude of the third peak over the no-CDM prediction is the best evidence to my mind in favor of the existence of non-baryonic CDM. Indeed, this observation is routinely cited by many cosmologists to absolutely require dark matter. It is argued that the observed power spectrum is impossible without it. The corollary is that any problem the dark matter picture encounters is a mere puzzle. It cannot be an anomaly because the CMB tells us that CDM has to exist.

Impossible is a high standard. I hope the reader can see the flaw in this line of reasoning. It is the same as above. In order to compute the oscillation power spectrum, we have assumed General Relativity. While not replacing it, the persistent predictive successes of a theory like MOND imply the existence of a more general theory. We do not know that such a theory cannot explain the CMB until we develop said theory and work out its predictions.

That said, it is a tall order. One needs a theory that provides a significant driving term without a large amount of excess invisible mass. Something has to push the swing in a universe full of stuff that only drags its feet. That does seem nigh on impossible. Or so I thought until I heard a talk by Pedro Ferreira where he showed how the scalar field in TeVeS – the relativistic MONDian theory proposed by Bekenstein – might play the same role as CDM. However, he and his collaborators soon showed that the desired effect was indeed impossible, at least in TeVeS: one could not simultaneously fit the third peak and the data preceding the first. This was nevertheless an important theoretical development, as it showed how it was possible, at least in principle, to affect the peak ratios without massive amounts of non-baryonic CDM.

At this juncture, there are two options. One is to seek a theory that might work, and develop it to the point where it can be tested. This is a lot of hard work that is bound to lead one down many blind alleys without promise of ultimate success. The much easier option is to assume that it cannot be done. This is the option adopted by most cosmologists, who have spent the last 15 years arguing that the CMB power spectrum requires the existence of CDM. Some even seem to consider it to be a detection thereof, in which case we might wonder why we bother with all those expensive underground experiments to detect the stuff.

Rather fewer people have invested in the approach that requires hard work. There are a few brave souls who have tried it; these include Constantinos Skordis and Tom Złosnik. Very recently, they have shown that a version of a relativistic MOND theory (which they call RelMOND) does fit the CMB power spectrum. Here is the plot from their paper:

The CMB acoustic power spectrum as fit by the relativistic MOND theory RelMOND of Skordis & Złosnik (figure from their paper).

Note that the black line in their plot is the fit of the LCDM model to the Planck power spectrum data. Their theory does the same thing, so it necessarily fits the data as well. Indeed, a good fit appears to follow for a range of parameters. This is important, because it implies that little or no fine-tuning is needed: this is just what happens. That is arguably better than the case for LCDM, in which the fit is very fine-tuned. Indeed, that was a large part of the point of making the measurement: LCDM requires a very specific set of parameters in order to work. It also leads to tensions with independent measurements of the Hubble constant, the baryon density, and the amplitude of the matter power spectrum at low redshift.

As with any good science result, this one raises a host of questions. It will take time to explore these. But this in itself is a momentous result. Irrespective of whether RelMOND is the right theory or, like TeVeS, just a step on a longer path, it shows that the impossible is in fact possible. The argument that I have heard repeated by cosmologists ad nauseam like a rosary prayer, that dark matter is the only conceivable way to explain the CMB power spectrum, is simply WRONG.

A Philosophical Approach to MOND

A Philosophical Approach to MOND is a new book by David Merritt. This is a major development in both the science of cosmology and astrophysics, on the one hand, and the philosophy and history of science, on the other. It should be required reading for anyone interested in any of these topics.

For many years, David Merritt was a professor of astrophysics who specialized in gravitational dynamics, leading a number of breakthroughs in understanding the effects of supermassive black holes in galaxies on the orbits of the stars around them. He has since transitioned to the philosophy of science. This may not sound like a great leap, but it is: these are different scholarly fields, each with their own traditions, culture, and required background education. Changing fields like this is a bit like switching boats mid-stream: even a strong swimmer may flounder in the attempt, given the many boulders academic disciplines traditionally place in the stream of knowledge to mark their territory. Merritt has managed the feat with remarkable grace, devouring the background reading and coming up to speed in a different discipline to the point of lucid fluency.

For the most part, practicing scientists have little interaction with philosophers and historians of science. Worse, we tend to have little patience for them. The baseline presumption of many physical scientists is that we know what we’re doing; there is nothing the philosophers can teach us. In the daily practice of what Kuhn called normal science, this is close to true. When instead we are faced with potential paradigm shifts, the philosophy of science is critical, and the absence of training in it on the part of many scientists becomes glaring.

In my experience, most scientists seem to have heard of Popper and Kuhn. If that. Physical scientists will almost always pay lip service to Popper’s ideal of falsifiability, and that’s pretty much the extent of it. Living up to that ideal is another matter. If an idea that is near and dear to their hearts and careers is under threat, the knee-jerk response is more commonly “let’s not get carried away!”

There is more to the philosophy of science than that. The philosophers of science have invested lots of effort in considering both how science works in practice (e.g., Kuhn) and how it should work (Popper, Lakatos, …). The practice and the ideal of science are not always the same thing.

The debate about dark matter and MOND hinges on the philosophy of science in a profound way. I do not think it is possible to make real progress out of our current intellectual morass without a deep examination of what science is and what it should be.

Merritt takes us through the methodology of scientific research programs, spelling out what we’ve learned from past experience (the history of science) and from careful consideration of how science should work (its philosophical basis). For example, all scientists agree that it is important for a scientific theory to have predictive power. But we are disturbingly fuzzy on what that means. I frequently hear my colleagues say things like “my theory predicts that” in reference to some observation, when in fact no such prediction was made in advance. What they usually mean is that it fits well with the theory. This is sometimes true – they could have predicted the observation in advance if they had considered that particular case. But sometimes it is retroactive fitting more than prediction – consistency, perhaps, but it could have gone a number of other ways equally well. Worse, it is sometimes a post facto assertion that is simply false: not only was the prediction not made in advance, but the observation was genuinely surprising at the time it was made. Only in retrospect is it “correctly” “predicted.”

The philosophers have considered these situations. One thing I appreciate is Merritt’s review of the various takes philosophers have on what counts as a prediction. I wish I had known these things when I wrote the recent review in which I took a very restrictive definition to avoid the foible above. The philosophers provide better definitions, of which more than one can be usefully applicable. I’m not going to go through them here: you should read Merritt’s book, and those of the philosophers he cites.

From this philosophical basis, Merritt makes a systematic, dare I say, scientific, analysis of the basic tenets of MOND and MONDian theories, and how they fare with regard to their predictions and observational tests. Along the way, he also considers the same material in the light of the dark matter paradigm. Of comparable import to confirmed predictions are surprising observations: if a new theory predicts that the sun will rise in the morning, that is neither new nor surprising. If instead a theory expects one thing but another is observed, that is surprising, and it counts against that theory even if it can be adjusted to accommodate the new fact. I have seen this happen over and over with dark matter: surprising observations (e.g., the absence of cusps in dark matter halos, the small numbers of dwarf galaxies, downsizing in which big galaxies appear to form earliest) are at first ignored, doubted, debated, then partially explained with some mental gymnastics until it is Known and of course, we knew it all along. Merritt explicitly points out examples of this creeping determinism, in which scientists come to believe they predicted something they merely rationalized post facto (hence the preeminence of genuinely a priori predictions that can’t be fudged).

Merritt’s book is also replete with examples of scientists failing to take alternatives seriously. This is natural: we have invested an enormous amount of time developing physical science to the point we have now reached; there is an enormous amount of background material that cannot simply be ignored or discarded. All too often, we are confronted with crackpot ideas that do exactly this. This makes us reluctant to consider ideas that sound crazy on first blush, and most of us will rightly display considerable irritation when asked to do so. For reasons both valid and not, MOND skirts this boundary. I certainly didn’t take it seriously myself, nor really considered it at all, until its predictions came true in my own data. It was so far below my radar that at first I did not even recognize that this is what had happened. But I did know I was surprised; what I was seeing did not make sense in terms of dark matter. So, from this perspective, I can see why other scientists are quick to dismiss it. I did so myself, initially. I was wrong to do so, and so are they.

A common failure mode is to ignore MOND entirely: despite dozens of confirmed predictions, it simply remains off the radar for many scientists. They seem never to have given it a chance, so they simply don’t pay attention when it gets something right. This is pure ignorance, which is not a strong foundation from which to render a scientific judgement.

Another common reaction is to acknowledge then dismiss. Merritt provides many examples where eminent scientists do exactly this with a construction like: “MOND correctly predicted X but…” where X is a single item, as if this is the only thing that [they are aware that] it does. Put this way, it is easy to dismiss – a common refrain I hear is “MOND fits rotation curves but nothing else.” This is a long-debunked falsehood that is asserted and repeated until it achieves the status of common knowledge within the echo chamber of scientists who refuse to think outside the dark matter box.

This is where the philosophy of science is crucial to finding our way forward. Merritt’s book illuminates how this is done. If you are reading these words, you owe it to yourself to read his book.

The halo mass function

I haven’t written much here of late. This is mostly because I have been busy, but also because I have been actively refraining from venting about some of the sillier things being said in the scientific literature. I went into science to get away from the human proclivity for what is nowadays called “fake news,” but we scientists are human too, and are not immune from the same self-deception one sees so frequently exercised in other venues.

So let’s talk about something positive. Current grad student Pengfei Li recently published a paper on the halo mass function. What is that and why should we care?

One of the fundamental predictions of the current cosmological paradigm, ΛCDM, is that dark matter clumps into halos. Cosmological parameters are known with sufficient precision that we have a very good idea of how many of these halos there ought to be. Their number per unit volume as a function of mass (so many big halos, so many more small halos) is called the halo mass function.

An important test of the paradigm is thus to measure the halo mass function. Does the predicted number match the observed number? This is hard to do, since dark matter halos are invisible! So how do we go about it?

Galaxies are thought to form within dark matter halos. Indeed, that’s kinda the whole point of the ΛCDM galaxy formation paradigm. So by counting galaxies, we should be able to count dark matter halos. Counting galaxies was an obvious task long before we thought there was dark matter, so this should be straightforward: all one needs is the measured galaxy luminosity function – the number density of galaxies as a function of how bright they are, or equivalently, how many stars they are made of (their stellar mass). Unfortunately, this goes tragically wrong.

Galaxy stellar mass function and the predicted halo mass function
Fig. 5 from the review by Bullock & Boylan-Kolchin. The number density of objects is shown as a function of their mass. Colored points are galaxies. The solid line is the predicted number of dark matter halos. The dotted line is what one would expect for galaxies if all the normal matter associated with each dark matter halo turned into stars.

This figure shows a comparison of the observed stellar mass function of galaxies and the predicted halo mass function. It is from a recent review, but it illustrates a problem that goes back as long as I can remember. We extragalactic astronomers spent all of the ’90s obsessing over this problem. [I briefly thought that I had solved this problem, but I was wrong.] The observed luminosity function is nearly flat while the predicted halo mass function is steep. Consequently, there should be lots and lots of faint galaxies for every bright one, but instead there are relatively few. This discrepancy becomes progressively more severe to lower masses, with the predicted number of halos being off by a factor of many thousands for the faintest galaxies. The problem is most severe in the Local Group, where the faintest dwarf galaxies are known. Locally it is called the missing satellite problem, but this is just a special case of a more general problem that pervades the entire universe.

Indeed, the small number of low mass objects is just one part of the problem. There are also too few galaxies at large masses. Even where the observed and predicted numbers come closest, around the scale of the Milky Way, they still miss by a large factor (this being a log-log plot, even small offsets are substantial). If we had assigned “explain the observed galaxy luminosity function” as a homework problem and the students had returned as an answer a line that had the wrong shape at both ends and at no point intersected the data, we would flunk them. This is, in effect, what theorists have been doing for the past thirty years. Rather than entertain the obvious interpretation that the theory is wrong, they offer more elaborate interpretations.

Faced with the choice between changing one’s mind and proving that there is no need to do so, almost everybody gets busy on the proof.

J. K. Galbraith

Theorists persist because this is what CDM predicts, with or without Λ, and we need cold dark matter for independent reasons. If we are unwilling to contemplate that ΛCDM might be wrong, then we are obliged to pound the square peg into the round hole, and bend the halo mass function into the observed luminosity function. This transformation is believed to take place as a result of a variety of complex feedback effects, all of which are real and few of which are likely to have the physical effects that are required to solve this problem. That’s way beyond the scope of this post; all we need to know here is that this is the “physics” behind the transformation that leads to what is currently called Abundance Matching.

Abundance matching boils down to drawing horizontal lines in the above figure, thus matching galaxies with dark matter halos of equal number density (abundance). So, just reading off the graph, a galaxy of stellar mass M* = 10⁸ M☉ resides in a dark matter halo of 10¹¹ M☉, one like the Milky Way with M* = 5 × 10¹⁰ M☉ resides in a 10¹² M☉ halo, and a giant galaxy with M* = 10¹² M☉ is the “central” galaxy of a cluster of galaxies with a halo mass of several × 10¹⁴ M☉. And so on. In effect, we abandon the obvious and long-held assumption that the mass in stars should be simply proportional to that in dark matter, and replace it with a rolling fudge factor that maps what we see to what we predict. The rolling fudge factor that follows from abundance matching is called the stellar mass–halo mass relation. Many of the discussions of feedback effects in the literature amount to a post hoc justification for this multiplication of forms of feedback.
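
In code, abundance matching is just the inversion of two cumulative number densities. Here is a minimal sketch; the Schechter parameters and the power-law halo mass function below are placeholders invented for the example, not values read off the figure.

```python
import numpy as np

def n_gal_above(m_star, phi_star=5e-3, m_char=10**10.7, alpha=-1.4):
    """Cumulative galaxy number density n(>M*) [Mpc^-3] for a toy
    Schechter stellar mass function, integrated numerically."""
    m = np.logspace(np.log10(m_star), 13.0, 500)
    x = m / m_char
    dn_dm = phi_star * x**alpha * np.exp(-x) / m_char
    return np.trapz(dn_dm, m)

def n_halo_above(m_halo, norm=1e-3, slope=-0.9, m_ref=1e12):
    """Toy cumulative halo mass function n(>M) [Mpc^-3], a pure power law."""
    return norm * (m_halo / m_ref)**slope

# Match a Milky Way-mass galaxy to the halo with the same abundance:
n_mw = n_gal_above(5e10)
halo_grid = np.logspace(10, 15, 2000)
m_halo = halo_grid[np.argmin(np.abs(n_halo_above(halo_grid) - n_mw))]
print(f"M* = 5e10 Msun <-> M_halo ~ {m_halo:.1e} Msun")  # ~1e12 with these toys
```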

This is a lengthy but insufficient introduction to a complicated subject. We wanted to get away from this, and test the halo mass function more directly. We do so by use of the velocity function rather than the stellar mass function.

The velocity function is the number density of galaxies as a function of how fast they rotate. It is less widely used than the luminosity function, because there is less data: one needs to measure the rotation speed, which is harder to obtain than the luminosity. Nevertheless, it has been done, as with this measurement from the HIPASS survey:

Galaxy velocity function
The number density of galaxies as a function of their rotation speed (Zwaan et al. 2010). The bottom panel shows the raw number of galaxies observed; the top panel shows the velocity function after correcting for the volume over which galaxies can be detected. Faint, slow rotators cannot be seen as far away as bright, fast rotators, so the latter are always over-represented in galaxy catalogs.

The idea here is that the flat rotation speed is the hallmark of a dark matter halo, providing a dynamical constraint on its mass. This should make for a cleaner measurement of the halo mass function. This turns out to be true, but it isn’t as clean as we’d like.

Those of you who are paying attention will note that the velocity function Martin Zwaan measured has the same basic morphology as the stellar mass function: approximately flat at low masses, with a steep cut off at high masses. This looks no more like the halo mass function than the galaxy luminosity function did. So how does this help?

To measure the velocity function, one has to use some readily obtained measure of the rotation speed like the line-width of the 21cm line. This, in itself, is not a very good measurement of the halo mass. So what Pengfei did was to fit dark matter halo models to galaxies of the SPARC sample for which we have good rotation curves. Thanks to the work of Federico Lelli, we also have an empirical relation between line-width and the flat rotation velocity. Together, these provide a connection between the line-width and halo mass:

Halo mass-line width relation
The relation Pengfei found between halo mass (M200) and line-width (W) for the NFW (ΛCDM standard) halo model fit to rotation curves from the SPARC galaxy sample.

Once we have the mass-line width relation, we can assign a halo mass to every galaxy in the HIPASS survey and recompute the distribution function. But now we have not the velocity function, but the halo mass function. We’ve skipped the conversion of light to stellar mass to total mass and used the dynamics to skip straight to the halo mass function, shown in the figure below.
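
Schematically, that last step looks like the sketch below. Everything in it is fabricated for illustration: the line-widths are random numbers, the power-law M200–W relation is a hypothetical stand-in for the relation Pengfei actually calibrated on SPARC, and the 1/Vmax weights mimic the volume correction described in the Zwaan et al. figure caption.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.lognormal(mean=np.log(150.0), sigma=0.5, size=5000)  # fake line-widths [km/s]
V_max = (W / 150.0)**3        # toy detection volume: fast rotators seen farther away

# Hypothetical M200-W power law standing in for the SPARC-calibrated relation:
log_m200 = 11.0 + 3.0 * np.log10(W / 100.0)

bins = np.linspace(9.0, 14.0, 21)
counts, edges = np.histogram(log_m200, bins=bins, weights=1.0 / V_max)
mass_function = counts / np.diff(edges)   # number per dex (arbitrary normalization)
for lo, n in zip(edges[:-1], mass_function):
    if n > 0:
        print(f"log10(M200) = {lo:4.1f}: dN/dlog10(M) ~ {n:10.1f}")
```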

Empirical halo mass function
The halo mass function. The points are the data; these are well fit by a Schechter function (black line; this is commonly used for the galaxy luminosity function). The red line is the prediction of ΛCDM for dark matter halos.
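
For reference, the Schechter form used in such fits is a power law with an exponential cutoff,

$$\phi(M)\,dM = \phi_* \left(\frac{M}{M_*}\right)^{\alpha} e^{-M/M_*}\,\frac{dM}{M_*},$$

where α sets the low-mass slope and M_* the characteristic mass above which the counts die off.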

The observed mass function agrees with the predicted one! Test successful! Well, mostly. Let’s think through the various aspects here.

First, the normalization is about right. It does not have the offset seen in the first figure. As it should not – we’ve gone straight to the halo mass in this exercise, and not used the luminosity as an intermediary proxy. So that is a genuine success. It didn’t have to work out this well, and would not do so in a very different cosmology (like SCDM).

Second, it breaks down at high mass. The data shows the usual Schechter cut-off at high mass, while the predicted number of dark matter halos continues as an unabated power law. This might be OK if high mass dark matter halos contain little neutral hydrogen. If this is the case, they will be invisible to HIPASS, the 21cm survey on which this is based. One expects this, to a certain extent: the most massive galaxies tend to be gas-poor ellipticals. That helps, but only by shifting the turn-down to slightly higher mass. It is still there, so the discrepancy is not entirely cured. At some point, we’re talking about large dark matter halos that are groups or even rich clusters of galaxies, not individual galaxies. Still, those have HI in them, so it is not like they’re invisible. Worse, examining detailed simulations that include feedback effects, there do seem to be more predicted high-mass halos that should have been detected than actually are. This is a potential missing gas-rich galaxy problem at the high mass end where galaxies are easy to detect. However, the simulations currently available to us do not provide the information we need to clearly make this determination. They don’t look right, so far as we can tell, but it isn’t clear enough to make a definitive statement.

Finally, the faint-end slope is about right. That’s amazing. The problem we’ve struggled with for decades is that the observed slope is too flat. Here a steep slope just falls out. It agrees with ΛCDM down to the lowest mass bin. If there is a missing satellite-type problem here, it is at lower masses than we probe.

That sounds great, and it is. But before we get too excited, I hope you noticed that the velocity function from the same survey is flat like the luminosity function. So why is the halo mass function steep?

When we fit rotation curves, we impose various priors. That’s statistics talk for a way of keeping parameters within reasonable bounds. For example, we have a pretty good idea of what the mass-to-light ratio of a stellar population should be. We can therefore impose as a prior that the fit return something within the bounds of reason.
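
To make the role of a prior concrete, here is a minimal sketch of a penalized fit, assuming a Gaussian prior on the logarithm of the stellar mass-to-light ratio; the toy rotation model and all numbers are invented for illustration and are far simpler than the actual halo fits.

```python
import numpy as np

def v_model(ml, m200, r=np.arange(1.0, 10.0)):
    # Cartoon rotation model: a flat curve whose amplitude rises with the
    # stellar mass-to-light ratio and the halo mass (not real physics).
    return 100.0 * (ml / 0.5)**0.25 * (m200 / 1e12)**(1.0 / 3.0) * np.ones_like(r)

def log_posterior(ml, m200, v_obs, v_err):
    chi2 = np.sum(((v_obs - v_model(ml, m200)) / v_err)**2)
    # Prior: stellar population models suggest M*/L near ~0.5 in the infrared
    # with ~0.1 dex scatter; fits wandering far from that are penalized,
    # not forbidden.
    log_prior = -0.5 * ((np.log10(ml) - np.log10(0.5)) / 0.1)**2
    return -0.5 * chi2 + log_prior

v_obs, v_err = 120.0 * np.ones(9), 5.0 * np.ones(9)
for ml in (0.2, 0.5, 1.0):
    print(f"M*/L = {ml}: log posterior = {log_posterior(ml, 2e12, v_obs, v_err):.1f}")
```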

One of the priors we imposed on the rotation curve fits was that they be consistent with the stellar mass-halo mass relation. Abundance matching is now part and parcel of ΛCDM, so it made sense to apply it as a prior. The total mass of a dark matter halo is an entirely notional quantity; rotation curves (and other tracers) pretty much never extend far enough to measure it. So abundance matching is great for imposing sense on a parameter that is otherwise ill-constrained. In this case, it means that what is driving the slope of the halo mass function is a prior that builds in the right slope. That’s not wrong, but neither is it an independent test. So while the observationally constrained halo mass function is consistent with the predictions of ΛCDM, we have not corroborated the prediction with independent data. What we really need at low mass is some way to constrain the total mass of small galaxies out to much larger radii than currently available. That will keep us busy for some time to come.

Two fields divided by a common interest

Britain and America are two nations divided by a common language.

attributed to George Bernard Shaw

Physics and Astronomy are two fields divided by a common interest in how the universe works. There is a considerable amount of overlap between some sub-fields of these subjects, and practically none at all in others. The aims and goals are often in common, but the methods, assumptions, history, and culture are quite distinct. This leads to considerable confusion, as with the English language – scientists with different backgrounds sometimes use the same words to mean rather different things.

A few terms that are commonly used to describe scientists who work on the subjects that I do include astronomer, astrophysicist, and cosmologist. I could be described as any of these. But I also know lots of scientists to whom these words could be applied, for whom they would mean something rather different.

A common question I get is “What’s the difference between an astronomer and an astrophysicist?” This is easy to answer from my experience as a long-distance commuter. If I get on a plane, and the person next to me is chatty and asks what I do, if I feel like chatting, I am an astronomer. If I don’t, I’m an astrophysicist. The first answer starts a conversation, the second shuts it down.

Flippant as that anecdote is, it is excruciatingly accurate – both for how people react (commuting between Cleveland and Baltimore for a dozen years provided lots of examples), and for what the difference is: practically none. If I try to offer a more accurate definition, then I am sure to fail to provide a complete answer, as I don’t think there is one. But to make the attempt:

Astronomy is the science of observing the sky, encompassing all elements required to do so. That includes practical matters like the technology of telescopes and their instruments across all wavelengths of the electromagnetic spectrum, and theoretical matters that allow us to interpret what we see up there: what’s a star? a nebula? a galaxy? How does the light emitted by these objects get to us? How do we count photons accurately and interpret what they mean?

Astrophysics is the science of how things in the sky work. What makes a star shine? [Nuclear reactions]. What produces a nebular spectrum? [The atomic physics of incredibly low density interstellar plasma.] What makes a spiral galaxy rotate? [Gravity! Gravity plus, well, you know, something. Or, if you read this blog, you know that we don’t really know.] So astrophysics is the physics of the objects astronomy discovers in the sky. This is a rather broad remit, and covers lots of physics.

With this definition, astrophysics is a subset of astronomy – such a large and essential subset that the terms can be and often are used interchangeably. These definitions are so intimately intertwined that the distinction is not obvious even for those of us who publish in the learned journals of the American Astronomical Society: the Astronomical Journal (AJ) and the Astrophysical Journal (ApJ). I am often hard-pressed to distinguish between them, but to attempt it in brief, the AJ is where you publish a paper that says “we observed these objects” and the ApJ is where you write “here is a model to explain these objects.” The opportunity for overlap is obvious: a paper that says “observations of these objects test/refute/corroborate this theory” could appear in either. Nevertheless, there was clearly sufficient need for a separate journal focused on the physics of how things in the sky work that the Astrophysical Journal was launched in 1895 to complement the older Astronomical Journal (dating from 1849).

Cosmology is the study of the entire universe. As a science, it is the subset of astrophysics that encompasses observations that measure the universe as a physical entity: its size, age, expansion rate, and temporal evolution. Examples are sufficiently diverse that practicing scientists who call themselves cosmologists may have rather different ideas about what it encompasses, or whether it even counts as astrophysics in the way defined above.

Indeed, more generally, cosmology is where science, philosophy, and religion collide. People have always asked the big questions – we want to understand the world in which we find ourselves, our place in it, our relation to it, and to its Maker in the religious sense – and we have always made up stories to fill in the gaping void of our ignorance. Stories that become the stuff of myth and legend until they are unquestionable aspects of a misplaced faith that we understand all of this. The science of cosmology is far from immune to myth making, and oftentimes philosophical imperatives have overwhelmed observational facts. The lengthy persistence of SCDM in the absence of any credible evidence that Ω_m = 1 is a recent example. Another that comes and goes is the desire for a Phoenix universe – one that expands, recollapses, and is then reborn for another cycle of expansion and contraction that repeats ad infinitum. This is appealing for philosophical reasons – the universe isn’t just some bizarre one-off – but there’s precious little that we know (or perhaps can know) to suggest it is a reality.

This has all happened before, and will all happen again.

Nevertheless, genuine and enormous empirical progress has been made. It is stunning what we know now that we didn’t a century ago. It has only been 90 years since Hubble established that there are galaxies external to the Milky Way. Prior to that, the prevailing cosmology consisted of a single island universe – the Milky Way – that tapered off into an indefinite, empty void. Until Hubble established otherwise, it was widely (though not universally) thought that the spiral nebulae were some kind of gas clouds within the Milky Way. Instead, the universe is filled with millions and billions of galaxies comparable in stature to the Milky Way.

We have sometimes let our progress blind us to the gaping holes that remain in our knowledge. Some of our more imaginative and less grounded colleagues take some of our more fanciful stories to be established fact – sometimes just because a problem is old and familiar, hence boring, even if still unsolved. They race ahead to create new stories about entities like multiverses. To me, multiverses are manifestly metaphysical: great fun for late night bull sessions, but not a legitimate branch of physics.

So cosmology encompasses a lot. It can mean very different things to different people, and not all of it is scientific. I am not about to touch on the world-views of popular religions, all of which have some flavor of cosmology. There is controversy enough about these definitions among practicing scientists.

I started as a physicist. I earned an SB in physics from MIT in 1985, and went on to the physics (not the astrophysics) department of Princeton for grad school. I had elected to study physics because I had a burning curiosity about how the world works. It was not specific to astronomy as defined above. Indeed, astronomy seemed to me at the time to be but one of many curiosities, and not necessarily the main one.

There was no separate astronomy department at MIT. Some people who practiced astrophysics were in the physics department; others in Earth, Atmospheric, and Planetary Sciences; still others in Mathematics. At the recommendation of my academic advisor Michael Feld, I wound up doing a senior thesis with George W. Clark, a high energy astrophysicist who mostly worked on cosmic rays and X-ray satellites. There was a large high energy astrophysics group at MIT who studied X-ray sources and the physics that produced them – things like neutron stars, black holes, supernova remnants, and the intracluster medium of clusters of galaxies – celestial objects with sufficiently extreme energies to make X-rays. The X-ray group needed to do optical follow-up (OK, there’s an X-ray source at this location on the sky. What’s there?) so they had joined the MDM Observatory. I had expressed a vague interest in orbital dynamics, and Clark had become interested in the structure of elliptical galaxies, motivated by the elegant orbital structures described by Martin Schwarzschild. The astrophysics group did a lot of work on instrumentation, so we had access to a new-fangled CCD. These made (and continue to make) much more sensitive detectors than photographic plates.

Empowered by this then-new technology, we embarked on a campaign to image elliptical galaxies with the MDM 1.3 m telescope. The initial goal was to search for axial twists as the predicted consequence of triaxial structure – Schwarzschild had shown that elliptical galaxies need not be oblate or prolate, but could have three distinct characteristic lengths along their principal axes. What we noticed instead with the sensitive CCD was a wealth of new features in the low surface brightness outskirts of these galaxies. Most elliptical galaxies just fade smoothly into obscurity, but every fourth or fifth case displayed distinct shells and ripples – features that were otherwise hard to spot, and which had only recently been highlighted by Malin & Carter.

Arp227_crop
A modern picture (courtesy of Pierre-Alain Duc) of the shell galaxy Arp 227 (NGC 474). Quantifying the surface brightness profiles of the shells in order to constrain theories for their origin became the subject of my senior thesis. I found that they were most consistent with stars on highly elliptical orbits, as expected from the shredded remnants of a cannibalized galaxy. Observations like this contributed to a sea change in the thinking about galaxies as isolated island universes that never interacted to the modern hierarchical view in which galaxy mergers are ubiquitous.

At the time I was doing this work, I was of course reading up on galaxies in general, and came across Mike Disney’s arguments as to how low surface brightness galaxies could be ubiquitous and yet missed by many surveys. This resonated with my new observing experience. Look hard enough, and you would find something new that had never before been seen. This proved to be true, and remains true to this day.

I went on only two observing runs my senior year. The weather was bad for the first one, clearing only the last night during which I collected all the useful data. The second run came too late to contribute to my thesis. But I was enchanted by the observatory as a remote laboratory, perched in the solitude of the rugged mountains, themselves alone in an empty desert of subtly magnificent beauty. And it got dark at night. You could actually see the stars. More stars than can be imagined by those confined to the light pollution of a city.

It hadn’t occurred to me to apply to an astronomy graduate program. I continued on to Princeton, where I was assigned to work in the atomic physics lab of Will Happer. There I mostly measured the efficiency of various buffer gases in moderating spin exchange between sodium and xenon. This resulted in my first published paper.

In retrospect, this is kinda cool. As an alkali, the atomic structure of sodium is basically that of a noble gas with a spare electron it’s eager to give away in a chemical reaction. Xenon is a noble gas, chemically inert as it already has nicely complete atomic shells; it wants neither to give nor receive electrons from other elements. Put the two together in a vapor, and they can form weak van der Waals molecules in which they share the unwanted valence electron like a hot potato. The nifty thing is that one can spin-polarize the electron by optical pumping with a laser. As it happens, the wave function of the electron has a lot of overlap with the nucleus of the xenon (one of the allowed states has no angular momentum). Thanks to this overlap, the spin polarization imparted to the electron can be transferred to the xenon nucleus. In this way, it is possible to create large amounts of spin-polarized xenon nuclei. This greatly enhances the signal of MRI, and has found an application in medical imaging: a patient can breathe in a chemically inert [SAFE], spin polarized noble gas, making visible all the little passageways of the lungs that are otherwise invisible to an MRI. I contributed very little to making this possible, but it is probably the closest I’ll ever come to doing anything practical.

The same technology could, in principle, be applied to make dark matter detection experiments phenomenally more sensitive to spin-dependent interactions. Giant tanks of xenon have already become one of the leading ways to search for WIMP dark matter, gobbling up a significant fraction of the world supply of this rare noble gas. Spin polarizing the xenon on the scales of tons rather than grams is a considerable engineering challenge.

Now, in that last sentence, I lapsed into a bit of physics arrogance. We understand the process. Making it work is “just” a matter of engineering. In general, there is a lot of hard work involved in that “just,” and a lot of times it is a practical impossibility. That’s probably the case here, as the polarization decays away quickly – much more quickly than one could purify and pump tons of the stuff into a vat maintained at a temperature near absolute zero.

At the time, I did not appreciate the meaning of what I was doing. I did not like working in Happer’s lab. The windowless confines, kept dark but for the sickly orange glow of a sodium D laser, were not a positive environment to be in day after day after day. More importantly, the science did not call to my heart. I began to dream of a remote lab on a scenic mountain top.

I also found the culture in the physics department at Princeton to be toxic. Nothing mattered but to be smarter than the next guy (and it was practically all guys). There was no agreed measure for this, and for the most part people weren’t so brazen as to compare test scores. So the thing to do was Be Arrogant. Everybody walked around like they were too frickin’ smart to be bothered to talk to anyone else, or even see them under their upturned noses. It was weird – everybody there was smart, but no human could possibly be as smart as these people thought they were. Well, not everybody, of course – Jim Peebles is impossibly intelligent, sane, and even nice (perhaps he is an alien, or at least a Canadian) – but for most of Princeton arrogance was a defining characteristic that seeped unpleasantly into every interaction.

It was, in considerable part, arrogance that drove me away from physics. I was appalled by it. One of the best displays was put on by David Gross in a colloquium that marked the take-over of theoretical physics by string theory. The dude was talking confidently in bold positivist terms about predictions that were twenty orders of magnitude in energy beyond any conceivable experimental test. That, to me, wasn’t physics.

More than thirty years on, I can take cold comfort that my youthful intuition was correct. String theory has conspicuously failed to provide the vaunted “theory of everything” that was promised. Instead, we have vague “landscapes” of 10^500 possible theories. We just want one. 10^500 is not progress. It’s getting hopelessly lost. That’s what happens when brilliant ideologues are encouraged to wander about in their hyperactive imaginations without experimental guidance. You don’t get physics, you get metaphysics. If you think that sounds harsh, note that Gross himself takes exactly this issue with multiverses, saying the notion “smells of angels” and worrying that a generation of physicists will be misled down a garden path – exactly the way he misled a generation with string theory.

So I left Princeton, and switched to a field where progress could be made. I chose to go to the University of Michigan, because I knew it had access to the MDM telescopes (one of the M’s stood for Michigan, the other MIT, with the D for Dartmouth) and because I was getting married. My wife is an historian, and we needed a university that was good in both our fields.

When I got to Michigan, I was ready to do research. I wanted to do more on shell galaxies, and low surface brightness galaxies in general. I had had enough coursework, I reckoned; I was ready to DO science. So I was somewhat taken aback that they wanted me to do two more years of graduate coursework in astronomy.

Some of the physics arrogance had inevitably been incorporated into my outlook. To a physicist, all other fields are trivial. They are just particular realizations of some subset of physics. Chemistry is just applied atomic physics. Biology barely even counts as science, and those parts that do could be derived from physics, in principle. As mere subsets of physics, any other field can and will be picked up trivially.

After two years of graduate coursework in astronomy, I had the epiphany that the field was not trivial. There were excellent reasons, both practical and historical, why it was a separate field. I had been wrong to presume otherwise.

Modern physicists are not afflicted by this epiphany. That bad attitude I was guilty of persists and is remarkably widespread. I am frequently confronted by young physicists eager to mansplain my own field to me, who casually assume that I am ignorant of subjects that I wrote papers on before they started reading the literature, and who equate a disagreement with their interpretation on any subject with ignorance on my part. This is one place the fields diverge enormously. In physics, if it appears in a textbook, it must be true. In astronomy, we recognize that we’ve been wrong about the universe so many times, we’ve learned to be tolerant of interpretations that initially sound absurd. Today’s absurdity may be tomorrow’s obvious fact. Physicists don’t share this history, and often fail to distinguish interpretation from fact, much less cope with the possibility that a single set of facts may admit multiple interpretations.

Cosmology has often been a leader in being wrong, and consequently enjoyed a shady reputation in both physics and astronomy for much of the 20th century. When I started on the faculty at the University of Maryland in 1998, there was no graduate course in the subject. This seemed to me to be an obvious gap to fill, so I developed one. Some of the senior astronomy faculty expressed concern as to whether this could be a rigorous 3 credit graduate course, and sent a neutral representative to discuss the issue with me. He was satisfied. As would be any cosmologist – I was teaching LCDM before most other cosmologists had admitted it was a thing.

At that time, 1998, my wife was also a new faculty member at John Carroll University. They held a welcome picnic, which I attended as the spouse. So I strike up a conversation with another random spouse who is also standing around looking similarly out of place. Ask him what he does. “I’m a physicist.” Ah! common ground – what do you work on? “Cosmology and dark matter.” I was flabbergasted. How did I not know this person? It was Glenn Starkman, and this was my first indication that sometime in the preceding decade, cosmology had become an acceptable field in physics and not a suspect curiosity best left to woolly-minded astronomers.

This was my first clue that there were two entirely separate groups of professional scientists who self-identified as cosmologists. One from the astronomy tradition, one from physics. These groups use the same words to mean the same things – sometimes. There is a common language. But like British English and American English, sometimes different things are meant by the same words.

“Dark matter” is a good example. When I say dark matter, I mean the vast diversity of observational evidence for a discrepancy between measurable probes of gravity (orbital speeds, gravitational lensing, equilibrium hydrostatic temperatures, etc.) and what is predicted by the gravity of the observed baryonic material – the stars and gas we can see. When a physicist says “dark matter,” he seems usually to mean the vast array of theoretical hypotheses for what new particle the dark matter might be.

To give a recent example, a colleague who is a world-renowned expert on dark matter, and an observational astronomer in a physics department dominated by particle cosmologists, noted that their chairperson had advocated a particular hiring plan because “we have no one who works on dark matter.” This came across as incredibly disrespectful, which it is. But it is also simply clueless. It took some talking to work through, but what we think he meant was that they had no one who worked on laboratory experiments to detect dark matter. That’s a valid thing to do, which astronomers don’t deny. But it is a severely limited way to think about it.

To date, the evidence for dark matter is 100% astronomical in nature. That’s all of it. Despite enormous effort and progress, laboratory experiments provide 0%. Zero point zero zero zero. And before some fool points to the cosmic microwave background, that is not a laboratory experiment. It is astronomy as defined above: information gleaned from observation of the sky. That it is done with photons from the mm and microwave part of the spectrum instead of the optical part of the spectrum doesn’t make it fundamentally different: it is still an observation of the sky.

And yet, apparently the observational work that my colleague did was unappreciated by his own department head, who I know to fancy himself an expert on the subject. Yet the existence of a complementary expert in his own department never registered with him. Even though, as chair, he would be responsible for reviewing the contributions of the faculty in his department on an annual basis.

To many physicists we astronomers are simply invisible. What could we possibly teach them about cosmology or dark matter? That we’ve been doing it for a lot longer is irrelevant. Only what they [re]invent themselves is valid, because astronomy is a subservient subfield populated by people who weren’t smart enough to become particle physicists. Because particle physicists are the smartest people in the world. Just ask one. He’ll tell you.

To give just one personal example of many: a few years ago, after I had published a paper in the premiere physics journal, I had a particle physics colleague ask, in apparent sincerity, “Are you an astrophysicist?” I managed to refrain from shouting YES YOU CLUELESS DUNCE! Only been doing astrophysics for my entire career!

As near as I can work out, his erroneous definition of astrophysicist involved having a Ph.D. in physics. That’s a good basis to start learning astrophysics, but it doesn’t actually qualify. Kris Davidson noted a similar sociology among his particle physics colleagues: “They simply declare themselves to be astrophysicists.” Well, I can tell you – having made that same mistake personally – it ain’t that simple. I’m pleased that so many physicists are finally figuring out what I did in the 1980s, and welcome their interest in astrophysics and cosmology. But they need to actually learn the subject, not just assume they’ll pick it up in a snap without actually doing so.


Hypothesis testing with gas rich galaxies

This Thanksgiving, I’d like to highlight something positive. Recently, Bob Sanders wrote a paper pointing out that gas rich galaxies are strong tests of MOND. The usual fit parameter, the stellar mass-to-light ratio, is effectively negligible when gas dominates. The MOND prediction follows straight from the gas distribution, for which there is no equivalent freedom. We understand the 21 cm spin-flip transition well enough to relate observed flux directly to gas mass.

In any human endeavor, there are inevitably unsung heroes who carry enormous amounts of water but seem to get no credit for it. Sanders is one of those heroes when it comes to the missing mass problem. He was there at the beginning, and has a valuable perspective on how we got to where we are. I highly recommend his books, The Dark Matter Problem: A Historical Perspective and Deconstructing Cosmology.

In bright spiral galaxies, stars are usually 80% or so of the mass, gas only 20% or less. But in many dwarf galaxies, the mass ratio is reversed. These are often low surface brightness and challenging to observe. But it is a worthwhile endeavor, as their rotation curve is predicted by MOND with extraordinarily little freedom.

Though gas rich galaxies do indeed provide an excellent test of MOND, nothing in astronomy is perfectly clean. The stellar mass-to-light ratio is an irreducible need-to-know parameter. We also need to know the distance to each galaxy, as we do not measure the gas mass directly, but rather the flux of the 21 cm line. The gas mass scales with flux and the square of the distance (see equation 7E7), so to get the gas mass right, we must first get the distance right. We also need to know the inclination of a galaxy as projected on the sky in order to get the rotation we’re fitting right, as the observed line of sight Doppler velocity is only sin(i) of the full, in-plane rotation speed. The 1/sin(i) correction becomes increasingly sensitive to errors as i approaches zero (face-on galaxies).
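For concreteness, here is a quick sketch of the bookkeeping in Python – my own illustration, not code from any paper cited here – assuming the standard optically thin 21 cm relation, M_HI = 2.36×10^5 D^2 S (solar masses for D in Mpc and S in Jy km/s):

```python
import numpy as np

def hi_gas_mass(flux_jy_kms, distance_mpc, helium_factor=1.33):
    # Optically thin 21 cm relation: M_HI = 2.36e5 * D^2 * S,
    # with M_HI in solar masses, D in Mpc, S in Jy km/s.
    # helium_factor ~ 1.33 scales atomic hydrogen up to total gas mass.
    return helium_factor * 2.36e5 * distance_mpc**2 * flux_jy_kms

def deproject_velocity(v_los_kms, inclination_deg):
    # v_rot = v_los / sin(i); the correction blows up as i -> 0,
    # which is why nearly face-on galaxies are so error-prone.
    return v_los_kms / np.sin(np.radians(inclination_deg))

print(hi_gas_mass(100.0, 4.04))        # ~5.1e8 Msun
print(deproject_velocity(40.0, 61.0))  # ~45.7 km/s
```

Since the mass goes as the square of the distance, a 4% distance error is already an 8% error in gas mass – which is why the nuisance parameters matter so much in gas rich galaxies.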

The mass-to-light ratio is a physical fit parameter that tells us something meaningful about the amount of stellar mass that produces the observed light. In contrast, for our purposes here, distance and inclination are “nuisance” parameters. These nuisance parameters can be, and generally are, measured independently from mass modeling. However, these measurements have their own uncertainties, so one has to be careful about taking the measured values as-is. One of the powerful aspects of Bayesian analysis is the ability to account for these uncertainties: the distance is allowed to drift a bit from the measured value, so long as it does not stray too far, as quantified by the measurement uncertainty. This is what current graduate student Pengfei Li did in Li et al. (2018). The constraints on MOND are so strong in gas rich galaxies that often the nuisance parameters cannot be ignored, even when they’re well measured.
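To sketch what that looks like in practice – and this is a minimal illustration of the general idea, not Pengfei’s actual code – one writes down a posterior in which the nuisance parameters are free to vary but pay a Gaussian penalty for straying from their independently measured values:

```python
import numpy as np

def log_posterior(params, r, v_obs, v_err, meas, v_model):
    # params = (ml, d, inc): stellar M/L, distance (Mpc), inclination (deg).
    # meas: dict with d_meas, d_err, i_meas, i_err from independent data.
    # v_model: callable(r, ml, d, inc) returning the predicted curve.
    ml, d, inc = params
    if not (0.01 < ml < 10.0 and d > 0.0 and 5.0 < inc < 85.0):
        return -np.inf  # hard bounds keep the sampler physical
    # Gaussian priors: the fit can pull D and i away from their measured
    # values, but pays a chi-squared penalty set by the measurement errors.
    lp = -0.5 * ((d - meas["d_meas"]) / meas["d_err"]) ** 2
    lp -= 0.5 * ((inc - meas["i_meas"]) / meas["i_err"]) ** 2
    resid = (v_obs - v_model(r, ml, d, inc)) / v_err
    return lp - 0.5 * np.sum(resid**2)
```

A standard MCMC sampler (emcee, say) can then explore M*/L, D, and i jointly: the data are allowed to tug on the nuisance parameters, but only within the leash of their measured uncertainties.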

To illustrate what I’m talking about, let’s look at one famous example, DDO 154. This galaxy is over 90% gas. The stars (pictured above) just don’t matter much. If the distance and inclination are known, the MOND prediction for the rotation curve follows directly. Here is an example of a MOND fit from a recent paper:

DDO154_MOND_180805695
The MOND fit to DDO 154 from Ren et al. (2018). The black points are the rotation curve data, the green line is the Newtonian expectation for the baryons, and the red line is their MOND fit.

This is terrible! The MOND fit – essentially a parameter-free prediction – misses all of the data. MOND is falsified. If one is inclined to hate MOND, as many seem to be, then one stops here. No need to think further.

If one is familiar with the ups and downs in the history of astronomy, one might not be so quick to dismiss it. Indeed, one might notice that the shape of the MOND prediction closely tracks the shape of the data. There’s just a little difference in scale. That’s kind of amazing for a theory that is wrong, especially when it is amplifying the green line to predict the red one: it needn’t have come anywhere close.

Here is the fit to the same galaxy using the same data [already] published in Li et al.:

DDO154_RAR_Li2018
The MOND fit to DDO 154 from Li et al. (2018) using the same data as above, as tabulated in SPARC.

Now we have a good fit, using the same data! How can this be so?

I have not checked what Ren et al. did to obtain their MOND fits, but having done this exercise myself many times, I recognize the slight offset they find as a typical consequence of holding the nuisance parameters fixed. What if the measured distance is a little off?

Distance estimates to DDO 154 in the literature range from 3.02 Mpc to 6.17 Mpc. The formally most accurate distance measurement is 4.04 ± 0.08 Mpc. In the fit shown here, we obtained 3.87 ± 0.16 Mpc. The error bars on these distances overlap, so they are the same number, to measurement accuracy. These data do not falsify MOND. They demonstrate that it is sensitive enough to tell the difference between 3.8 and 4.1 Mpc.

One will never notice this from a dark matter fit. Ren et al. also make fits with self-interacting dark matter (SIDM). The nifty thing about SIDM is that it makes quasi-constant density cores in dark matter halos. Halos of this form are not predicted by “ordinary” cold dark matter (CDM), but often give better fits than either MOND or the NFW halos of dark matter-only CDM simulations. For this galaxy, Ren et al. obtain the following SIDM fit.

DDO154_SIDM_180805695
The SIDM fit to DDO 154 from Ren et al.

This is a great fit. Goes right through the data. That makes it better, right?

Not necessarily. In addition to the mass-to-light ratio (and the nuisance parameters of distance and inclination), dark matter halo fits have [at least] two additional free parameters to describe the dark matter halo, such as its mass and core radius. These parameters are highly degenerate – one can obtain equally good fits for a range of mass-to-light ratios and core radii: one makes up for what the other misses. Parameter degeneracy of this sort is usually a sign that there is too much freedom in the model. In this case, the data are adequately described by one parameter (the MOND fit M*/L, not counting the nuisances in common), so using three (M*/L, Mhalo, Rcore) is just an exercise in fitting a French curve. There is ample freedom to fit the data. As a consequence, you’ll never notice that one of the nuisance parameters might be a tiny bit off.

In other words, you can fool a dark matter fit, but not MOND. Erwin de Blok and I demonstrated this 20 years ago. A common myth at that time was that “MOND is guaranteed to fit rotation curves.” This seemed patently absurd to me, given how it works: once you stipulate the distribution of baryons, the rotation curve follows from a simple formula. If the two don’t match, they don’t match. There is no guarantee that it’ll work. Instead, it can’t be forced.

As an illustration, Erwin and I tried to trick it. We took two galaxies that are identical in the Tully-Fisher plane (NGC 2403 and UGC 128) and swapped their mass distribution and rotation curve. These galaxies have the same total mass and the same flat velocity in the outer part of the rotation curve, but the detailed distribution of their baryons differs. If MOND can be fooled, this closely matched pair ought to do the trick. It does not.

NGC2403UGC128trickMOND
An attempt to fit MOND to a hybrid galaxy with the rotation curve of NGC 2403 and the baryon distribution of UGC 128. The mass-to-light ratio is driven to unphysical values (6 in solar units), but an acceptable fit is not obtained.

Our failure to trick MOND should not surprise anyone who bothers to look at the math involved. There is a one-to-one relation between the distribution of the baryons and the resulting rotation curve. If there is a mismatch between them, a fit cannot be obtained.
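That one-to-one relation is simple enough to write down. In the algebraic approximation (exact for spherical systems, and a good approximation for disks), the observed acceleration follows from the baryonic one through an interpolation function. Here is a sketch using the common “simple” function μ(x) = x/(1+x), for which the relation inverts in closed form; take it as illustrative, not as the unique choice:

```python
import numpy as np

A0 = 1.2e-10  # Milgrom's acceleration constant, m/s^2

def mond_g(g_bar):
    # Solve g * mu(g/a0) = g_bar for mu(x) = x/(1+x);
    # the resulting quadratic inverts in closed form.
    return 0.5 * g_bar + np.sqrt(0.25 * g_bar**2 + g_bar * A0)

def mond_rotation_curve(r_m, v_bar_ms):
    # Given the Newtonian curve of the baryons, the MOND curve
    # follows with no freedom: g_bar = v_bar^2 / r, v = sqrt(g * r).
    g_bar = v_bar_ms**2 / r_m
    return np.sqrt(mond_g(g_bar) * r_m)
```

For g_bar >> a0 this returns Newton; for g_bar << a0 it gives g = sqrt(g_bar a0), the deep-MOND limit that underlies the Tully-Fisher relation.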

We also attempted to play this same trick on dark matter. The standard dark matter halo fitting function at the time was the pseudo-isothermal halo, which has a constant density core. It is very similar to the halos of SIDM and to the cored dark matter halos produced by baryonic feedback in some simulations. Indeed, that is the point of those efforts: they are trying to capture the success of cored dark matter halos in fitting rotation curve data.

NGC2403UGC128trickDM
A fit to the hybrid galaxy with a cored (pseudo-isothermal) dark matter halo. A satisfactory fit is readily obtained.

Dark matter halos with a quasi-constant density core do indeed provide good fits to rotation curves. Too good. They are easily fooled, because they have too many degrees of freedom. They will fit pretty much any plausible data that you throw at them. This is why the SIDM fit to DDO 154 failed to flag distance as a potential nuisance. It can’t. You could double (or halve) the distance and still find a good fit.
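For reference, the pseudo-isothermal halo has density ρ(r) = ρ0/[1 + (r/Rc)^2], which gives a rotation curve with two free parameters – and it is the interplay of ρ0, Rc, and M*/L that creates all this freedom. A sketch (again mine, purely for illustration):

```python
import numpy as np

G = 4.301e-6  # Newton's constant in kpc * (km/s)^2 / Msun

def v_pseudo_iso(r_kpc, rho0, r_core):
    # V^2(r) = 4 pi G rho0 Rc^2 * [1 - (Rc/r) * arctan(r/Rc)],
    # with rho0 in Msun/kpc^3 and radii in kpc; returns km/s.
    x = np.asarray(r_kpc) / r_core
    return np.sqrt(4.0 * np.pi * G * rho0 * r_core**2
                   * (1.0 - np.arctan(x) / x))

def v_total(r_kpc, v_disk, ml, rho0, r_core):
    # Components add in quadrature (lumping stars and gas for brevity).
    # Raising rho0 while shrinking Rc, with M/L adjusting in turn,
    # trades off against the baryons - the root of the degeneracy.
    return np.sqrt(ml * v_disk**2 + v_pseudo_iso(r_kpc, rho0, r_core)**2)
```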

This is why parameter degeneracy is bad. You get lost in parameter space. Once lost there, it becomes impossible to distinguish between successful, physically meaningful fits and fitting epicycles.

Astronomical data are always subject to improvement. For example, the THINGS project obtained excellent data for a sample of nearby galaxies. I made MOND fits to all the THINGS (and other) data for the MOND review Famaey & McGaugh (2012). Here’s the residual diagram, which has been on my web page for many years:

rcresid_mondfits
Residuals of MOND fits from Famaey & McGaugh (2012).

These are, by and large, good fits. The residuals have a well defined peak centered on zero. DDO 154 was one of the THINGS galaxies; let’s see what happens if we use those data.

DDO154mond_i66
The rotation curve of DDO 154 from THINGS (points with error bars). The Newtonian expectation for stars is the green line; the gas is the blue line. The red line is the MOND prediction. Note that the gas greatly outweighs the stars beyond 1.5 kpc; the stellar mass-to-light ratio has extremely little leverage in this MOND fit.

The first thing one is likely to notice is that the THINGS data are much better resolved than the previous generation used above. The first thing I noticed was that THINGS had assumed a distance of 4.3 Mpc. This was prior to the measurement of 4.04, so let’s just start over from there. That gives the MOND prediction shown above.

And it is a prediction. I haven’t adjusted any parameters yet. The mass-to-light ratio is set to the mean I expect for a star forming stellar population, 0.5 in solar units in the Spitzer 3.6 micron band. D=4.04 Mpc and i=66 as tabulated by THINGS. The result is pretty good considering that no parameters have been harmed in the making of this plot. Nevertheless, MOND overshoots a bit at large radii.

Constraining the inclinations for gas rich dwarf galaxies like DDO 154 is a bit of a nightmare. Literature values range from 20 to 70 degrees. Seriously. THINGS itself allows the inclination to vary with radius; 66 is just a typical value. Looking at the fit Pengfei obtained, i=61. Let’s try that.

DDO154mond_i61
MOND fit to the THINGS data for DDO 154 with the inclination adjusted to the value found by Li et al. (2018).

The fit is now satisfactory. One tweak to the inclination, and we’re done. This tweak isn’t even a fit to these data; it was adopted from Pengfei’s fit to the above data. This tweak to the inclination is comfortably within any plausible assessment of the uncertainty in this quantity. The change in sin(i) corresponds to a mere 4% in velocity. I could probably do a tiny bit better with further adjustment – I have left both the distance and the mass-to-light ratio fixed – but that would be a meaningless exercise in statistical masturbation. The result just falls out: no muss, no fuss.
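The 4% is just the ratio of the sines:

```python
import numpy as np
# Deprojected velocities rescale as sin(i_old)/sin(i_new):
print(np.sin(np.radians(66.0)) / np.sin(np.radians(61.0)))  # ~1.044
```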

Hence the point Bob Sanders makes. Given the distribution of gas, the rotation curve follows. And it works, over and over and over, within the bounds of the uncertainties on the nuisance parameters.

One cannot do the same exercise with dark matter. It has ample ability to fit rotation curve data, once those are provided, but zero power to predict it. If all had been well with ΛCDM, the rotation curves of these galaxies would look like NFW halos. Or any number of other permutations that have been discussed over the years. In contrast, MOND makes one unique prediction (that was not at all anticipated in dark matter), and that’s what the data do. Out of the huge parameter space of plausible outcomes from the messy hierarchical formation of galaxies in ΛCDM, Nature picks the one that looks exactly like MOND.

star_trek_tv_spock_3_copy_-_h_2018
This outcome is illogical.

It is a bad sign for a theory when it can only survive by mimicking its alternative. This is the case here: ΛCDM must imitate MOND. There are now many papers asserting that it can do just this, but none of those were written before the data were provided. Indeed, I consider it to be problematic that clever people can come up with ways to imitate MOND with dark matter. What couldn’t it imitate? If the data had all looked like technicolor space donkeys, we could probably find a way to make that so as well.

Cosmologists will rush to say “microwave background!” I have some sympathy for that, because I do not know how to explain the microwave background in a MOND-like theory. At least I don’t pretend to, even if I had more predictive success there than their entire community. But that would be a much longer post.

For now, note that the situation is even worse for dark matter than I have so far made it sound. In many dwarf galaxies, the rotation velocity exceeds that attributable to the baryons (with Newton alone) at practically all radii. By a lot. DDO 154 is a very dark matter dominated galaxy. The baryons should have squat to say about the dynamics. And yet, all you need to know to predict the dynamics is the baryon distribution. The baryonic tail wags the dark matter dog.

But wait, it gets better! If you look closely at the data, you will note a kink at about 1 kpc, another at 2, and yet another around 5 kpc. These kinks are apparent in both the rotation curve and the gas distribution. This is an example of Sancisi’s Law: “For any feature in the luminosity profile there is a corresponding feature in the rotation curve and vice versa.” This is a general rule, as Sancisi observed, but it makes no sense when the dark matter dominates. The features in the baryon distribution should not be reflected in the rotation curve.

The observed baryons orbit in a disk with nearly circular orbits confined to the same plane. The dark matter moves on eccentric orbits oriented every which way to provide pressure support to a quasi-spherical halo. The baryonic and dark matter occupy very different regions of phase space, the six dimensional volume of position and momentum. The two are not strongly coupled, communicating only by the weak force of gravity in the standard CDM paradigm.

One of the first lessons of galaxy dynamics is that galaxy disks are subject to a variety of instabilities that grow bars and spiral arms. These are driven by disk self-gravity. The same features do not appear in elliptical galaxies because they are pressure supported, 3D blobs. They don’t have disks so they don’t have disk self-gravity, much less the features that lead to the bumps and wiggles observed in rotation curves.

Elliptical galaxies are a good visual analog for what dark matter halos are believed to be like. The orbits of dark matter particles are unable to sustain features like those seen in baryonic disks. They are featureless for the same reasons as elliptical galaxies. They don’t have disks. A rotation curve dominated by a spherical dark matter halo should bear no trace of the features that are seen in the disk. And yet they’re there, often enough for Sancisi to have remarked on it as a general rule.

It gets worse still. One of the original motivations for invoking dark matter was to stabilize galactic disks: a purely Newtonian disk of stars is not a stable configuration, yet the universe is chock full of long-lived spiral galaxies. The cure was to place them in dark matter halos.

The problem for dwarfs is that they have too much dark matter. The halo stabilizes disks by suppressing the formation of structures that stem from disk self-gravity. But you need some disk self-gravity to have the observed features. That can be tuned to work in bright spirals, but it fails in dwarfs because the halo is too massive. As a practical matter, there is no disk self-gravity in dwarfs – it is all halo, all the time. And yet, we do see such features. Not as strong as in big, bright spirals, but definitely present. Whenever someone tries to analyze this aspect of the problem, they inevitably come up with a requirement for more disk self-gravity in the form of unphysically high stellar mass-to-light ratios (something I predicted would happen). In contrast, this is entirely natural in MOND (see, e.g., Brada & Milgrom 1999 and Tiret & Combes 2008), where it is all disk self-gravity since there is no dark matter halo.

The net upshot of all this is that it doesn’t suffice to mimic the radial acceleration relation as many simulations now claim to do. That was not a natural part of CDM to begin with, but perhaps it can be done with smooth model galaxies. In most cases, such models lack the resolution to see the features seen in DDO 154 (and in NGC 1560 and in IC 2574, etc.). If they attain such resolution, basic dynamical considerations say they should not show such features – the halo dominates – but then they wouldn’t be able to describe this aspect of the data.

Simulators by and large seem to remain sanguine that this will all work out. Perhaps I have become too cynical, but I recall hearing that 20 years ago. And 15. And ten… basically, they’ve always assured me that it will work out even though it never has. Maybe tomorrow will be different. Or would that be the definition of insanity?


Dwarf Satellite Galaxies. II. Non-equilibrium effects in ultrafaint dwarfs

I have been wanting to write about dwarf satellites for a while, but there is so much to tell that I didn’t think it would fit in one post. I was correct. Indeed, it was worse than I thought, because my own experience with low surface brightness (LSB) galaxies in the field is a necessary part of the context for my perspective on the dwarf satellites of the Local Group. These are very different beasts – satellites are pressure supported, gas poor objects in orbit around giant hosts, while field LSB galaxies are rotating, gas rich galaxies that are among the most isolated known. However, so far as their dynamics are concerned, they are linked by their low surface density.

Where we left off with the dwarf satellites, circa 2000, Ursa Minor and Draco remained problematic for MOND, but the formal significance of these problems was not great. Fornax, which had seemed more problematic, was actually a predictive success: MOND returned a low mass-to-light ratio for Fornax because it was full of young stars. The other known satellites, Carina, Leo I, Leo II, Sculptor, and Sextans, were all consistent with MOND.

The Sloan Digital Sky Survey resulted in an explosion in the number of satellite galaxies discovered around the Milky Way. These were both fainter and lower surface brightness than the classical dwarfs named above. Indeed, they were often invisible as objects in their own right, being recognized instead as groupings of individual stars that shared the same position in space and – critically – velocity. They weren’t just in the same place, they were orbiting the Milky Way together. To give short shrift to a long story, these came to be known as ultrafaint dwarfs.

Ultrafaint dwarf satellites have fewer than 100,000 stars. That’s tiny for a stellar system. Sometimes they had only a few hundred. Most of those stars are too faint to see directly. Their existence is inferred from a handful of red giants that are actually observed. Where there are a few red giants orbiting together, there must be a source population of fainter stars. This is a good argument, and it is likely true in most cases. But the statistics we usually rely on become dodgy for such small numbers of stars: some of the ultrafaints that have been reported in the literature are probably false positives. I have no strong opinion on how many that might be, but I’d be really surprised if it were zero.

Nevertheless, assuming the ultrafaint dwarfs are self-bound galaxies, we can ask the same questions as before. I was encouraged to do this by Joe Wolf, a clever grad student at UC Irvine. He had a new mass estimator for pressure supported dwarfs that we decided to apply to this problem. We used the Baryonic Tully-Fisher Relation (BTFR) as a reference, and looked at it every which-way. Most of the text is about conventional effects in the dark matter picture, and I encourage everyone to read the full paper. Here I’m gonna skip to the part about MOND, because that part seems to have been overlooked in more recent commentary on the subject.

For starters, we found that the classical dwarfs fall along the extrapolation of the BTFR, but the ultrafaint dwarfs deviate from it.

Fig1_annotated
Fig. 1 from McGaugh & Wolf (2010, annotated). The BTFR defined by rotating galaxies (gray points) extrapolates well to the scale of the dwarf satellites of the Local Group (blue points are the classical dwarf satellites of the Milky Way; red points are satellites of Andromeda) but not to the ultrafaint dwarfs (green points). Two of the classical dwarfs also fall off of the BTFR: Draco and Ursa Minor.

The deviation is not subtle, at least not in terms of mass. The ultrafaints had characteristic circular velocities typical of systems 100 times their mass! But the BTFR is steep. In terms of velocity, the deviation is the difference between the 8 km/s typically observed, and the ~3 km/s needed to put them on the line. There are a large number of systematic errors that might arise, and all act to inflate the characteristic velocity. See the discussion in the paper if you’re curious about such effects; for our purposes here we will assume that the data cannot simply be dismissed as the result of systematic errors, though one should bear in mind that they probably play a role at some level.
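The translation between the offset in mass and the offset in velocity is just the steepness of the BTFR. As a back-of-envelope check, taking the canonical slope M ∝ V^4:

```python
# A dwarf observed at 8 km/s that belongs at ~3 km/s sits where systems
# (8/3)^4 times more massive should be - tens to a hundred times,
# within the roundness of these numbers.
print((8.0 / 3.0) ** 4)  # ~51
```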

Taken at face value, the ultrafaint dwarfs are a huge problem for MOND. An isolated system should fall exactly on the BTFR. These are not isolated systems, being very close to the Milky Way, so the external field effect (EFE) can cause deviations from the BTFR. However, these are predicted to make the characteristic internal velocities lower than the isolated case. This may in fact be relevant for the red points that deviate a bit in the plot above, but we’ll return to that at some future point. The ultrafaints all deviate to velocities that are too high, the opposite of what the EFE predicts.

The ultrafaints falsify MOND! When I saw this, all my original confirmation bias came flooding back. I had pursued this stupid theory to ever lower surface brightness and luminosity. Finally, I had found where it broke. I felt like Darth Vader in the original Star Wars:

darth-vader-i-have-you-now_1
I have you now!

The first draft of my paper with Joe included a resounding renunciation of MOND. No way could it escape this!

But…

I had this nagging feeling I was missing something. Darth should have looked over his shoulder. Should I?

Surely I had missed nothing. Many people are unaware of the EFE, just as we had been unaware that Fornax contained young stars. But not me! I knew all that. Surely this was it.

Nevertheless, the nagging feeling persisted. One part of it was sociological: if I said MOND was dead, it would be well and truly buried. But did it deserve to be? The scientific part of the nagging feeling was that maybe there had been some paper that addressed this, maybe a decade before… perhaps I’d better double check.

Indeed, Brada & Milgrom (2000) had run numerical simulations of dwarf satellites orbiting around giant hosts. MOND is a nonlinear dynamical theory; not everything can be approximated analytically. When a dwarf satellite is close to its giant host, the external acceleration of the dwarf falling towards its host can exceed the internal acceleration of the stars in the dwarf orbiting each other – hence the EFE. But the EFE is not a static thing; it varies as the dwarf orbits about, becoming stronger on closer approach. At some point, this variation becomes too fast for the dwarf to remain in equilibrium. This is important, because the assumption of dynamical equilibrium underpins all these arguments. Without it, it is hard to know what to expect short of numerically simulating each individual dwarf. There is no reason to expect them to remain on the equilibrium BTFR.

Brada & Milgrom suggested a measure to gauge the extent to which a dwarf might be out of equilibrium. It boils down to a matter of timescales. If the stars inside the dwarf have time to adjust to the changing external field, a quasi-static EFE approximation might suffice. So the figure of merit becomes the ratio of internal orbits per external orbit. If the stars inside a dwarf are swarming around many times for every time it completes an orbit around the host, then they have time to adjust. If the orbit of the dwarf around the host is as quick as the internal motions of the stars within the dwarf, not so much. At some point, a satellite becomes a collection of associated stars orbiting the host rather than a self-bound object in its own right.

Fig7_annotated
Deviations from the BTFR (left) and the isophotal shape of dwarfs (right) as a function of the number of internal orbits a star at the half-light radius makes for every orbit a dwarf makes around its giant host (Fig. 7 of McGaugh & Wolf 2010).

Brada & Milgrom provide the formula to compute the ratio of orbits, shown in the figure above. The smaller the ratio, the less chance an object has to adjust, and the more subject it is to departures from equilibrium. Remarkably, the amplitude of deviation from the BTFR – the problem I could not understand initially – correlates with the ratio of orbits. The more susceptible a dwarf is to disequilibrium effects, the farther it deviates from the BTFR.
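Their exact expression is in the paper; the gist is captured by a simple ratio of timescales. This back-of-envelope version is mine, not theirs:

```python
def orbit_ratio(sigma_kms, r_half_kpc, v_host_kms, d_host_kpc):
    # Internal period ~ 2 pi r_half / sigma; external ~ 2 pi D / V_host.
    # Small values flag dwarfs that lack the time to adjust to the
    # changing external field: candidates for non-equilibrium dynamics.
    return (d_host_kpc / v_host_kms) / (r_half_kpc / sigma_kms)

# e.g. sigma = 5 km/s at r_half = 0.3 kpc, on a 50 kpc orbit
# around a host with a 200 km/s rotation curve:
print(orbit_ratio(5.0, 0.3, 200.0, 50.0))  # ~4 internal orbits per external
```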

This completely inverted the MOND interpretation. Instead of falsifying MOND, the data now appeared to corroborate the non-equilibrium prediction of Brada & Milgrom. The stronger the external influence, the more a dwarf deviated from the equilibrium expectation. In conventional terms, it appeared that the ultrafaints were subject to tidal stirring: their internal velocities were being pumped up by external influences. Indeed, the originally problematic cases, Draco and Ursa Minor, fall among the ultrafaint dwarfs in these terms. They can’t be in equilibrium in MOND.

If the ultrafaints are out of equilibrium, they might show some independent evidence of this. Stars should leak out, distorting the shape of the dwarf and forming tidal streams. Can we see this?

A definite maybe:

Ell_D_wImages
The shapes of some ultrafaint dwarfs. These objects are so diffuse that they are invisible on the sky; their shape is illustrated by contours or heavily smoothed grayscale pseudo-images.

The dwarfs that are more subject to external influence tend to be more elliptical in shape. A pressure supported system in equilibrium need not be perfectly round, but one departing from equilibrium will tend to get stretched out. And indeed, many of the ultrafaints look Messed Up.

I am not convinced that all this requires MOND. But it certainly doesn’t falsify it. Tidal disruption can happen in the dark matter context, but it happens differently. The stars are buried deep inside protective cocoons of dark matter, and do not feel tidal effects much until most of the dark matter is stripped away. There is no reason to expect the MOND measure of external influence to apply (indeed, it should not), much less that it would correlate with indications of tidal disruption as seen above.

This seems to have been missed by more recent papers on the subject. Indeed, Fattahi et al. (2018) have reconstructed very much the chain of thought I describe above. The last sentence of their abstract states “In many cases, the resulting velocity dispersions are inconsistent with the predictions from Modified Newtonian Dynamics, a result that poses a possibly insurmountable challenge to that scenario.” This is exactly what I thought. (I have you now.) I was wrong.

Fattahi et al. are wrong for the same reasons I was wrong. They are applying equilibrium reasoning to a non-equilibrium situation. Ironically, the main point of their paper is that many systems can’t be explained with dark matter, unless they are tidally stripped – i.e., the result of a non-equilibrium process. Oh, come on. If you invoke it in one dynamical theory, you might want to consider it in the other.

To quote the last sentence of our abstract from 2010, “We identify a test to distinguish between the ΛCDM and MOND based on the orbits of the dwarf satellites of the Milky Way and how stars are lost from them.” In ΛCDM, the sub-halos that contain dwarf satellites are expected to be on very eccentric orbits, with all the damage from tidal interactions with the host accruing during pericenter passage. In MOND, substantial damage may accrue along lower eccentricity orbits, leading to the expectation of more continuous disruption.

Gaia is measuring proper motions for stars all over the sky. Some of these stars are in the dwarf satellites. This has made it possible to estimate orbits for the dwarfs, e.g., work by Amina Helmi (et al!) and Josh Simon. So far, the results are definitely mixed. There are more dwarfs on low eccentricity orbits than I had expected in ΛCDM, but there are still plenty that are on high eccentricity orbits, especially among the ultrafaints. Which dwarfs have been tidally affected by interactions with their hosts is far from clear.

In short, reality is messy. It is going to take a long time to sort these matters out. These are early days.

Astronomical Acceleration Scales

A quick note to put the acceleration discrepancy in perspective.

The acceleration discrepancy, as Bekenstein called it, more commonly called the missing mass or dark matter problem, is the deviation of dynamics from those of Newton and Einstein. The quantity D is the amplitude of the discrepancy, basically the ratio of total mass to that which is visible. The need for dark matter – the discrepancy – only manifests at very low accelerations, of order 10^-10 m/s/s. That’s one part in 10^11 of what you feel standing on the Earth.

MDacc_wclusters_uptomergingBH
The mass discrepancy as a function of acceleration. There is no discrepancy (D=1) at high acceleration: everything is normal in the solar system and at the highest accelerations probed. The discrepancy only manifests at very low accelerations.

Astronomical data span enormous, indeed, astronomical, ranges. This is why astronomers so frequently use logarithmic plots. The abscissa in the plot above spans 25 orders of magnitude, from the lowest accelerations measured in the outskirts of galaxies to the highest conceivable on the surface of a neutron star on the brink of collapse into a black hole. If we put this on a linear scale, you’d see one point (the highest) and all the rest would be crammed into x=0.

Galileo established that we live in a regime where the acceleration due to gravity is effectively constant; g = 9.8 m/s/s. This suffices to describe the trajectories of projectiles (like baseballs) familiar to everyday experience. At least it suffices to describe the gravity; air resistance plays a non-negligible role as well. But you don’t need Newton’s Universal Law of Gravity; you just need to know that everything experiences a downward acceleration of one gee.

As we move to higher altitude and on into space, this ceases to suffice. As Newton taught us, the strength of the gravitational attraction between two bodies decreases as the distance between them increases. The constant acceleration recognized by Galileo was a special case of a more general phenomenon. The surface of the Earth is a [very nearly] constant distance from its center, so gee is [very nearly] constant. Get off the Earth, and that changes.

In the plot above, the acceleration we experience here on the surface of the Earth lands pretty much in the middle of the range known to astronomical observation. This is normal to us. The orbits of the planets in the solar system stretch to lower accelerations: the surface gravity of the Earth exceeds the centripetal acceleration it takes to keep the Earth in its orbit around the sun. This acceleration decreases outward in the solar system, with Neptune experiencing less than 10^-5 m/s/s in its orbit.
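These numbers are easy to check with a = GM_sun/r^2:

```python
GM_SUN = 1.327e20  # m^3/s^2
AU = 1.496e11      # m

for name, r_au in [("Earth", 1.0), ("Neptune", 30.1)]:
    print(name, GM_SUN / (r_au * AU) ** 2, "m/s^2")
# Earth:   ~5.9e-3 m/s^2 (the sun's pull on us, not the 9.8 of our own gravity)
# Neptune: ~6.6e-6 m/s^2
```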

We understand the gravity in the solar system extraordinarily well. We’ve been watching the planets orbit for ages. The inner planets, in particular, are so well measured that subtle effects have long been apparent. Most famous is the tiny excess precession of the perihelion of the orbit of Mercury, first noted by Le Verrier in 1859 but not satisfactorily* explained until Einstein applied General Relativity to the problem in 1916.

The solar system probes many decades of acceleration accurately, but there are many decades of phenomena beyond the reach of the solar system, both to higher and lower accelerations. Two objects orbiting one another intensely enough for the energy loss due to the emission of gravitational waves to have a measurable effect on their orbit are the two neutron stars that compose the binary pulsar of Hulse & Taylor. Their orbit is highly eccentric, pulling an acceleration of about 270 m/s/s at periastron (closest passage). The gravitational dynamics of the system are extraordinarily well understood, and Hulse & Taylor were awarded the 1993 Nobel prize in physics for this observation that indirectly corroborated the existence of gravitational waves.

ghostbusters-20090702101358857
The mass-energy tensor was dancing a monster jig as the fabric of space-time was rent asunder, I can tell you!

Direct detection of gravitational waves was first achieved by LIGO in 2015 (the 2017 Nobel prize). The source of these waves was the merger of a binary pair of black holes, a calamity so intense that it converted the equivalent of 3 solar masses into the energy carried away as gravitational waves. Imagine two 30 solar mass black holes orbiting each other a few hundred km apart 75 times per second just before merging – that equates to a centripetal acceleration of nearly 10^11 m/s/s.
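That figure is just circular motion arithmetic, a = (2πf)^2 r. With illustrative numbers (a few hundred km of separation puts each hole roughly 175 km from the center of mass):

```python
import numpy as np

f_orb = 75.0   # orbits per second just before merger
r_orb = 175e3  # m; assumed: half of a ~350 km separation
print((2 * np.pi * f_orb) ** 2 * r_orb)  # ~3.9e10 m/s^2
```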

We seem to understand gravity well in this regime.

The highest acceleration illustrated in the figure above is the maximum surface gravity of a neutron star, which is just a hair under 10^13 m/s/s. Anything more than this collapses to a black hole. The surface of a neutron star is not a place that suffers large mountains to exist, even if by “large” you mean “ant sized.” Good luck walking around in an exoskeleton there! Micron scale crustal adjustments correspond to monster starquakes.

High-end gravitational accelerations are 20 orders of magnitude removed from where the acceleration discrepancy appears. Dark matter is a problem restricted to the regime of tiny accelerations, of order 1 Angstrom/s/s. That isn’t much, but it is roughly what holds a star in its orbit within a galaxy. Sometimes less.
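That last statement is a one-line estimate, a = V^2/R for a star on a circular orbit:

```python
V = 200e3           # m/s, a typical disk rotation speed
R = 8.0 * 3.086e19  # m, a solar-ish galactocentric radius (8 kpc)
print(V**2 / R)     # ~1.6e-10 m/s^2: about an angstrom per second squared
```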

Galaxies show a large and clear acceleration discrepancy. The mob of black points is the radial acceleration relation, compressed to fit on the same graph with the high acceleration phenomena. Whatever happens, happens suddenly at this specific scale.

I also show clusters of galaxies, which follow a similar but offset acceleration relation. The discrepancy sets in a little earlier for them (and with more scatter, but that may simply be a matter of lower precision). This offset from galaxies is a small matter on the scale considered here, but it is a serious one if we seek to modify dynamics at a universal acceleration scale. Depending on how one chooses to look at this aspect of the problem, the data for clusters are either tantalizingly close to the [far superior] data for galaxies, or they are impossibly far removed. Regardless of which attitude proves to be less incorrect, it is clear that the missing mass phenomenon is restricted to low accelerations. Everything is normal until we reach the lowest decade or two of accelerations probed by current astronomical data – and extragalactic data are the only data that test gravity in this regime.

We have no other data that probe the very low acceleration regime. The lowest acceleration probe we have with solar system accuracy is from the Pioneer spacecraft. These suffer an anomalous acceleration whose source was debated for many years. Was it some subtle asymmetry in the photon pressure due to thermal radiation from the spacecraft? Or new physics?

Though the effect is tiny (it is shown in the graph above, but can you see it?), it would be enormous for a MOND effect. MOND asymptotes to Newton at high accelerations. Despite the many AU Pioneer has put between itself and home, it is still in a regime 4 orders of magnitude above where MOND effects kick in. This would only be perceptible if the asymptotic approach to the Newtonian regime were incredibly slow. So slow, in fact, that it should be perceptible in the highly accurate data for the inner planets. Nowadays, the hypothesis of asymmetric photon pressure is widely accepted, which just goes to show how hard it is to construct experiments to test MOND. Not only do you have to get far enough away from the sun to probe the MOND regime (about a tenth of a light-year), but you have to control for how hard itty-bitty photons push on your projectile.
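That tenth of a light-year is just the radius at which the sun’s Newtonian pull drops to a0:

```python
import numpy as np

GM_SUN = 1.327e20      # m^3/s^2
A0 = 1.2e-10           # m/s^2
LIGHT_YEAR = 9.461e15  # m

print(np.sqrt(GM_SUN / A0) / LIGHT_YEAR)  # ~0.11 light-years
```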

That said, it’d still be a great experiment. Send a bunch of test particles out of the solar system at high speed on a variety of ballistic trajectories. They needn’t be much more than bullets with beacons to track them by. It would take a heck of a rocket to get them going fast enough to return an answer within a lifetime, but rocket scientists love a challenge to go real fast.


*Le Verrier suggested that the effect could be due to a new planet, dubbed Vulcan, that orbited the sun interior to the orbit of Mercury. In the half century prior to Einstein settling the issue, there were many claims to detect this Victorian form of dark matter.

Dwarf Satellite Galaxies and Low Surface Brightness Galaxies in the Field. I.

The Milky Way and its nearest giant neighbor Andromeda (M31) are surrounded by a swarm of dwarf satellite galaxies. Aside from relatively large beasties like the Large Magellanic Cloud or M32, the majority of these are the so-called dwarf spheroidals. There are several dozen examples known around each giant host, like the Fornax dwarf pictured above.

Dwarf Spheroidal (dSph) galaxies are ellipsoidal blobs devoid of gas that typically contain a million stars, give or take an order of magnitude. Unlike globular clusters, which may have a similar star count, dSphs are diffuse, with characteristic sizes of hundreds of parsecs (vs. a few pc for globulars). This makes them among the lowest surface brightness systems known.

This subject has a long history, and has become a major industry in recent years. In addition to the “classical” dwarfs that have been known for decades, there have also been many comparatively recent discoveries, often of what have come to be called “ultrafaint” dwarfs. These are basically dSphs with luminosities less than 100,000 suns, sometimes comprising as few as a few hundred stars. New discoveries are still being made, and there is reason to hope that the LSST will discover many more. Summed up, the known dwarf satellites are proverbial drops in the bucket compared to their giant hosts, which contain hundreds of billions of stars. Dwarfs could rain in for a Hubble time and not perturb the mass budget of the Milky Way.

Nevertheless, tiny dwarf spheroidals are excellent tests of theories like CDM and MOND. Going back to the beginning, in the early ’80s, Milgrom was already engaged in a discussion about the predictions of his then-new theory (before it was even published) with colleagues at the IAS, where he had developed the idea during a sabbatical visit. They were understandably skeptical, preferring – as many still do – to believe that some unseen mass was the more conservative hypothesis. Dwarf spheroidals came up even then, as their very low surface brightness meant low acceleration in MOND. This in turn meant large mass discrepancies. If you could measure their dynamics, they would have large mass-to-light ratios – larger than could be explained by stars conventionally, and larger than the discrepancies already observed in bright galaxies like Andromeda.

This prediction of Milgrom’s – there from the very beginning – is important because of how things change (or don’t). At that time, Scott Tremaine summed up the contrasting expectation of the conventional dark matter picture:

“There is no reason to expect that dwarfs will have more dark matter than bright galaxies.” *

This was certainly the picture I had in my head when I first became interested in low surface brightness (LSB) galaxies in the mid-80s. At that time I was ignorant of MOND; my interest was piqued by Disney’s argument that there could be a lot of as-yet undiscovered LSB galaxies out there, combined with my first observing experiences with the then-newfangled CCD cameras, which seemed to have a proclivity for revealing otherwise hard-to-see LSB features. At the time, I was interested in finding LSB galaxies. My interest in what made them rotate came later.

The first indication, to my knowledge, that dSph galaxies might have large mass discrepancies was provided by Marc Aaronson in 1983. This tentative discovery was hugely important, but the velocity dispersion of Draco (one of the “classical” dwarfs) was based on only 3 stars, so was hardly definitive. Nevertheless, by the end of the ’90s, it was clear that large mass discrepancies were a defining characteristic of dSphs. Their conventionally computed M/L went up systematically as their luminosity declined. This was not what we had expected in the dark matter picture, but was, at least qualitatively, in agreement with MOND.

My own interests had focused more on LSB galaxies in the field than on dwarf satellites like Draco. Greg Bothun and Jim Schombert had identified enough of these to construct a long list of LSB galaxies that served as targets for my Ph.D. thesis. Unlike the pressure-supported ellipsoidal blobs of stars that are the dSphs, the field LSBs we studied were gas rich, rotationally supported disks – mostly late type galaxies (Sd, Sm, & Irregulars). Regardless of composition, gas or stars, low surface density means that MOND predicts low acceleration. This need not be true conventionally, as the dark matter can do whatever the heck it wants. Though I was blissfully unaware of it at the time, we had constructed the perfect sample for testing MOND.

Having studied the properties of our sample of LSB galaxies, I developed strong ideas about their formation and evolution. Everything we had learned – their blue colors, large gas fractions, and low star formation rates – suggested that they evolved slowly compared to higher surface brightness galaxies. Star formation gradually sputtered along, having a hard time gathering enough material to make stars in their low density interstellar media. Perhaps they even formed late, an idea I took a shine to in the early ’90s. This made two predictions: field LSB galaxies should be less strongly clustered than bright galaxies, and should spin slower at a given mass.

The first prediction follows because the collapse time of dark matter halos correlates with their larger scale environment. Dense things collapse first and tend to live in dense environments. If LSBs were low surface density because they collapsed late, it followed that they should live in less dense environments.

I didn’t know how to test this prediction. Fortunately, fellow postdoc and office mate in Cambridge at the time, Houjun Mo, did. It came true. The LSB galaxies I had been studying were clustered like other galaxies, but not as strongly. This was exactly what I expected, and I was sure we were on to something. All that remained was to confirm the second prediction.

At the time, we did not have a clear idea of what dark matter halos should be like. NFW halos were still in the future. So it seemed reasonable that late forming halos should have lower densities (lower concentrations in the modern terminology). More importantly, the sum of dark and luminous density was certainly less. Dynamics follow from the distribution of mass as Velocity² ∝ Mass/Radius. For a given mass, low surface brightness galaxies had a larger radius, by construction. Even if the dark matter didn’t play along, the reduction in the concentration of the luminous mass should lower the rotation velocity.

Indeed, the standard explanation of the Tully-Fisher relation was just this. Aaronson, Huchra, & Mould had argued that galaxies obeyed the Tully-Fisher relation because they all had essentially the same surface brightness (Freeman’s law) thereby taking variation in the radius out of the equation: galaxies of the same mass all had the same radius. (If you are a young astronomer who has never heard of Freeman’s law, you’re welcome.) With our LSB galaxies, we had a sample that, by definition, violated Freeman’s law. They had large radii for a given mass. Consequently, they should have lower rotation velocities.
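To see the size of the expected effect, here is a toy version of the argument (the numbers are mine, purely for illustration, not anything from Aaronson et al.):

```python
# Toy version of the conventional expectation: V^2 ∝ M/R for the luminous
# mass, so at fixed mass a more extended (LSB) disk should spin slower.
import math

def v_scaled(mass, radius):
    """Characteristic rotation speed up to a constant: V ∝ sqrt(M/R)."""
    return math.sqrt(mass / radius)

# Two galaxies of identical mass; the LSB one is 4x more extended.
v_hsb = v_scaled(1.0, 1.0)
v_lsb = v_scaled(1.0, 4.0)
print(f"velocity ratio: {v_lsb / v_hsb:.2f}")                    # 0.50
print(f"offset in log V: {math.log10(v_lsb / v_hsb):+.2f} dex")  # -0.30
```

A shift that size – a factor of two in velocity – would be glaring in Tully-Fisher data.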

Up to that point, I had not taken much interest in rotation curves. In contrast, colleagues at the University of Groningen were all about rotation curves. Working with Thijs van der Hulst, Erwin de Blok, and Martin Zwaan, we set out to quantify where LSB galaxies fell in relation to the Tully-Fisher relation. I confidently predicted that they would shift off of it – an expectation shared by many at the time. They did not.

The Tully-Fisher relation: disk mass vs. flat rotation speed (circa 1996). Galaxies are binned by surface brightness with the highest surface brightness galaxies marked red and the lowest blue. The lines show the expected shift following the argument of Aaronson et al. Contrary to this expectation, galaxies of all surface brightnesses follow the same Tully-Fisher relation.

I was flummoxed. My prediction was wrong. That of Aaronson et al. was wrong. Poking about the literature, everyone who had made a clear prediction in the conventional context was wrong. It made no sense.

I spent months banging my head against the wall. One quick and easy solution was to blame the dark matter. Maybe the rotation velocity was set entirely by the dark matter, and the distribution of luminous mass didn’t come into it. Surely that’s what the flat rotation velocity was telling us? All about the dark matter halo?

Problem is, we measure the velocity where the luminous mass still matters. In galaxies like the Milky Way, it matters quite a lot. It does not work to imagine that the flat rotation velocity is set by some property of the dark matter halo alone. What matters to what we measure is the combination of luminous and dark mass. The luminous mass is important in high surface brightness galaxies, and progressively less so in lower surface brightness galaxies. That should leave some kind of mark on the Tully-Fisher relation, but it doesn’t.

Residuals from the Tully-Fisher relation as a function of size at a given mass. Compact galaxies are to the left, diffuse ones to the right. The red dashed line is what Newton predicts: more compact galaxies should rotate faster at a given mass. Fundamental physics? Tully-Fisher don’t care. Tully-Fisher don’t give a sh*t.

I worked long and hard to understand this in terms of dark matter. Every time I thought I had found the solution, I realized that it was a tautology. Somewhere along the line, I had made an assumption that guaranteed that I got the answer I wanted. It was a hopeless fine-tuning problem. The only way to satisfy the data was to have the dark matter contribution scale up as that of the luminous mass scaled down. The more stretched out the light, the more compact the dark – in exact balance to maintain zero shift in Tully-Fisher.

This made no sense at all. Over twenty years on, I have yet to hear a satisfactory conventional explanation. Most workers seem to assert, in effect, that “dark matter does it” and move along. Perhaps they are wise to do so.

Working on the thing can drive you mad.

As I was struggling with this issue, I happened to hear a talk by Milgrom. I almost didn’t go. “Modified gravity” was in the title, and I remember thinking, “why waste my time listening to that nonsense?” Nevertheless, against my better judgement, I went. Not knowing that anyone in the audience worked on either LSB galaxies or Tully-Fisher, Milgrom proceeded to derive the MOND prediction:

“The asymptotic circular velocity is determined only by the total mass of the galaxy: Vf⁴ = a₀GM.”

In a few lines, he derived rather trivially what I had been struggling to understand for months. The lack of surface brightness dependence in Tully-Fisher was entirely natural in MOND. It falls right out of the modified force law, and had been explicitly predicted over a decade before I struggled with the problem.
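The gist of those few lines, as best I can reconstruct them (a sketch of the deep-MOND limit, not a transcript of his talk):

```latex
% Deep MOND regime (a << a_0): the effective acceleration a satisfies
% a^2 / a_0 = g_N, where g_N is the Newtonian acceleration.
\frac{a^2}{a_0} = g_N = \frac{GM}{r^2}
\quad\Longrightarrow\quad
a = \frac{\sqrt{G M a_0}}{r} .
% For a circular orbit, a = V^2 / r, so
\frac{V^2}{r} = \frac{\sqrt{G M a_0}}{r}
\quad\Longrightarrow\quad
V^4 = a_0 G M .
% The radius cancels: the asymptotic speed knows only the total mass,
% with no dependence on how that mass is spread out.
```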

I scraped my jaw off the floor, determined to examine this crazy theory more closely. By the time I got back to my office, cognitive dissonance had already started to set in. It couldn’t be true. I had more pressing projects to complete, so I didn’t think about it again for many moons.

When I did, I decided I should start by reading the original MOND papers. I was delighted to find a long list of predictions, many of them specifically to do with surface brightness. We had just collected fresh data on LSB galaxies, which provided a new window on the low acceleration regime. I had the data to finally falsify this stupid theory.

Or so I thought. As I went through the list of predictions, my assumption that MOND had to be wrong was challenged by each item. It was barely an afternoon’s work: check, check, check. Everything I had struggled for months to understand in terms of dark matter tumbled straight out of MOND.

I was faced with a choice. I knew this would be an unpopular result. I could walk away and simply pretend I had never run across it. That’s certainly how it had been up until then: I had been blissfully unaware of MOND and its perniciously successful predictions. No need to admit otherwise.

Had I realized just how unpopular it would prove to be, maybe that would have been the wiser course. But even contemplating such a course felt criminal. I was put in mind of Paul Gerhardt’s admonition for intellectual honesty:

“When a man lies, he murders some part of the world.”

Ignoring what I had learned seemed tantamount to just that. So many predictions coming true couldn’t be an accident. There was a deep clue here; ignoring it wasn’t going to bring us closer to the truth. Actively denying it would be an act of wanton vandalism against the scientific method.

Still, I tried. I looked long and hard for reasons not to report what I had found. Surely there must be some reason this could not be so?

Indeed, the literature provided many papers that claimed to falsify MOND. To my shock, few withstood critical examination. Commonly a straw man representing MOND was falsified, not MOND itself. At a deeper level, it was implicitly assumed that any problem for MOND was an automatic victory for dark matter. This did not obviously follow, so I started re-doing the analyses for both dark matter and MOND. More often than not, I found either that the problems for MOND were greatly exaggerated, or that the genuinely problematic cases were a problem for both theories. Dark matter has more flexibility to explain outliers, but outliers happen in astronomy. All too often the temptation was to refuse to see the forest for a few trees.

The first MOND analysis of the classical dwarf spheroidals provides a good example. Completed only a few years before I encountered the problem, it considered low surface brightness systems deep in the MOND regime. They were gas poor, pressure supported dSph galaxies, unlike my gas rich, rotating LSB galaxies, but the critical feature was low surface brightness. This was the most directly comparable result. Better yet, the study had been made by two brilliant scientists (Ortwin Gerhard & David Spergel) whom I admire enormously. Surely this work would explain how my result was a mere curiosity.

Indeed, reading their abstract, it was clear that MOND did not work for the dwarf spheroidals. Whew: LSB systems where it doesn’t work. All I had to do was figure out why, so I read the paper.

As I read beyond the abstract, the answer became less and less clear. The results were all over the map. Two dwarfs (Sculptor and Carina) seemed unobjectionable in MOND. Two dwarfs (Draco and Ursa Minor) had mass-to-light ratios that were too high for stars, even in MOND. That is, there still appeared to be a need for dark matter even after MOND had been applied. On the flip side, Fornax had a mass-to-light ratio that was too low for the old stellar populations assumed to dominate dwarf spheroidals. Results all over the map are par for the course in astronomy, especially for a pioneering attempt like this. What were the uncertainties?

Milgrom wrote a rebuttal. By then, there were measured velocity dispersions for two more dwarfs. Of these seven dwarfs, he found that

“within just the quoted errors on the velocity dispersions and the luminosities, the MOND M/L values for all seven dwarfs are perfectly consistent with stellar values, with no need for dark matter.”

Well, he would say that, wouldn’t he? I determined to repeat the analysis and error propagation.

Mass-to-light ratios determined with MOND for eight dwarf spheroidals (named, as published in McGaugh & de Blok 1998). The various symbols refer to different determinations. Mine are the solid circles. The dashed lines show the plausible range for stellar populations.

The net result: they were both right. M/L was still too high for Draco and Ursa Minor, and still too low for Fornax. But this was only significant at the 2σ level, if that – hardly enough to condemn a theory. Carina, Leo I, Leo II, Sculptor, and Sextans all had fairly reasonable mass-to-light ratios. The voting is different now. Instead of going 2 for 5 as Gerhard & Spergel found, MOND was now 5 for 8. One could obsess about the outliers, or one could see a more positive pattern. Either spin could be put on the result, but it was clearly more positive than the first attempt had indicated.

The mass estimator in MOND scales as the fourth power of velocity (or velocity dispersion in the case of isolated dSphs), so the too-high M*/L of Draco and Ursa Minor didn’t disturb me too much. A small overestimation of the velocity dispersion would lead to a large overestimation of the mass-to-light ratio. Just about every systematic uncertainty one can think of pushes in this direction, so it would be surprising if such an overestimate didn’t happen once in a while.
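To make the fourth-power sensitivity concrete: for an isolated system deep in the MOND regime, the estimator takes the form M = (81/4)σ⁴/(Ga₀). That is my recollection of the isolated-case formula; the dispersion in the sketch below is illustrative, not a published value.

```python
# Sketch of the deep-MOND mass estimator for an isolated dwarf spheroidal:
# M = (81/4) * sigma^4 / (G * a0). Numbers are illustrative only.
G = 6.674e-11      # m^3 kg^-1 s^-2
a0 = 1.2e-10       # m/s^2
M_sun = 1.989e30   # kg

def mond_mass(sigma_kms):
    """Mass (in Msun) implied by a line-of-sight velocity dispersion."""
    sigma = sigma_kms * 1e3                       # km/s -> m/s
    return 81.0 / 4.0 * sigma**4 / (G * a0) / M_sun

sigma = 9.0                                       # km/s, made up for the example
print(f"M = {mond_mass(sigma):.1e} Msun")
# Fourth-power propagation: a 10% error in sigma is a ~46% error in M,
# and hence in the inferred M/L.
print(f"M(1.1*sigma) / M(sigma) = {mond_mass(1.1 * sigma) / mond_mass(sigma):.2f}")
```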

Given this, I was more concerned about the low M*/L of Fornax. That was weird.

Up until that point (1998), we had been assuming that the stars in dSphs were all old, like those in globular clusters. That corresponds to a high M*/L, maybe 3 in solar units in the V-band. Shortly after this time, people started to look closely at the stars in the classical dwarfs with the Hubble. Lo and behold, the stars in Fornax were surprisingly young. That means a low M*/L, 1 or less. In retrospect, MOND was trying to tell us that: it returned a low M*/L for Fornax because the stars there are young. So what was taken to be a failing of the theory was actually a predictive success.

Hmm.

And Gee. This is a long post. There is a lot more to tell, but enough for now.


*I have a long memory, but it is not perfect. I doubt I have the exact wording right, but this does accurately capture the sentiment from the early ’80s when I was an undergraduate at MIT and Scott Tremaine was on the faculty there.

A Precise Milky Way

The Milky Way Galaxy in which we live seems to be a normal spiral galaxy. But it can be hard to tell. Our perspective from within it precludes a “face-on” view like the picture above, which combines some real data with a lot of artistic liberty. Some local details we can measure in extraordinary detail, but the big picture is hard. Just how big is the Milky Way? The absolute scale of our Galaxy has always been challenging to measure accurately from our spot within it.

For some time, we have had a remarkably accurate measurement of the angular speed of the sun around the center of the Galaxy provided by the proper motion of Sagittarius A*. Sgr A* is the radio source associated with the supermassive black hole at the center of the Galaxy. By watching how it appears to move across the sky, Reid & Brunthaler found our relative angular speed to be 6.379 milliarcseconds/year. That’s a pretty amazing measurement: a milliarcsecond is one one-thousandth of one arcsecond, which is one sixtieth of one arcminute, which is one sixtieth of a degree. A pretty small angle.

The proper motion of an object depends on the ratio of its speed to its distance. So this high precision measurement does not itself tell us how big the Milky Way is. We could be far from the center and moving fast, or close and moving slow. Close being a relative term when our best estimates of the distance to the Galactic center hover around 8 kpc (26,000 light-years), give or take half a kpc.

This situation has recently improved dramatically thanks to the Gravity collaboration. They have observed the close passage of a star (S2) past the central supermassive black hole Sgr A*. Their chief interest is in the resulting relativistic effects: gravitational redshift and Schwarzschild precession, which provide a test of General Relativity. Unsurprisingly, it passes with flying colors.

As a consequence of their fitting process, we get for free some other interesting numbers. The mass of the central black hole is 4.1 million solar masses, and the distance to it is 8.122 kpc. The quoted uncertainty is only 31 pc. That’s parsecs, not kiloparsecs. Previously, I had seen credible claims that the distance to the Galactic center was 7.5 kpc. Or 7.9. Or 8.3. Or 8.5. There was a time when it was commonly thought to be about 10 kpc, i.e., we weren’t even sure what column the first digit belonged in. Now we know it to several decimal places. Amazing.

Knowing both the Galactocentric distance and the proper motion of Sgr A* nails down the relative speed of the sun: 245.6 km/s. Of this, 12.2 km/s is “solar motion,” which is how much the sun deviates from a circular orbit. Correcting for this gives us the circular speed of an imaginary test particle orbiting at the sun’s location: 233.3 km/s, accurate to 1.4 km/s.
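The arithmetic is simple enough to check (standard conversion constants; I treat the solar motion as a straight subtraction here, which suffices for a sanity check):

```python
# Sanity check: distance times angular speed gives the sun's relative speed.
import math

kpc = 3.0857e16                       # km per kiloparsec
yr = 3.1557e7                         # seconds per Julian year
mas = math.pi / (180 * 3600 * 1000)   # radians per milliarcsecond

R0 = 8.122 * kpc                      # Gravity collaboration distance to Sgr A*
mu = 6.379 * mas / yr                 # Reid & Brunthaler proper motion, rad/s

v_rel = R0 * mu                       # km/s, since R0 is in km
print(f"relative speed: {v_rel:.1f} km/s")         # ~245.6
print(f"circular speed: {v_rel - 12.2:.1f} km/s")  # ~233, to rounding
```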

The distance and circular speed at the solar circle are the long sought Galactic Constants. These specify the scale of the Milky Way. Knowing them also pins down the rotation curve interior to the sun. This is well constrained by the “terminal velocities,” which provide a precise mapping of relative speeds, but need the Galactic Constants for an absolute scale.

A few years ago, I built a model Milky Way rotation curve that fit the terminal velocity data. What I was interested in then was to see if I could use the radial acceleration relation (RAR) to infer the mass distribution of the Galactic disk. The answer was yes. Indeed, it makes for a clear improvement over the traditional approach of assuming a purely exponential disk in the sense that the kinematically inferred bumps and wiggles in the rotation curve correspond to spiral arms known from star counts, as in external spiral galaxies.
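For the curious, the fitting function of the RAR (as calibrated in 2016) is what makes this inversion possible. A minimal sketch, not the full machinery of the model:

```python
# Minimal sketch of the radial acceleration relation (RAR).
import numpy as np

g_dag = 1.2e-10   # m/s^2, the fitted acceleration scale of the RAR

def g_obs(g_bar):
    """Observed centripetal acceleration implied by the baryonic
    (Newtonian) acceleration g_bar, both in m/s^2."""
    return g_bar / (1.0 - np.exp(-np.sqrt(g_bar / g_dag)))

# High accelerations: g_obs -> g_bar, i.e., purely Newtonian.
print(g_obs(1e-8) / 1e-8)     # ~1.0
# Low accelerations: g_obs -> sqrt(g_bar * g_dag), a large discrepancy.
print(g_obs(1e-12) / 1e-12)   # ~11
```

Given the measured rotation curve, one inverts this relation to find the baryonic mass distribution doing the work; that is how the kinematic bumps and wiggles get tied to the spiral arms.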

Now that the Galactic constants are known, it seems worth updating the model. This results in the surface density profile

The surface density profile of the Milky Way model scaled to the newly accurate distance to the Galactic center.

with the corresponding rotation curve

The rotation curve of the Milky Way as traced by terminal velocities in the first and fourth quadrants (red and blue points). The solid line is a model that matches this rotation curve. The dashed and dotted lines are the rotation curves of the baryonic and inferred dark matter components. Yellow bands show the effect of varying the stellar mass by 5%.

The model data are available from the Milky Way section of my model pages.

Finding a model that matches both the terminal velocity data and the highly accurate Galactic constants is no small feat. Indeed, I worried it was impossible: the speed at the solar circle is down to 233 km/s from a high of 249 km/s just a couple of kpc interior. This sort of variation is possible, but it requires a ring of mass outside the sun. This appears to be the effect of the Perseus spiral arm.

For the new Galactic constants and the current calibration of the RAR, the stellar mass of the Milky Way works out to just under 62 billion solar masses. The largest uncertainty in this is from the asymmetry in the terminal velocities, which are slightly different in the first and fourth quadrants. This is likely a real asymmetry in the mass distribution of the Milky Way. Treating it as an uncertainty, the range of variation corresponds to about 5% up or down in stellar mass.

With the stellar mass determined in this way, we can estimate the local density of dark matter. This is the critical number that is needed for experimental searches: just how much of the stuff should we expect? The answer is very precise: 0.257 GeV per cubic cm. This is a bit less than is usually assumed, which makes it a tiny bit harder on the hard-working experimentalists.
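For anyone who prefers astronomers’ units, the conversion is straightforward (the GeV here is shorthand for GeV/c²):

```python
# Convert 0.257 GeV/cm^3 (i.e., GeV/c^2 per cm^3) to Msun/pc^3.
GeV = 1.783e-27    # kg per GeV/c^2
pc = 3.0857e18     # cm per parsec
M_sun = 1.989e30   # kg

rho = 0.257 * GeV * pc**3 / M_sun
print(f"{rho:.4f} Msun/pc^3")   # ~0.0068
```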

The accuracy of the dark matter density is harder to assess. The biggest uncertainty is that in stellar mass. We know the total radial force very well now, but how much is due to stars, and how much to dark matter (or whatever)? The RAR provides a unique method for constraining the stellar contribution, and does so well enough that there is very little formal uncertainty in the dark matter density. This, however, depends on the calibration of the RAR, which itself is subject to systematic uncertainty at the 20% level. This is not as bad as it sounds, because a recalibration of the RAR changes its shape in a way that tends to trade off with stellar mass while not much changing the implied dark matter density. So even with these caveats, this is the most accurate measure of the dark matter density to date.

This is all about the radial force. One can also measure the force perpendicular to the disk. This vertical force implies about twice the dark matter density. This may be telling us something about the shape of the dark matter halo – rather than being spherical as usually assumed, it might be somewhat squashed. It is easy to say that, but it seems a strange circumstance: the stars provide most of the restoring force in the vertical direction, and apparently dominate the radial force. Subtracting off the stellar contribution is thus a challenging task: the total force isn’t much greater than that from the stars alone. Subtracting one big number from another to measure a small one is fraught with peril: the uncertainties tend to blow up in your face.

Returning to the Milky Way, it seems in all respects to be a normal spiral galaxy. With the stellar mass found here, we can compare it to other galaxies in scaling relations like Tully-Fisher. It does not stand out from the crowd: our home is a fairly normal place for this time in the Universe.

The stellar mass Tully-Fisher relation with the Milky Way shown as the red star. It is a typical spiral galaxy.

It is possible to address many more details with a model like this. See the original!