The halo mass function

I haven’t written much here of late. This is mostly because I have been busy, but also because I have been actively refraining from venting about some of the sillier things being said in the scientific literature. I went into science to get away from the human proclivity for what is nowadays called “fake news,” but we scientists are human too, and are not immune from the same self-deception one sees so frequently exercised in other venues.

So let’s talk about something positive. Current grad student Pengfei Li recently published a paper on the halo mass function. What is that and why should we care?

One of the fundamental predictions of the current cosmological paradigm, ΛCDM, is that dark matter clumps into halos. Cosmological parameters are known with sufficient precision that we have a very good idea of how many of these halos there ought to be. Their number per unit volume as a function of mass (so many big halos, so many more small halos) is called the halo mass function.

An important test of the paradigm is thus to measure the halo mass function. Does the predicted number match the observed number? This is hard to do, since dark matter halos are invisible! So how do we go about it?

Galaxies are thought to form within dark matter halos. Indeed, that’s kinda the whole point of the ΛCDM galaxy formation paradigm. So by counting galaxies, we should be able to count dark matter halos. Counting galaxies was an obvious task long before we thought there was dark matter, so this should be straightforward: all one needs is the measured galaxy luminosity function – the number density of galaxies as a function of how bright they are, or equivalently, how many stars they are made of (their stellar mass). Unfortunately, this goes tragically wrong.

Galaxy stellar mass function and the predicted halo mass function
Fig. 5 from the review by Bullock & Boylan-Kolchin. The number density of objects is shown as a function of their mass. Colored points are galaxies. The solid line is the predicted number of dark matter halos. The dotted line is what one would expect for galaxies if all the normal matter associated with each dark matter halo turned into stars.

This figure shows a comparison of the observed stellar mass function of galaxies and the predicted halo mass function. It is from a recent review, but it illustrates a problem that goes back as long as I can remember. We extragalactic astronomers spent all of the ’90s obsessing over this problem. [I briefly thought that I had solved this problem, but I was wrong.] The observed luminosity function is nearly flat while the predicted halo mass function is steep. Consequently, there should be lots and lots of faint galaxies for every bright one, but instead there are relatively few. This discrepancy becomes progressively more severe to lower masses, with the predicted number of halos being off by a factor of many thousands for the faintest galaxies. The problem is most severe in the Local Group, where the faintest dwarf galaxies are known. Locally it is called the missing satellite problem, but this is just a special case of a more general problem that pervades the entire universe.

Indeed, the small number of low mass objects is just one part of the problem. There are also too few galaxies at large masses. Even where the observed and predicted numbers come closest, around the scale of the Milky Way, they still miss by a large factor (this being a log-log plot, even small offsets are substantial). If we had assigned “explain the observed galaxy luminosity function” as a homework problem and the students had returned as an answer a line that had the wrong shape at both ends and at no point intersected the data, we would flunk them. This is, in effect, what theorists have been doing for the past thirty years. Rather than entertain the obvious interpretation that the theory is wrong, they offer more elaborate interpretations.

Faced with the choice between changing one’s mind and proving that there is no need to do so, almost everybody gets busy on the proof.

J. K. Galbraith

Theorists persist because this is what CDM predicts, with or without Λ, and we need cold dark matter for independent reasons. If we are unwilling to contemplate that ΛCDM might be wrong, then we are obliged to pound the square peg into the round hole, and bend the halo mass function into the observed luminosity function. This transformation is believed to take place as a result of a variety of complex feedback effects, all of which are real and few of which are likely to have the physical effects that are required to solve this problem. That’s way beyond the scope of this post; all we need to know here is that this is the “physics” behind the transformation that leads to what is currently called Abundance Matching.

Abundance matching boils down to drawing horizontal lines in the above figure, thus matching galaxies with dark matter halos of equal number density (abundance). So, just reading off the graph, a galaxy of stellar mass M* = 10^8 M☉ resides in a dark matter halo of 10^11 M☉, one like the Milky Way with M* = 5 x 10^10 M☉ resides in a 10^12 M☉ halo, and a giant galaxy with M* = 10^12 M☉ is the “central” galaxy of a cluster of galaxies with a halo mass of several 10^14 M☉. And so on. In effect, we abandon the obvious and long-held assumption that the mass in stars should be simply proportional to that in dark matter, and replace it with a rolling fudge factor that maps what we see to what we predict. The rolling fudge factor that follows from abundance matching is called the stellar mass–halo mass relation. Many of the discussions of feedback effects in the literature amount to a post hoc justification for this multiplication of forms of feedback.
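The matching itself is mechanically simple. Here is a minimal sketch in Python of the "horizontal line" idea: for each stellar mass, find the halo mass with the same cumulative number density. The functional forms, slopes, and normalizations below are illustrative placeholders, not fits to real data.

```python
import numpy as np

# Toy cumulative number densities (all normalizations and slopes are
# illustrative placeholders, not fits to real data).
def n_gal(m_star):
    # Schechter-like cumulative abundance of galaxies above stellar mass m_star
    return 1e-2 * (m_star / 1e10) ** -0.2 * np.exp(-m_star / 1e11)

def n_halo(m_halo):
    # Steeper power-law cumulative abundance of halos above mass m_halo
    return 1e-1 * (m_halo / 1e12) ** -0.9

def match_halo_mass(m_star, m_halo_grid):
    """Abundance matching: find the halo mass whose cumulative number
    density equals that of galaxies above m_star (the 'horizontal line')."""
    target = n_gal(m_star)
    n_grid = n_halo(m_halo_grid)
    # n_halo decreases with mass; reverse so np.interp sees increasing x
    return 10 ** np.interp(np.log10(target),
                           np.log10(n_grid[::-1]),
                           np.log10(m_halo_grid[::-1]))

grid = np.logspace(10, 15, 500)
for m_star in (1e8, 5e10):
    print(f"M* = {m_star:.0e} -> Mh ~ {match_halo_mass(m_star, grid):.2e}")
```

Because the galaxy function is flat where the halo function is steep, the implied stellar mass–halo mass relation is strongly nonlinear: a small range of stellar mass maps onto a wide range of halo mass.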

This is a lengthy but insufficient introduction to a complicated subject. We wanted to get away from this, and test the halo mass function more directly. We do so by use of the velocity function rather than the stellar mass function.

The velocity function is the number density of galaxies as a function of how fast they rotate. It is less widely used than the luminosity function, because there is less data: one needs to measure the rotation speed, which is harder to obtain than the luminosity. Nevertheless, it has been done, as with this measurement from the HIPASS survey:

Galaxy velocity function
The number density of galaxies as a function of their rotation speed (Zwaan et al. 2010). The bottom panel shows the raw number of galaxies observed; the top panel shows the velocity function after correcting for the volume over which galaxies can be detected. Faint, slow rotators cannot be seen as far away as bright, fast rotators, so the latter are always over-represented in galaxy catalogs.
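The volume correction described in the caption is, in spirit, the classic 1/Vmax method: each galaxy is weighted by the inverse of the volume over which it could have been detected. A toy sketch of that weighting follows; the velocities and detection distances are made up for illustration, and real surveys also fold in sky coverage and completeness corrections.

```python
import numpy as np

def vmax_weights(d_max_mpc):
    """1/Vmax weights: each galaxy contributes 1/V_max to the number
    density, where V_max is the volume out to which it could have been
    detected (sketch only; assumes a full-sky, flux-limited survey)."""
    v_max = (4.0 / 3.0) * np.pi * d_max_mpc ** 3
    return 1.0 / v_max

# Toy sample: slow rotators are faint and detectable only nearby,
# so each one seen stands in for many that are missed.
v_rot = np.array([50.0, 60.0, 150.0, 200.0])   # rotation speeds, km/s
d_max = np.array([10.0, 15.0, 80.0, 120.0])    # detection limits, Mpc

weights = vmax_weights(d_max)
slow = weights[v_rot < 100].sum()   # corrected density of slow rotators
fast = weights[v_rot >= 100].sum()  # corrected density of fast rotators
print(f"slow rotators: {slow:.2e} per Mpc^3, fast: {fast:.2e} per Mpc^3")
```

Even though the toy catalog contains equal numbers of slow and fast rotators, the corrected densities are dominated by the slow ones, which is exactly the sense of the correction between the two panels of the figure.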

The idea here is that the flat rotation speed is the hallmark of a dark matter halo, providing a dynamical constraint on its mass. This should make for a cleaner measurement of the halo mass function. This turns out to be true, but it isn’t as clean as we’d like.

Those of you who are paying attention will note that the velocity function Martin Zwaan measured has the same basic morphology as the stellar mass function: approximately flat at low masses, with a steep cut-off at high masses. This looks no more like the halo mass function than the galaxy luminosity function did. So how does this help?

To measure the velocity function, one has to use some readily obtained measure of the rotation speed like the line-width of the 21cm line. This, in itself, is not a very good measurement of the halo mass. So what Pengfei did was to fit dark matter halo models to galaxies of the SPARC sample for which we have good rotation curves. Thanks to the work of Federico Lelli, we also have an empirical relation between line-width and the flat rotation velocity. Together, these provide a connection between the line-width and halo mass:

Halo mass-line width relation
The relation Pengfei found between halo mass (M200) and line-width (W) for the NFW (ΛCDM standard) halo model fit to rotation curves from the SPARC galaxy sample.
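A relation like this can be summarized as a power law in the line-width. The sketch below uses placeholder coefficients, not the values fitted in the paper; a slope near 3 is roughly what one expects if halo mass scales with the cube of a characteristic velocity.

```python
import numpy as np

def halo_mass_from_linewidth(w_kms, log_a=4.0, slope=3.3):
    """Toy mass-line-width relation:
    log10(M200/Msun) = log_a + slope * log10(W).
    The coefficients are illustrative placeholders, not the fitted
    values; the point is only the power-law mapping from W to M200."""
    return 10 ** (log_a + slope * np.log10(w_kms))

for w in (100, 300, 500):
    print(f"W = {w} km/s -> M200 ~ {halo_mass_from_linewidth(w):.1e} Msun")
```

With any such monotonic mapping in hand, each line-width measurement in a survey can be converted directly into a halo mass estimate, which is the step taken next.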

Once we have the mass-line width relation, we can assign a halo mass to every galaxy in the HIPASS survey and recompute the distribution function. But now we have not the velocity function, but the halo mass function. We’ve skipped the conversion of light to stellar mass to total mass, and used the dynamics to go straight to the halo mass function:

Empirical halo mass function
The halo mass function. The points are the data; these are well fit by a Schechter function (black line; this is commonly used for the galaxy luminosity function). The red line is the prediction of ΛCDM for dark matter halos.
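For reference, the Schechter form mentioned in the caption is just a power law with an exponential cut-off. A minimal evaluation in Python, with purely illustrative parameter values (a steep faint-end slope α near -1.8 and a cut-off mass near 10^13 M☉ are assumptions for the sketch, not the fitted numbers):

```python
import numpy as np

def schechter(m, phi_star, m_star, alpha):
    """Schechter function per dex:
    dn/dlog10(M) = ln(10) * phi* * (M/M*)^(alpha+1) * exp(-M/M*).
    Parameter values passed below are illustrative only."""
    x = m / m_star
    return np.log(10) * phi_star * x ** (alpha + 1) * np.exp(-x)

# Power law at low mass, exponential cut-off above M*
masses = np.logspace(10, 14, 5)
for m, n in zip(masses, schechter(masses, 1e-3, 1e13, -1.8)):
    print(f"M = {m:.0e}: dn/dlogM = {n:.2e}")
```

The exponential factor is what produces the high-mass turn-down in the data, while the power-law index controls the faint-end slope discussed below.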

The observed mass function agrees with the predicted one! Test successful! Well, mostly. Let’s think through the various aspects here.

First, the normalization is about right. It does not have the offset seen in the first figure. As it should not – we’ve gone straight to the halo mass in this exercise, and not used the luminosity as an intermediary proxy. So that is a genuine success. It didn’t have to work out this well, and would not do so in a very different cosmology (like SCDM).

Second, it breaks down at high mass. The data shows the usual Schechter cut-off at high mass, while the predicted number of dark matter halos continues as an unabated power law. This might be OK if high mass dark matter halos contain little neutral hydrogen. If this is the case, they will be invisible to HIPASS, the 21cm survey on which this is based. One expects this, to a certain extent: the most massive galaxies tend to be gas-poor ellipticals. That helps, but only by shifting the turn-down to slightly higher mass. It is still there, so the discrepancy is not entirely cured. At some point, we’re talking about large dark matter halos that are groups or even rich clusters of galaxies, not individual galaxies. Still, those have HI in them, so it is not like they’re invisible. Worse, examining detailed simulations that include feedback effects, there do seem to be more predicted high-mass halos that should have been detected than actually are. This is a potential missing gas-rich galaxy problem at the high mass end where galaxies are easy to detect. However, the simulations currently available to us do not provide the information we need to clearly make this determination. They don’t look right, so far as we can tell, but it isn’t clear enough to make a definitive statement.

Finally, the faint-end slope is about right. That’s amazing. The problem we’ve struggled with for decades is that the observed slope is too flat. Here a steep slope just falls out. It agrees with ΛCDM down to the lowest mass bin. If there is a missing satellite-type problem here, it is at lower masses than we probe.

That sounds great, and it is. But before we get too excited, I hope you noticed that the velocity function from the same survey is flat like the luminosity function. So why is the halo mass function steep?

When we fit rotation curves, we impose various priors. That’s statistics talk for a way of keeping parameters within reasonable bounds. For example, we have a pretty good idea of what the mass-to-light ratio of a stellar population should be. We can therefore impose as a prior that the fit return something within the bounds of reason.

One of the priors we imposed on the rotation curve fits was that they be consistent with the stellar mass-halo mass relation. Abundance matching is now part and parcel of ΛCDM, so it made sense to apply it as a prior. The total mass of a dark matter halo is an entirely notional quantity; rotation curves (and other tracers) pretty much never extend far enough to measure it. So abundance matching is great for imposing sense on a parameter that is otherwise ill-constrained. In this case, it means that what is driving the slope of the halo mass function is a prior that builds in the right slope. That’s not wrong, but neither is it an independent test. So while the observationally constrained halo mass function is consistent with the predictions of ΛCDM, we have not corroborated the prediction with independent data. What we really need at low mass is some way to constrain the total mass of small galaxies out to much larger radii than currently available. That will keep us busy for some time to come.

Fuzzy Thing!

I was contacted today by a colleague at NASA’s Goddard Space Flight Center who was seeking to return some photographic plates of Halley’s comet that had been obtained with the Burrell Schmidt telescope. I at first misread the email – I get so many requests for data, I initially assumed that he was looking for said plates. That sent me into a frenzy of where the heck are they? about data obtained by others well before my time as the director of the Warner & Swasey Observatory. Comet Halley last came by in 1986.

Fortunately, reading comprehension kicked in, and I realized that all I really needed to figure out was where they should go. The lower pressure version of where the heck are they? That would be the Pisgah Astronomical Research Institute, which has had the good sense to archive the vast treasury of astronomical plates that many observatories obtained in the pre-digital era but don’t always have the ability to preserve. But this post isn’t about that; it is just a spark to the memory.

In 1986, I was a first-year graduate student in the Princeton physics department. As such, I had at that time little more competence in observing the sky than any other physicist (practically none). Nevertheless, I traipsed out into an open field at the dark edge of town on a clear night with a pair of binoculars and a vague knowledge of what part of the sky Comet Halley should be in. How hard could it be to spot the most famous comet in history?

Impossibly hard. There was nothing to see, so far as I could find. The apparition of 1986 was a bust. This informed in me a bad attitude towards comets. There had never been a good apparition in my lifetime (all of 22 years at that point), and Halley certainly wasn’t one. I accepted that decent comets must be a rare occurrence.

Flash forward a decade to 1996, by which time I was an accomplished observer with a good working knowledge of the celestial sphere. A new comet was discovered – Hyakutake – and with it came much hype. Yeah, yeah, I’d heard it all before. Boring. Comets were always a flop.

Comet Hyakutake made a close approach to Earth in March of 1996. Its wikipedia page is pretty good, with a nice illustration of its orbit and its path on the sky as perceived from the Earth. I was working at DTM at the time, where there were lots of planetary scientists as well as a few astronomers. Someone posted an ephemeris, so despite my distrust of comets I found myself peeking at what its trajectory would be. Nevertheless, we had a long period of cloudy weather, so there was nothing to see even if there was something to see, which I expected there wasn’t.

At this time, my elder daughter Caitlyn was two years old. I made a habit of taking her out and pointing things out in the sky. We watched the sunset, the moon set after it near new moon, and the moon rise near full moon. She seemed content to listen to her old man babble about the lights in the sky. Apparently more of that sank in than I realized.

My wife Anne was teaching at Loyola, and her department chair had invited us over for a party around the vernal equinox. We enjoyed the adult company and Caitlyn put up well with it – up to a point. It got dark and we bid our farewells and headed out. We had parked across the street, and on the way out Betsy (our hostess) said “Stacy – you’re an astronomer. Where’s the comet?”

I got this pained expression. Stupid comets. But it had cleared up for the first time in nearly a week, and looking up from the front door, I could quickly orient myself on the sky. Doing so, I realized that the comet was behind the house. So I pointed up and over, towards the back yard and through the roof: “Over there.” I continued across the street to the car with the toddler cradled in my left arm, fiddling with the keys with my right hand.

We did not have a nice car: one had to insert the key manually into the door to unlock it. As I went around the car to get to the driver’s side, I was focused on this mundane task. It did not occur to me to look up in the direction I had just pointed. I felt Caitlyn stretch her arm to point at the sky, exclaiming “Fuzzy thing!”

I looked up. There it was: a big, bright, fuzzy ball. A brilliant cometary apparition, the coma easily visible even in Baltimore. My two-year-old daughter spotted it and accurately classified it before I even looked up.

Comet Hyakutake on March 22, 1996.

Comet Hyakutake was an amazing event. Not only spectacular to look at, but it drove home celestial mechanics in a visceral way. It was at this time very close to Earth (by the scale of such things). That meant it made noticeable progress in its orbit from night to night. You couldn’t see it moving just staring at it, but one night it was here, the next night it was there, the following night over there. It was skipping through the constellations at a dizzying speed for an object that takes c. 70,000 years to complete one orbit. But we were close enough that one could easily see the progress it made across the sky from night to night, if not minute to minute. If you wanted to take a picture with a telescope, you had to track the telescope to account for this – hence the star trails in the image above: the stars appear as streaks because the telescope is moving with the comet, not with the sky.

The path of Comet Hyakutake across the sky.

This figure (credit: Tom Ruen) shows the orbital path of Comet Hyakutake projected on the sky (constellations outlined in blue). Most of the time, the comet is far away near the aphelion of its orbit. As it fell in towards the sun, its path made annual ellipses due to the reflex motion of the Earth’s own orbit – the parallax. These grew in size until the comet came sweeping by in the month of March, 1996. Think about it: it spent tens of thousands of years spiraling down towards us, only to shoot by, transiting well across the sky in only a couple of weeks. Celestial mechanics made visible.

Not long after Hyakutake started to fade, Comet Hale-Bopp became visible. Hale-Bopp did not pass as close to the Earth as Hyakutake, so it didn’t leap across the sky like Tom Bombadil. But Hale-Bopp was a physically larger comet. As such, it got bright and stayed bright for a long time, remaining visible to the naked eye for a record year and a half. In the months after Hyakutake’s apparition, we could see Hale-Bopp chasing the sunset from the balcony of our apartment. Caitlyn and I would sit there and watch it as the twilight faded into dark. Her experience of comets had been the opposite of mine: where in my thirty years (before that point) they had been rare and disappointing, in her (by then) three years they had been common and spectacular.

The sky is full of marvels. You never know when you might get to see one.

New web domain

I happened to visit this blog as a visitor from a computer not mine. Seeing it that way made me realize how obnoxious the ads had become. So WordPress’s extortion worked; I’ve agreed to send them a few $ every month to get rid of the ads. With it comes a new domain name: tritonstation.com. Bookmarks to the previous website (tritonstation.wordpress.com) should redirect here. Let me know if a problem arises, or the barrage of ads fails to let up. I may restructure the web page so there is more here than just this blog, but that will have to await my attention in my copious spare time.

As it happens, I depart soon to attend an IAU meeting on galaxy dynamics. This is being held in part to honor the career of Prof. Jerry Sellwood, with whom I had the pleasure to work while a postdoc at Rutgers. He hosted a similar meeting at Rutgers in 1998; I’m sure that some of the same issues discussed then will be debated again next week.

Two fields divided by a common interest

Britain and America are two nations divided by a common language.

attributed to George Bernard Shaw

Physics and Astronomy are two fields divided by a common interest in how the universe works. There is a considerable amount of overlap between some sub-fields of these subjects, and practically none at all in others. The aims and goals are often in common, but the methods, assumptions, history, and culture are quite distinct. This leads to considerable confusion, as with the English language – scientists with different backgrounds sometimes use the same words to mean rather different things.

A few terms that are commonly used to describe scientists who work on the subjects that I do include astronomer, astrophysicist, and cosmologist. I could be described as any of these. But I also know lots of scientists to whom these words could be applied but would mean something rather different.

A common question I get is “What’s the difference between an astronomer and an astrophysicist?” This is easy to answer from my experience as a long-distance commuter. If I get on a plane, and the person next to me is chatty and asks what I do, if I feel like chatting, I am an astronomer. If I don’t, I’m an astrophysicist. The first answer starts a conversation, the second shuts it down.

Flippant as that anecdote is, it is excruciatingly accurate – both for how people react (commuting between Cleveland and Baltimore for a dozen years provided lots of examples), and for what the difference is: practically none. If I try to offer a more accurate definition, then I am sure to fail to provide a complete answer, as I don’t think there is one. But to make the attempt:

Astronomy is the science of observing the sky, encompassing all elements required to do so. That includes practical matters like the technology of telescopes and their instruments across all wavelengths of the electromagnetic spectrum, and theoretical matters that allow us to interpret what we see up there: what’s a star? a nebula? a galaxy? How does the light emitted by these objects get to us? How do we count photons accurately and interpret what they mean?

Astrophysics is the science of how things in the sky work. What makes a star shine? [Nuclear reactions]. What produces a nebular spectrum? [The atomic physics of incredibly low density interstellar plasma.] What makes a spiral galaxy rotate? [Gravity! Gravity plus, well, you know, something. Or, if you read this blog, you know that we don’t really know.] So astrophysics is the physics of the objects astronomy discovers in the sky. This is a rather broad remit, and covers lots of physics.

With this definition, astrophysics is a subset of astronomy – such a large and essential subset that the terms can be and often are used interchangeably. These definitions are so intimately intertwined that the distinction is not obvious even for those of us who publish in the learned journals of the American Astronomical Society: the Astronomical Journal (AJ) and the Astrophysical Journal (ApJ). I am often hard-pressed to distinguish between them, but to attempt it in brief, the AJ is where you publish a paper that says “we observed these objects” and the ApJ is where you write “here is a model to explain these objects.” The opportunity for overlap is obvious: a paper that says “observations of these objects test/refute/corroborate this theory” could appear in either. Nevertheless, there was clearly a sufficient need to establish a separate journal focused on the physics of how things in the sky worked to launch the Astrophysical Journal in 1895 to complement the older Astronomical Journal (dating from 1849).

Cosmology is the study of the entire universe. As a science, it is the subset of astrophysics that encompasses observations that measure the universe as a physical entity: its size, age, expansion rate, and temporal evolution. Examples are sufficiently diverse that practicing scientists who call themselves cosmologists may have rather different ideas about what it encompasses, or whether it even counts as astrophysics in the way defined above.

Indeed, more generally, cosmology is where science, philosophy, and religion collide. People have always asked the big questions – we want to understand the world in which we find ourselves, our place in it, our relation to it, and to its Maker in the religious sense – and we have always made up stories to fill in the gaping void of our ignorance. Stories that become the stuff of myth and legend until they are unquestionable aspects of a misplaced faith that we understand all of this. The science of cosmology is far from immune to myth making, and oftentimes philosophical imperatives have overwhelmed observational facts. The lengthy persistence of SCDM in the absence of any credible evidence that Ωm = 1 is a recent example. Another that comes and goes is the desire for a Phoenix universe – one that expands, recollapses, and is then reborn for another cycle of expansion and contraction that repeats ad infinitum. This is appealing for philosophical reasons – the universe isn’t just some bizarre one-off – but there’s precious little that we know (or perhaps can know) to suggest it is a reality.

This has all happened before, and will all happen again.

Nevertheless, genuine and enormous empirical progress has been made. It is stunning what we know now that we didn’t a century ago. It has only been 90 years since Hubble established that there are galaxies external to the Milky Way. Prior to that, the prevailing cosmology consisted of a single island universe – the Milky Way – that tapered off into an indefinite, empty void. Until Hubble established otherwise, it was widely (though not universally) thought that the spiral nebulae were some kind of gas clouds within the Milky Way. Instead, the universe is filled with millions and billions of galaxies comparable in stature to the Milky Way.

We have sometimes let our progress blind us to the gaping holes that remain in our knowledge. Some of our more imaginative and less grounded colleagues take some of our more fanciful stories to be established fact, which sometimes just means the problem is old and familiar, and hence boring even if still unsolved. They race ahead to create new stories about entities like multiverses. To me, multiverses are manifestly metaphysical: great fun for late night bull sessions, but not a legitimate branch of physics.

So cosmology encompasses a lot. It can mean very different things to different people, and not all of it is scientific. I am not about to touch on the world-views of popular religions, all of which have some flavor of cosmology. There is controversy enough about these definitions among practicing scientists.

I started as a physicist. I earned an SB in physics from MIT in 1985, and went on to the physics (not the astrophysics) department of Princeton for grad school. I had elected to study physics because I had a burning curiosity about how the world works. It was not specific to astronomy as defined above. Indeed, astronomy seemed to me at the time to be but one of many curiosities, and not necessarily the main one.

There was no clear department of astronomy at MIT. Some people who practiced astrophysics were in the physics department; others in Earth, Atmospheric, and Planetary Science, still others in Mathematics. At the recommendation of my academic advisor Michael Feld, I wound up doing a senior thesis with George W. Clark, a high energy astrophysicist who mostly worked on cosmic rays and X-ray satellites. There was a large high energy astrophysics group at MIT that studied X-ray sources and the physics that produced them – things like neutron stars, black holes, supernova remnants, and the intracluster medium of clusters of galaxies – celestial objects with sufficiently extreme energies to make X-rays. The X-ray group needed to do optical follow-up (OK, there’s an X-ray source at this location on the sky. What’s there?) so they had joined the MDM Observatory. I had expressed a vague interest in orbital dynamics, and Clark had become interested in the structure of elliptical galaxies, motivated by the elegant orbital structures described by Martin Schwarzschild. The astrophysics group did a lot of work on instrumentation, so we had access to a new-fangled CCD. These made (and continue to make) much more sensitive detectors than photographic plates.

Empowered by this then-new technology, we embarked on a campaign to image elliptical galaxies with the MDM 1.3 m telescope. The initial goal was to search for axial twists as the predicted consequence of triaxial structure – Schwarzschild had shown that elliptical galaxies need not be oblate or prolate, but could have three distinct characteristic lengths along their principal axes. What we noticed instead with the sensitive CCD was a wonder of new features in the low surface brightness outskirts of these galaxies. Most elliptical galaxies just fade smoothly into obscurity, but every fourth or fifth case displayed distinct shells and ripples – features that were otherwise hard to spot that had only recently been highlighted by Malin & Carter.

A modern picture (courtesy of Pierre-Alain Duc) of the shell galaxy Arp 227 (NGC 474). Quantifying the surface brightness profiles of the shells in order to constrain theories for their origin became the subject of my senior thesis. I found that they were most consistent with stars on highly elliptical orbits, as expected from the shredded remnants of a cannibalized galaxy. Observations like this contributed to a sea change in the thinking about galaxies as isolated island universes that never interacted to the modern hierarchical view in which galaxy mergers are ubiquitous.

At the time I was doing this work, I was of course reading up on galaxies in general, and came across Mike Disney’s arguments as to how low surface brightness galaxies could be ubiquitous and yet missed by many surveys. This resonated with my new observing experience. Look hard enough, and you would find something new that had never before been seen. This proved to be true, and remains true to this day.

I went on only two observing runs my senior year. The weather was bad for the first one, clearing only the last night during which I collected all the useful data. The second run came too late to contribute to my thesis. But I was enchanted by the observatory as a remote laboratory, perched in the solitude of the rugged mountains, themselves alone in an empty desert of subtly magnificent beauty. And it got dark at night. You could actually see the stars. More stars than can be imagined by those confined to the light pollution of a city.

It hadn’t occurred to me to apply to an astronomy graduate program. I continued on to Princeton, where I was assigned to work in the atomic physics lab of Will Happer. There I mostly measured the efficiency of various buffer gases in moderating spin exchange between sodium and xenon. This resulted in my first published paper.

In retrospect, this is kinda cool. As an alkali, the atomic structure of sodium is basically that of a noble gas with a spare electron it’s eager to give away in a chemical reaction. Xenon is a noble gas, chemically inert as it already has nicely complete atomic shells; it wants neither to give nor receive electrons from other elements. Put the two together in a vapor, and they can form weak van der Waals molecules in which they share the unwanted valence electron like a hot potato. The nifty thing is that one can spin-polarize the electron by optical pumping with a laser. As it happens, the wave function of the electron has a lot of overlap with the nucleus of the xenon (one of the allowed states has no angular momentum). Thanks to this overlap, the spin polarization imparted to the electron can be transferred to the xenon nucleus. In this way, it is possible to create large amounts of spin-polarized xenon nuclei. This greatly enhances the signal of MRI, and has found an application in medical imaging: a patient can breathe in a chemically inert [SAFE], spin polarized noble gas, making visible all the little passageways of the lungs that are otherwise invisible to an MRI. I contributed very little to making this possible, but it is probably the closest I’ll ever come to doing anything practical.

The same technology could, in principle, be applied to make dark matter detection experiments phenomenally more sensitive to spin-dependent interactions. Giant tanks of xenon have already become one of the leading ways to search for WIMP dark matter, gobbling up a significant fraction of the world supply of this rare noble gas. Spin polarizing the xenon on the scales of tons rather than grams is a considerable engineering challenge.

Now, in that last sentence, I lapsed into a bit of physics arrogance. We understand the process. Making it work is “just” a matter of engineering. In general, there is a lot of hard work involved in that “just,” and a lot of times it is a practical impossibility. That’s probably the case here, as the polarization decays away quickly – much more quickly than one could purify and pump tons of the stuff into a vat maintained at a temperature near absolute zero.

At the time, I did not appreciate the meaning of what I was doing. I did not like working in Happer’s lab. The windowless confines, kept dark but for the sickly orange glow of a sodium D laser, were not a positive environment to be in day after day after day. More importantly, the science did not call to my heart. I began to dream of a remote lab on a scenic mountain top.

I also found the culture in the physics department at Princeton to be toxic. Nothing mattered but to be smarter than the next guy (and it was practically all guys). There was no agreed measure for this, and for the most part people weren’t so brazen as to compare test scores. So the thing to do was Be Arrogant. Everybody walked around like they were too frickin’ smart to be bothered to talk to anyone else, or even see them under their upturned noses. It was weird – everybody there was smart, but no human could possibly be as smart as these people thought they were. Well, not everybody, of course – Jim Peebles is impossibly intelligent, sane, and even nice (perhaps he is an alien, or at least a Canadian) – but for most of Princeton arrogance was a defining characteristic that seeped unpleasantly into every interaction.

It was, in considerable part, arrogance that drove me away from physics. I was appalled by it. One of the best displays was put on by David Gross in a colloquium that marked the take-over of theoretical physics by string theory. The dude was talking confidently in bold positivist terms about predictions that were twenty orders of magnitude in energy beyond any conceivable experimental test. That, to me, wasn’t physics.

More than thirty years on, I can take cold comfort that my youthful intuition was correct. String theory has conspicuously failed to provide the vaunted “theory of everything” that was promised. Instead, we have vague “landscapes” of 10^500 possible theories. Just want one. 10^500 is not progress. It’s getting hopelessly lost. That’s what happens when brilliant ideologues are encouraged to wander about in their hyperactive imaginations without experimental guidance. You don’t get physics, you get metaphysics. If you think that sounds harsh, note that Gross himself takes exactly this issue with multiverses, saying the notion “smells of angels” and worrying that a generation of physicists will be misled down a garden path – exactly the way he misled a generation with string theory.

So I left Princeton, and switched to a field where progress could be made. I chose to go to the University of Michigan, because I knew it had access to the MDM telescopes (one of the M’s stood for Michigan, the other MIT, with the D for Dartmouth) and because I was getting married. My wife is an historian, and we needed a university that was good in both our fields.

When I got to Michigan, I was ready to do research. I wanted to do more on shell galaxies, and low surface brightness galaxies in general. I had had enough coursework, I reckoned; I was ready to DO science. So I was somewhat taken aback that they wanted me to do two more years of graduate coursework in astronomy.

Some of the physics arrogance had inevitably been incorporated into my outlook. To a physicist, all other fields are trivial. They are just particular realizations of some subset of physics. Chemistry is just applied atomic physics. Biology barely even counts as science, and those parts that do could be derived from physics, in principle. As mere subsets of physics, any other field can and will be picked up trivially.

After two years of graduate coursework in astronomy, I had the epiphany that the field was not trivial. There were excellent reasons, both practical and historical, why it was a separate field. I had been wrong to presume otherwise.

Modern physicists are not afflicted by this epiphany. That bad attitude I was guilty of persists and is remarkably widespread. I am frequently confronted by young physicists eager to mansplain my own field to me, who casually assume that I am ignorant of subjects that I wrote papers on before they started reading the literature, and who equate a disagreement with their interpretation on any subject with ignorance on my part. This is one place the fields diverge enormously. In physics, if it appears in a textbook, it must be true. In astronomy, we recognize that we’ve been wrong about the universe so many times, we’ve learned to be tolerant of interpretations that initially sound absurd. Today’s absurdity may be tomorrow’s obvious fact. Physicists don’t share this history, and often fail to distinguish interpretation from fact, much less cope with the possibility that a single set of facts may admit multiple interpretations.

Cosmology has often been a leader in being wrong, and consequently enjoyed a shady reputation in both physics and astronomy for much of the 20th century. When I started on the faculty at the University of Maryland in 1998, there was no graduate course in the subject. This seemed to me to be an obvious gap to fill, so I developed one. Some of the senior astronomy faculty expressed concern as to whether this could be a rigorous 3 credit graduate course, and sent a neutral representative to discuss the issue with me. He was satisfied. As would be any cosmologist – I was teaching LCDM before most other cosmologists had admitted it was a thing.

At that time, 1998, my wife was also a new faculty member at John Carroll University. They held a welcome picnic, which I attended as the spouse. So I strike up a conversation with another random spouse who is also standing around looking similarly out of place. Ask him what he does. “I’m a physicist.” Ah! common ground – what do you work on? “Cosmology and dark matter.” I was flabbergasted. How did I not know this person? It was Glenn Starkman, and this was my first indication that sometime in the preceding decade, cosmology had become an acceptable field in physics and not a suspect curiosity best left to woolly-minded astronomers.

This was my first clue that there were two entirely separate groups of professional scientists who self-identified as cosmologists. One from the astronomy tradition, one from physics. These groups use the same words to mean the same things – sometimes. There is a common language. But like British English and American English, sometimes different things are meant by the same words.

“Dark matter” is a good example. When I say dark matter, I mean the vast diversity of observational evidence for a discrepancy between measurable probes of gravity (orbital speeds, gravitational lensing, equilibrium hydrostatic temperatures, etc.) and what is predicted by the gravity of the observed baryonic material – the stars and gas we can see. When a physicist says “dark matter,” he seems usually to mean the vast array of theoretical hypotheses for what new particle the dark matter might be.

To give a recent example, a colleague who is a world-renowned expert on dark matter, and an observational astronomer in a physics department dominated by particle cosmologists, noted that their chairperson had advocated a particular hiring plan because “we have no one who works on dark matter.” This came across as incredibly disrespectful, which it is. But it is also simply clueless. It took some talking to work through, but what we think he meant was that they had no one who worked on laboratory experiments to detect dark matter. That’s a valid thing to do, which astronomers don’t deny. But it is a severely limited way to think about it.

To date, the evidence for dark matter is 100% astronomical in nature. That’s all of it. Despite enormous effort and progress, laboratory experiments provide 0%. Zero point zero zero zero. And before some fool points to the cosmic microwave background, that is not a laboratory experiment. It is astronomy as defined above: information gleaned from observation of the sky. That it is done with photons from the mm and microwave part of the spectrum instead of the optical part of the spectrum doesn’t make it fundamentally different: it is still an observation of the sky.

And yet, apparently the observational work that my colleague did was unappreciated by his own department head, who I know to fancy himself an expert on the subject. Yet the existence of a complementary expert in his own department never registered with him. Even though, as chair, he would be responsible for reviewing the contributions of the faculty in his department on an annual basis.

To many physicists we astronomers are simply invisible. What could we possibly teach them about cosmology or dark matter? That we’ve been doing it for a lot longer is irrelevant. Only what they [re]invent themselves is valid, because astronomy is a subservient subfield populated by people who weren’t smart enough to become particle physicists. Because particle physicists are the smartest people in the world. Just ask one. He’ll tell you.

To give just one personal example of many: a few years ago, after I had published a paper in the premiere physics journal, I had a particle physics colleague ask, in apparent sincerity, “Are you an astrophysicist?” I managed to refrain from shouting YES YOU CLUELESS DUNCE! Only been doing astrophysics for my entire career!

As near as I can work out, his erroneous definition of astrophysicist involved having a Ph.D. in physics. That’s a good basis to start learning astrophysics, but it doesn’t actually qualify. Kris Davidson noted a similar sociology among his particle physics colleagues: “They simply declare themselves to be astrophysicists.” Well, I can tell you – having made that same mistake personally – it ain’t that simple. I’m pleased that so many physicists are finally figuring out what I did in the 1980s, and welcome their interest in astrophysics and cosmology. But they need to actually learn the subject, not just assume they’ll pick it up in a snap.

 

Shameless commercialism


I haven’t written here since late January, which not coincidentally was early in the Spring semester. Let’s just say it was… eventful. Mostly in an administrative way, which is neither a good way, nor an interesting way.

Not that plenty interesting hasn’t happened. I had a great visit to Aachen for the conference Dark Matter & Modified Gravity. Lots of emphasis on the philosophy of science, as well as history and sociology. Almost enough to make me think there is hope for the future. Almost. From there I visited CP3 in Odense where I gave both a science talk and a public talk at the Anarkist beer & food lab. It was awesome – spoke to a packed house in a place that was clearly used as a venue for rock concerts most of the time. People actually came out on a crappy night in February and paid a cover to hear about science!

I’d love to simply write my Aachen talk here, or the public Odense talk, and I should, but – writing takes a lot longer than talking. I’m continually amazed at how inefficient human communication is. Writing is painfully slow, and while I go to great lengths to write clearly and compellingly, I don’t always succeed. Even when I do, reading comprehension does not seem to be on an upward trajectory in the internet age. I routinely get accused of ignoring this or that topic by scientists too lazy to do a literature search wherein they would find I had written a paper on that. This has gotten so bad that it is currently a fad to describe as natural a phenomenon I explicitly showed over 20 years ago was the opposite of natural in LCDM. Faith in dark matter overpowers reason.

So many stories to tell, so little time to tell them. Some are positive. But far too many are the sordid sort of human behavior overriding the ideals of science. Self awareness is in short supply, and objectivity seems utterly forgotten as a concept, let alone a virtue. Many scientists no longer seem to appreciate the distinction between an a priori prediction and a post-hoc explanation pulled out of one’s arse when confronted with confounding evidence.

Consequently, I have quite intentionally refrained from ranting about bad scientific behavior too much, mostly in a mistaken but habitual “if you can’t say anything nice” sort of way. Which is another reason I have been quiet of late: I really don’t like to speak ill of my colleagues, even when they deserve it. There is so much sewage masquerading as science that I’m holding my nose while hoping it flows away under the bridge.

So, to divert myself, I have been dabbling in art. I am not a great artist by any means, but I’ve had enough people tell me “I’d buy that!” that I finally decided to take them at their word (silly, I know) and open a Zazzle store. Which immediately wants me to add links to it, which I find myself unprepared to do. I have had an academic website for a long time (since 1996, which is forever in internet years) but it seems really inappropriate to put them there. So I’m putting them here because this is the only place I’ve got readily available.

So the title “shameless commercialism” is quite literal. I hadn’t meant to advertise it here at all. Just find I need a web page, stat! It ain’t like I’ve even had time to stock the store – it is a lot more fun to do science than write it up; similarly, it is a lot more fun to create art than it is to market it. So there is only the one inaugural item so far, an Allosaurus on a T-shirt. Seems to fit the mood of the past semester.


A personal recollection of how we learned to stop worrying and love the Lambda


There is a tendency when teaching science to oversimplify its history for the sake of getting on with the science. How it came to be isn’t necessary to learn it. But to do science requires a proper understanding of the process by which it came to be.

The story taught to cosmology students seems to have become: we didn’t believe in the cosmological constant (Λ), then in 1998 the Type Ia supernovae (SN) monitoring campaigns detected accelerated expansion, then all of a sudden we did believe in Λ. The actual history was, of course, rather more involved – to the point where this oversimplification verges on disingenuous. There were many observational indications of Λ that were essential in paving the way.

Modern cosmology starts in the early 20th century with the recognition that the universe should be expanding or contracting – a theoretical inevitability of General Relativity that Einstein initially tried to dodge by inventing the cosmological constant – and is expanding in fact, as observationally established by Hubble and Slipher and many others since. The Big Bang was largely considered settled truth after the discovery of the existence of the cosmic microwave background (CMB) in 1964.

The CMB held a puzzle, as it quickly was shown to be too smooth. The early universe was both isotropic and homogeneous. Too homogeneous. We couldn’t detect the density variations that could grow into galaxies and other immense structures. Though such density variations are now well measured as temperature fluctuations that are statistically well described by the acoustic power spectrum, the starting point was that these fluctuations were a disappointing no-show. We should have been able to see them much sooner, unless something really weird was going on…

That something weird was non-baryonic cold dark matter (CDM). For structure to grow, it needed the helping hand of the gravity of some unseen substance. Normal matter did not suffice. The most elegant cosmology, the Einstein-de Sitter universe, had a mass density Ωm = 1. But the measured abundances of the light elements were only consistent with the calculations of big bang nucleosynthesis if normal matter amounted to only 5% of Ωm = 1. This, plus the need to grow structure, led to the weird but seemingly unavoidable inference that the universe must be full of invisible dark matter. This dark matter needed to be some slow moving, massive particle that neither interacts with light nor resides within the menagerie of particles present in the Standard Model of Particle Physics.

CDM and early universe Inflation were established in the 1980s. Inflation gave a mechanism that drove the mass density to exactly one (elegant!), and CDM gave us hope for enough mass to get to that value. Together, they gave us the Standard CDM (SCDM) paradigm with Ωm = 1.000 and H0 = 50 km/s/Mpc.

I was there when SCDM failed.

It is hard to overstate the fervor with which the SCDM paradigm was believed. Inflation required that the mass density be exactly one; Ωm < 1 was inconceivable. For an Einstein-de Sitter universe to be old enough to contain the oldest stars, the Hubble constant had to be the lower of the two values (50 or 100) commonly discussed at that time. That meant that H0 > 50 was Right Out. We didn’t even discuss Λ. Λ was Unmentionable. Unclean.

SCDM was Known, Khaleesi.


Λ had attained unmentionable status in part because of its origin as Einstein’s greatest blunder, and in part through its association with the debunked Steady State model. But serious mention of it creeps back into the literature by 1990. The first time I personally heard Λ mentioned as a serious scientific possibility was by Yoshii at a conference in 1993. Yoshii based his argument on a classic cosmological test, N(m) – the number of galaxies as a function of how faint they appeared. The deeper you look, the more you see, in a way that depends on the intrinsic luminosity of galaxies, and how they fill space. Look deep enough, and you begin to trace the geometry of the cosmos.

At this time, one of the serious problems confronting the field was the faint blue galaxies problem. There were so many faint galaxies on the sky, it was incredibly difficult to explain them all. Yoshii made a simple argument. To get so many galaxies, we needed a big volume. The only way to do that in the context of the Robertson-Walker metric that describes the geometry of the universe is if we have a large cosmological constant, Λ. He was arguing for ΛCDM five years before the SN results.
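Yoshii’s volume argument is easy to check numerically. Here is a minimal sketch (my own, with an illustrative H0, not anything from his talk): integrate the comoving distance in flat FLRW universes with and without Λ, and compare the enclosed volumes.

```python
# Sketch: why a large Λ buys you a big volume at fixed flat geometry.
# H0 = 70 is an illustrative value, not a claim about the right one.
from math import sqrt, pi

C = 299792.458  # speed of light, km/s
H0 = 70.0       # Hubble constant, km/s/Mpc (illustrative)

def comoving_distance(z_max, omega_m, omega_l, steps=10000):
    """D_C = (c/H0) * integral dz / E(z), with E(z) = sqrt(Om(1+z)^3 + OL)."""
    dz = z_max / steps
    total = 0.0
    for i in range(steps):
        z = (i + 0.5) * dz  # midpoint rule
        total += dz / sqrt(omega_m * (1 + z) ** 3 + omega_l)
    return C / H0 * total  # Mpc

def comoving_volume(z_max, omega_m, omega_l):
    """All-sky comoving volume out to z_max (flat geometry)."""
    d = comoving_distance(z_max, omega_m, omega_l)
    return 4 * pi / 3 * d ** 3  # Mpc^3

v_scdm = comoving_volume(3.0, 1.0, 0.0)   # Einstein-de Sitter
v_lcdm = comoving_volume(3.0, 0.3, 0.7)   # Lambda-dominated
print(v_lcdm / v_scdm)  # the Λ universe encloses a few times more volume
```

More volume means more galaxies along any line of sight, which is the crux of the faint-galaxy-counts argument.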

Lambda? We don’t need no stinking Lambda!

Yoshii was shouted down. NO! Galaxies evolve! We don’t need no stinking Λ! In retrospect, Yoshii & Peterson (1995) looks like a good detection of Λ. Perhaps Yoshii & Peterson also deserve a Nobel prize?

Indeed, there were many hints that Λ (or at least low Ωm) was needed, e.g., the baryon catastrophe in clusters, the power spectrum of IRAS galaxies, the early appearance of bound structures, the statistics of gravitational lenses, and so on. Certainly by the mid-90s it was clear that we were not going to make it to Ωm = 1. Inflation was threatened – it requires Ωm = 1 – or at least a flat geometry: Ωm + ΩΛ = 1.

SCDM was in crisis.

A very influential 1995 paper by Ostriker & Steinhardt did a lot to launch ΛCDM. I was impressed by the breadth of data Ostriker & Steinhardt discussed, all of which demanded low Ωm. I thought the case for Λ was less compelling, as it hinged on the age problem in a way that might also have been solved, at that time, by simply having an open universe (low Ωm with no Λ). This would ruin Inflation, but I wasn’t bothered by that. I expect they were. Regardless, they definitely made the case for ΛCDM three years before the supernovae results. Their arguments were accepted by almost everyone who was paying attention, including myself. I heard Ostriker give a talk around this time during which he was asked “what cosmology are you assuming?” to which he replied “the right one.” Called the “concordance” cosmology by Ostriker & Steinhardt, ΛCDM had already achieved the status of most-favored cosmology by the mid-90s.

A simplified version of the diagram of Ostriker & Steinhardt (1995) illustrating just a few of the constraints they discussed. Direct measurements of the expansion rate, mass density, and ages of the oldest stars excluded SCDM, instead converging on a narrow window – what we now call ΛCDM.

Ostriker & Steinhardt neglected to mention an important prediction of Λ: not only should the universe expand, but that expansion rate should accelerate! In 1995, that sounded completely absurd. People had looked for such an effect, and claimed not to see it. So I wrote a brief note pointing out the predicted acceleration of the expansion rate. I meant it in a bad way: how crazy would it be if the expansion of the universe was accelerating?! This was an obvious and inevitable consequence of ΛCDM that was largely being swept under the rug at that time.

I mean[t], surely we could live with Ωm < 1 but no Λ. Can’t we all just get along? Not really, as it turned out. I remember Mike Turner pushing the SN people very hard in Aspen in 1997 to Admit Λ. He had an obvious bias: as an Inflationary cosmologist, he had spent the previous decade castigating observers for repeatedly finding Ωm < 1. That’s too little mass, you fools! Inflation demands Ωm = 1.000! Look harder!

By 1997, Turner had, like many cosmologists, finally wrapped his head around the fact that we weren’t going to find enough mass for Ωm = 1. This was a huge problem for Inflation. The only possible solution, albeit an ugly one, was if Λ made up the difference. So there he was at Aspen, pressuring the people who observed supernovae to Admit Λ. One, in particular, was Richard Ellis, a great and accomplished astronomer who had led the charge in shouting down Yoshii. They didn’t yet have enough data to Admit Λ. Not.Yet.

By 1998, there were many more high redshift SNIa. Enough to see Λ. This time, after the long series of results only partially described above, we were intellectually prepared to accept it – unlike in 1993. Had the SN experiments been conducted five years earlier, and obtained exactly the same result, they would not have been awarded the Nobel prize. They would instead have been dismissed as a trick of astrophysics: the universe evolves, metallicity was lower at earlier times, that made SN then different from SN now, and so they could not be used as standard candles. This sounds silly now, as we’ve figured out how to calibrate for intrinsic variations in the luminosities of Type Ia SN, but that is absolutely how we would have reacted in 1993, and no amount of improvements in the method would have convinced us. This is exactly what we did with faint galaxy counts: galaxies evolve; you can’t hope to understand that well enough to constrain cosmology. Do you ever hear them cited as evidence for Λ?

Great as the supernovae experiments to measure the metric genuinely were, they were not a discovery so much as a confirmation of what cosmologists had already decided to believe. There was no singular discovery that changed the way we all thought. There was a steady drip, drip, drip of results pointing towards Λ all through the ’90s – the age problem in which the oldest stars appeared to be older than the universe in which they reside, the early appearance of massive clusters and galaxies, the power spectrum of galaxies from redshift surveys that preceded Sloan, the statistics of gravitational lenses, and the repeated measurement of 1/4 < Ωm < 1/3 in a large variety of independent ways – just to name a few. By the mid-90’s, SCDM was dead. We just refused to bury it until we could accept ΛCDM as a replacement. That was what the Type Ia SN results really provided: a fresh and dramatic reason to accept the accelerated expansion that we’d already come to terms with privately but had kept hidden in the closet.

Note that the acoustic power spectrum of temperature fluctuations in the cosmic microwave background (as opposed to the mere existence of the highly uniform CMB) plays no role in this history. That’s because temperature fluctuations hadn’t yet been measured beyond their rudimentary detection by COBE. COBE demonstrated that temperature fluctuations did indeed exist (finally!) as they must, but precious little beyond that. Eventually, after the settling of much dust, COBE was recognized as one of many reasons why Ωm ≠ 1, but it was neither the most clear nor most convincing reason at that time. Now, in the 21st century, the acoustic power spectrum provides a great way to constrain what all the parameters of ΛCDM have to be, but it was a bit player in its development. The water there was carried by traditional observational cosmology using general purpose optical telescopes in a great variety of different ways, combined with a deep astrophysical understanding of how stars, galaxies, quasars and the whole menagerie of objects found in the sky work. All the vast knowledge incorporated in textbooks like those by Harrison, by Peebles, and by Peacock – knowledge that often seems to be lacking in scientists trained in the post-WMAP era.

Despite being a late arrival, the CMB power spectrum measured in 2000 by Boomerang and 2003 by WMAP did one important new thing to corroborate the ΛCDM picture. The supernovae data didn’t detect accelerated expansion so much as exclude the deceleration we had nominally expected. The data were also roughly consistent with a coasting universe (neither accelerating nor decelerating); the case for acceleration only became clear when we assumed that the geometry of the universe was flat (Ωm + ΩΛ = 1). That didn’t have to work out, so it was a great success of the paradigm when the location of the first peak of the power spectrum appeared in exactly the right place for a flat FLRW geometry.

The consistency of these data has given ΛCDM an air of invincibility among cosmologists. But a modern reconstruction of the Ostriker & Steinhardt diagram leaves zero room – hence the tension between H0 = 73 measured directly and H0 = 67 from multiparameter CMB fits.

Constraints from the acoustic power spectrum of the CMB overplotted on the direct measurements from the plot above. Initially in great consistency with those measurements, the best-fit CMB values have steadily wandered away from the most-favored region of parameter space that established ΛCDM in the first place. This is most apparent in the tension with H0.

In cosmology, we are accustomed to having to find our way through apparently conflicting data. The difference between an expansion rate of 67 and 73 seems trivial given that the field was long riven – in living memory – by the dispute between 50 and 100. This gives rise to the expectation that the current difference is just a matter of some subtle systematic error somewhere. That may well be correct. But it is also conceivable that FLRW is inadequate to describe the universe, and we have been driven to the objectively bizarre parameters of ΛCDM because it happens to be the best approximation that can be obtained to what is really going on when we insist on approximating it with FLRW.

Though a logical possibility, that last sentence will likely drive many cosmologists to reach for their torches and pitchforks. Before killing the messenger, we should remember that we once endowed SCDM with the same absolute certainty we now attribute to ΛCDM. I was there, 3,000 internet years ago, when SCDM failed. There is nothing so sacred in ΛCDM that it can’t suffer the same fate, as has every single cosmology ever devised by humanity.

Today, we still lack definitive knowledge of either dark matter or dark energy. These add up to 95% of the mass-energy of the universe according to ΛCDM. These dark materials must exist.

It is Known, Khaleesi.

Hypothesis testing with gas rich galaxies


This Thanksgiving, I’d like to highlight something positive. Recently, Bob Sanders wrote a paper pointing out that gas rich galaxies are strong tests of MOND. The usual fit parameter, the stellar mass-to-light ratio, is effectively negligible when gas dominates. The MOND prediction follows straight from the gas distribution, for which there is no equivalent freedom. We understand the 21 cm spin-flip transition well enough to relate observed flux directly to gas mass.

In any human endeavor, there are inevitably unsung heroes who carry enormous amounts of water but seem to get no credit for it. Sanders is one of those heroes when it comes to the missing mass problem. He was there at the beginning, and has a valuable perspective on how we got to where we are. I highly recommend his books, The Dark Matter Problem: A Historical Perspective and Deconstructing Cosmology.

In bright spiral galaxies, stars are usually 80% or so of the mass, gas only 20% or less. But in many dwarf galaxies, the mass ratio is reversed. These are often low surface brightness and challenging to observe. But it is a worthwhile endeavor, as their rotation curve is predicted by MOND with extraordinarily little freedom.
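To make “extraordinarily little freedom” concrete, here is a minimal sketch (mine, not code from Sanders’ paper) of how the MOND rotation curve follows from the Newtonian, baryons-only curve. I assume the common “simple” interpolation function, which is one choice among several:

```python
# Hedged sketch: MOND speed from the Newtonian (baryons-only) speed.
# With the simple mu(x) = x/(1+x), the MOND acceleration g solves
#   g^2 - g_N*g - g_N*a0 = 0,  i.e.  g = g_N/2 + sqrt(g_N^2/4 + g_N*a0).
from math import sqrt

A0 = 1.2e-10       # m/s^2, Milgrom's acceleration constant
KPC = 3.086e19     # meters per kiloparsec

def v_mond(r_kpc, v_newton_kms):
    """Predicted MOND rotation speed (km/s) at radius r, given the
    Newtonian speed implied by the observed baryons alone."""
    r = r_kpc * KPC
    g_n = (v_newton_kms * 1e3) ** 2 / r               # Newtonian acceleration
    g = 0.5 * g_n + sqrt(0.25 * g_n ** 2 + g_n * A0)  # MOND acceleration
    return sqrt(g * r) / 1e3                           # back to km/s

# Deep-MOND behavior: where g_N << a0, the predicted curve goes flat,
# so quadrupling the radius barely changes the speed (point-mass example):
print(v_mond(2.0, 20.0), v_mond(8.0, 10.0))
```

There is no mass-to-light knob in sight: once the gas distribution fixes the Newtonian curve, the prediction is set.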

Though gas rich galaxies do indeed provide an excellent test of MOND, nothing in astronomy is perfectly clean. The stellar mass-to-light ratio is an irreducible need-to-know parameter. We also need to know the distance to each galaxy, as we do not measure the gas mass directly, but rather the flux of the 21 cm line. The gas mass scales with flux and the square of the distance (see equation 7E7), so to get the gas mass right, we must first get the distance right. We also need to know the inclination of a galaxy as projected on the sky in order to correctly recover the rotation speed to which we’re fitting, as the observed line-of-sight Doppler velocity is only sin(i) of the full, in-plane rotation speed. The 1/sin(i) correction becomes increasingly sensitive to errors as i approaches zero (face-on galaxies).
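These dependences are easy to see explicitly. A short sketch using the standard 21 cm mass relation and the sin(i) deprojection (the numbers are illustrative, not DDO 154’s):

```python
# Sketch of standard textbook relations, not code from any paper.
from math import sin, radians

def hi_mass(flux_jy_kms, distance_mpc):
    """Atomic gas mass in solar masses from the integrated 21 cm flux:
    M_HI = 2.36e5 * D^2 * S  (D in Mpc, S in Jy km/s)."""
    return 2.36e5 * distance_mpc ** 2 * flux_jy_kms

def deprojected_speed(v_los_kms, incl_deg):
    """Full in-plane rotation speed from the line-of-sight speed."""
    return v_los_kms / sin(radians(incl_deg))

# A ~5% distance error is a ~10% gas-mass error (D^2 scaling):
print(hi_mass(10.0, 4.0) / hi_mass(10.0, 3.8))

# The same 2-degree inclination error matters far more near face-on:
print(deprojected_speed(30.0, 64.0) / deprojected_speed(30.0, 66.0))
print(deprojected_speed(30.0, 18.0) / deprojected_speed(30.0, 20.0))
```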

The mass-to-light ratio is a physical fit parameter that tells us something meaningful about the amount of stellar mass that produces the observed light. In contrast, for our purposes here, distance and inclination are “nuisance” parameters. These nuisance parameters can be, and generally are, measured independently from mass modeling. However, these measurements have their own uncertainties, so one has to be careful about taking these measured values as-is. One of the powerful aspects of Bayesian analysis is the ability to account for these uncertainties to allow for the distance to be a bit off the measured value, so long as it is not too far off, as quantified by the measurement uncertainties. This is what current graduate student Pengfei Li did in Li et al. (2018). The constraints on MOND are so strong in gas rich galaxies that often the nuisance parameters cannot be ignored, even when they’re well measured.
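The Bayesian idea can be sketched minimally (my own simplification, assuming Gaussian errors; not the actual machinery of Li et al. 2018): the distance becomes a free parameter of the fit, but a prior term penalizes it for straying from the measured value.

```python
# Sketch: a nuisance parameter with a Gaussian prior, in log space.
def log_likelihood(v_model, v_obs, v_err):
    """Chi-square term for the rotation curve points (Gaussian errors)."""
    return -0.5 * sum(((m - o) / e) ** 2 for m, o, e in zip(v_model, v_obs, v_err))

def log_prior_distance(d, d_measured, d_err):
    """Gaussian prior: the fit may nudge the distance, at a cost set by
    its measured uncertainty."""
    return -0.5 * ((d - d_measured) / d_err) ** 2

def log_posterior(v_model, v_obs, v_err, d, d_measured=4.04, d_err=0.08):
    """Likelihood and prior simply add in log space. The defaults are the
    DDO 154 distance measurement quoted in the text."""
    return log_likelihood(v_model, v_obs, v_err) + log_prior_distance(d, d_measured, d_err)

# Moving the distance from 4.04 to 3.87 Mpc costs 0.5*(0.17/0.08)^2 ≈ 2.3
# in log-posterior; the fit pays that only if the rotation curve improves more.
print(log_prior_distance(3.87, 4.04, 0.08))
```

The inclination gets the same treatment, with its own measured value and uncertainty.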

To illustrate what I’m talking about, let’s look at one famous example, DDO 154. This galaxy is over 90% gas. The stars (pictured above) just don’t matter much. If the distance and inclination are known, the MOND prediction for the rotation curve follows directly. Here is an example of a MOND fit from a recent paper:

The MOND fit to DDO 154 from Ren et al. (2018). The black points are the rotation curve data, the green line is the Newtonian expectation for the baryons, and the red line is their MOND fit.

This is terrible! The MOND fit – essentially a parameter-free prediction – misses all of the data. MOND is falsified. If one is inclined to hate MOND, as many seem to be, then one stops here. No need to think further.

If one is familiar with the ups and downs in the history of astronomy, one might not be so quick to dismiss it. Indeed, one might notice that the shape of the MOND prediction closely tracks the shape of the data. There’s just a little difference in scale. That’s kind of amazing for a theory that is wrong, especially when it is amplifying the green line to predict the red one: it needn’t have come anywhere close.

Here is the fit to the same galaxy, using the same data, already published in Li et al.:

DDO154_RAR_Li2018
The MOND fit to DDO 154 from Li et al. (2018) using the same data as above, as tabulated in SPARC.

Now we have a good fit, using the same data! How can this be so?

I have not checked what Ren et al. did to obtain their MOND fits, but having done this exercise myself many times, I recognize the slight offset they find as a typical consequence of holding the nuisance parameters fixed. What if the measured distance is a little off?

Distance estimates to DDO 154 in the literature range from 3.02 Mpc to 6.17 Mpc. The formally most accurate distance measurement is 4.04 ± 0.08 Mpc. In the fit shown here, we obtained 3.87 ± 0.16 Mpc. The error bars on these distances overlap, so they are the same number, to measurement accuracy. These data do not falsify MOND. They demonstrate that it is sensitive enough to tell the difference between 3.8 and 4.1 Mpc.

One will never notice this from a dark matter fit. Ren et al. also make fits with self-interacting dark matter (SIDM). The nifty thing about SIDM is that it makes quasi-constant density cores in dark matter halos. Halos of this form are not predicted by “ordinary” cold dark matter (CDM), but often give better fits than either MOND or the NFW halos of dark matter-only CDM simulations. For this galaxy, Ren et al. obtain the following SIDM fit.

DDO154_SIDM_180805695
The SIDM fit to DDO 154 from Ren et al.

This is a great fit. Goes right through the data. That makes it better, right?

Not necessarily. In addition to the mass-to-light ratio (and the nuisance parameters of distance and inclination), dark matter halo fits have at least two additional free parameters to describe the halo, such as its mass and core radius. These parameters are highly degenerate: one can obtain equally good fits for a range of mass-to-light ratios and core radii, as one makes up for what the other misses. Parameter degeneracy of this sort is usually a sign that there is too much freedom in the model. In this case, the data are adequately described by one parameter (the MOND fit M*/L, not counting the nuisances in common), so using three (M*/L, Mhalo, Rcore) is just an exercise in fitting a French curve. There is ample freedom to fit the data. As a consequence, you'll never notice that one of the nuisance parameters might be a tiny bit off.

In other words, you can fool a dark matter fit, but not MOND. Erwin de Blok and I demonstrated this 20 years ago. A common myth at that time was that “MOND is guaranteed to fit rotation curves.” This seemed patently absurd to me, given how it works: once you stipulate the distribution of baryons, the rotation curve follows from a simple formula. If the two don't match, they don't match. There is no guarantee that it'll work; it simply can't be forced.

As an illustration, Erwin and I tried to trick it. We took two galaxies that are identical in the Tully-Fisher plane (NGC 2403 and UGC 128) and swapped their mass distribution and rotation curve. These galaxies have the same total mass and the same flat velocity in the outer part of the rotation curve, but the detailed distribution of their baryons differs. If MOND can be fooled, this closely matched pair ought to do the trick. It does not.

NGC2403UGC128trickMOND
An attempt to fit MOND to a hybrid galaxy with the rotation curve of NGC 2403 and the baryon distribution of UGC 128. The mass-to-light ratio is driven to unphysical values (6 in solar units), but an acceptable fit is not obtained.

Our failure to trick MOND should not surprise anyone who bothers to look at the math involved. There is a one-to-one relation between the distribution of the baryons and the resulting rotation curve. If there is a mismatch between them, a fit cannot be obtained.
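To see what “follows from a simple formula” means, here is a minimal sketch using the “simple” interpolation function μ(x) = x/(1+x). The choice of μ is illustrative (different MOND fits use different interpolation functions), but the structure is the same in every case: given the Newtonian acceleration of the baryons, the observed acceleration follows algebraically, with no freedom to adjust.

```python
import numpy as np

A0 = 1.2e-10  # Milgrom's acceleration constant, m/s^2

def mond_acceleration(g_newton):
    """Invert mu(g/a0) * g = gN for the simple interpolation function
    mu(x) = x/(1+x), which gives the quadratic solution
    g = (gN + sqrt(gN^2 + 4*a0*gN)) / 2."""
    return 0.5 * (g_newton + np.sqrt(g_newton**2 + 4 * A0 * g_newton))

# Deep MOND limit (gN << a0): g -> sqrt(gN * a0), which is what makes
# rotation curves flatten and produces the Tully-Fisher relation.
gN = 1e-14
print(mond_acceleration(gN) / np.sqrt(gN * A0))  # close to 1

# Newtonian limit (gN >> a0): g -> gN, so high-acceleration regions
# behave conventionally.
gN = 1e-7
print(mond_acceleration(gN) / gN)  # close to 1
```

Because there is no halo to adjust, a mismatch between the baryon distribution and the rotation curve cannot be papered over: the formula returns whatever it returns.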

We also attempted to play this same trick on dark matter. The standard dark matter halo fitting function at the time was the pseudo-isothermal halo, which has a constant density core. It is very similar to the halos of SIDM and to the cored dark matter halos produced by baryonic feedback in some simulations. Indeed, that is the point of those efforts: they are trying to capture the success of cored dark matter halos in fitting rotation curve data.
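For reference, the pseudo-isothermal halo has density ρ(r) = ρ0 / (1 + (r/Rc)²) and a closed-form circular speed. The sketch below evaluates it with illustrative parameter values; the inner rise is set by the core density and the flat outer velocity by ρ0·Rc², which is how different parameter combinations (and the stellar M*/L) can trade off against each other over the limited radii a rotation curve actually probes.

```python
import numpy as np

G = 4.301e-6  # Newton's constant in kpc * (km/s)^2 / Msun

def v_iso(r_kpc, rho0, rc):
    """Rotation speed (km/s) of a pseudo-isothermal halo with central
    density rho0 (Msun/kpc^3) and core radius rc (kpc):
    V^2 = 4*pi*G*rho0*rc^2 * [1 - (rc/r)*arctan(r/rc)]."""
    x = np.asarray(r_kpc) / rc
    return np.sqrt(4 * np.pi * G * rho0 * rc**2 * (1 - np.arctan(x) / x))

rho0, rc = 1.0e7, 2.0  # illustrative values, not a fit to any galaxy

# Solid-body rise deep inside the core: V ~ sqrt(4*pi*G*rho0/3) * r
print(v_iso(0.01, rho0, rc))

# Flat outer curve: V -> sqrt(4*pi*G*rho0*rc^2) as r -> infinity
print(v_iso(1e4, rho0, rc))
```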

NGC2403UGC128trickDM
A fit to the hybrid galaxy with a cored (pseudo-isothermal) dark matter halo. A satisfactory fit is readily obtained.

Dark matter halos with a quasi-constant density core do indeed provide good fits to rotation curves. Too good. They are easily fooled, because they have too many degrees of freedom. They will fit pretty much any plausible data that you throw at them. This is why the SIDM fit to DDO 154 failed to flag distance as a potential nuisance. It can’t. You could double (or halve) the distance and still find a good fit.

This is why parameter degeneracy is bad. You get lost in parameter space. Once lost there, it becomes impossible to distinguish between successful, physically meaningful fits and fitting epicycles.

Astronomical data are always subject to improvement. For example, the THINGS project obtained excellent data for a sample of nearby galaxies. I made MOND fits to all the THINGS (and other) data for the MOND review Famaey & McGaugh (2012). Here’s the residual diagram, which has been on my web page for many years:

rcresid_mondfits
Residuals of MOND fits from Famaey & McGaugh (2012).

These are, by and large, good fits. The residuals have a well-defined peak centered on zero. DDO 154 was one of the THINGS galaxies; let's see what happens if we use those data.

DDO154mond_i66
The rotation curve of DDO 154 from THINGS (points with error bars). The Newtonian expectation for the stars is the green line; the gas is the blue line. The red line is the MOND prediction. Note that the gas greatly outweighs the stars beyond 1.5 kpc; the stellar mass-to-light ratio has very little leverage in this MOND fit.

The first thing one is likely to notice is that the THINGS data are much better resolved than the previous generation used above. The first thing I noticed was that THINGS had assumed a distance of 4.3 Mpc. This was prior to the measurement of 4.04 Mpc, so let's just start over from there. That gives the MOND prediction shown above.

And it is a prediction. I haven't adjusted any parameters yet. The mass-to-light ratio is set to the mean I expect for a star forming stellar population, 0.5 in solar units in the Spitzer 3.6 micron band, with D = 4.04 Mpc and i = 66° as tabulated by THINGS. The result is pretty good considering that no parameters have been harmed in the making of this plot. Nevertheless, MOND overshoots a bit at large radii.

Constraining the inclinations of gas rich dwarf galaxies like DDO 154 is a bit of a nightmare. Literature values range from 20 to 70 degrees. Seriously. THINGS itself allows the inclination to vary with radius; 66° is just a typical value. In the fit Pengfei obtained, i = 61°. Let's try that.

DDO154mond_i61
MOND fit to the THINGS data for DDO 154 with the inclination adjusted to the value found by Li et al. (2018).

The fit is now satisfactory. One tweak to the inclination, and we’re done. This tweak isn’t even a fit to these data; it was adopted from Pengfei’s fit to the above data. This tweak to the inclination is comfortably within any plausible assessment of the uncertainty in this quantity. The change in sin(i) corresponds to a mere 4% in velocity. I could probably do a tiny bit better with further adjustment – I have left both the distance and the mass-to-light ratio fixed – but that would be a meaningless exercise in statistical masturbation. The result just falls out: no muss, no fuss.
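The size of that inclination tweak is easy to check: deprojected velocities scale as 1/sin(i), so changing i from 66° to 61° rescales every velocity by the ratio of the sines, a little over 4%.

```python
import numpy as np

# Ratio of deprojected rotation speeds for the two inclinations:
# v(i=61) / v(i=66) = sin(66) / sin(61)
v_ratio = np.sin(np.radians(66.0)) / np.sin(np.radians(61.0))
print(f"{(v_ratio - 1) * 100:.1f}%")  # -> 4.5%
```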

Hence the point Bob Sanders makes. Given the distribution of gas, the rotation curve follows. And it works, over and over and over, within the bounds of the uncertainties on the nuisance parameters.

One cannot do the same exercise with dark matter. It has ample ability to fit rotation curve data once those are provided, but zero power to predict them. If all had been well with ΛCDM, the rotation curves of these galaxies would look like NFW halos. Or any number of other permutations that have been discussed over the years. In contrast, MOND makes one unique prediction (one that was not at all anticipated in dark matter), and that's what the data do. Out of the huge parameter space of plausible outcomes from the messy hierarchical formation of galaxies in ΛCDM, Nature picks the one that looks exactly like MOND.

star_trek_tv_spock_3_copy_-_h_2018
This outcome is illogical.

It is a bad sign for a theory when it can only survive by mimicking its alternative. This is the case here: ΛCDM must imitate MOND. There are now many papers asserting that it can do just this, but none of them were written before the data were provided. Indeed, I consider it problematic that clever people can come up with ways to imitate MOND with dark matter. What couldn't it imitate? If the data had all looked like technicolor space donkeys, we could probably find a way to make that so as well.

Cosmologists will rush to say “microwave background!” I have some sympathy for that, because I do not know how to explain the microwave background in a MOND-like theory. At least I don't pretend to, even though I have had more predictive success there than their entire community. But that would be a much longer post.

For now, note that the situation is even worse for dark matter than I have so far made it sound. In many dwarf galaxies, the rotation velocity exceeds that attributable to the baryons (with Newton alone) at practically all radii. By a lot. DDO 154 is a very dark matter dominated galaxy. The baryons should have squat to say about the dynamics. And yet, all you need to know to predict the dynamics is the baryon distribution. The baryonic tail wags the dark matter dog.

But wait, it gets better! If you look closely at the data, you will note a kink at about 1 kpc, another at 2, and yet another around 5 kpc. These kinks are apparent in both the rotation curve and the gas distribution. This is an example of Sancisi’s Law: “For any feature in the luminosity profile there is a corresponding feature in the rotation curve and vice versa.” This is a general rule, as Sancisi observed, but it makes no sense when the dark matter dominates. The features in the baryon distribution should not be reflected in the rotation curve.

The observed baryons orbit in a disk, on nearly circular orbits confined to the same plane. The dark matter moves on eccentric orbits oriented every which way to provide pressure support to a quasi-spherical halo. The baryons and the dark matter occupy very different regions of phase space, the six-dimensional volume of position and momentum. The two are not strongly coupled, communicating only through the weak influence of gravity in the standard CDM paradigm.

One of the first lessons of galaxy dynamics is that galaxy disks are subject to a variety of instabilities that grow bars and spiral arms. These are driven by disk self-gravity. The same features do not appear in elliptical galaxies because they are pressure supported, 3D blobs. They don’t have disks so they don’t have disk self-gravity, much less the features that lead to the bumps and wiggles observed in rotation curves.

Elliptical galaxies are a good visual analog for what dark matter halos are believed to be like. The orbits of dark matter particles are unable to sustain features like those seen in baryonic disks. They are featureless for the same reasons as elliptical galaxies. They don't have disks. A rotation curve dominated by a spherical dark matter halo should bear no trace of the features that are seen in the disk. And yet they're there, often enough for Sancisi to have remarked on it as a general rule.

It gets worse still. One of the original motivations for invoking dark matter was to stabilize galactic disks: a purely Newtonian disk of stars is not a stable configuration, yet the universe is chock full of long-lived spiral galaxies. The cure was to place them in dark matter halos.

The problem for dwarfs is that they have too much dark matter. The halo stabilizes disks by suppressing the formation of structures that stem from disk self-gravity. But you need some disk self-gravity to have the observed features. That can be tuned to work in bright spirals, but it fails in dwarfs because the halo is too massive. As a practical matter, there is no disk self-gravity in dwarfs – it is all halo, all the time. And yet, we do see such features. Not as strong as in big, bright spirals, but definitely present. Whenever someone tries to analyze this aspect of the problem, they inevitably come up with a requirement for more disk self-gravity in the form of unphysically high stellar mass-to-light ratios (something I predicted would happen). In contrast, this is entirely natural in MOND (see, e.g., Brada & Milgrom 1999 and Tiret & Combes 2008), where it is all disk self-gravity since there is no dark matter halo.

The net upshot of all this is that it doesn't suffice to mimic the radial acceleration relation, as many simulations now claim to do. That was not a natural part of CDM to begin with, but perhaps it can be done with smooth model galaxies. In most cases, such models lack the resolution to see the features observed in DDO 154 (and in NGC 1560, and in IC 2574, etc.). If they do attain such resolution, they had better not show such features, as that would violate the basic considerations outlined above. But then they wouldn't be able to describe this aspect of the data.

Simulators by and large seem to remain sanguine that this will all work out. Perhaps I have become too cynical, but I recall hearing that 20 years ago. And 15. And ten… basically, they’ve always assured me that it will work out even though it never has. Maybe tomorrow will be different. Or would that be the definition of insanity?