In order to agree on an interpretation, we first have to agree on the facts. Even when we agree on the facts, the available set of facts may admit multiple interpretations. This was an obvious and widely accepted truth early in my career*. Since then, the field has decayed into a haphazardly conceived set of unquestionable absolutes, based on a large but selectively curated subset of facts that gratuitously ignores whatever facts prove inconvenient.
Sadly, we seem to have entered a post-truth period in which facts are drowned out by propaganda. I went into science to get away from people who place faith before facts, and comfortable fictions ahead of uncomfortable truths. Unfortunately, a lot of those people seem to have followed me here. This manifests as people who quote what are essentially pro-dark matter talking points at me like I don’t understand LCDM, when all it really does is reveal that they are posers** who picked up on some common myths about the field without actually reading the relevant journal articles.
Indeed, a recent experience taught me a new psychology term: identity protective cognition. Identity protective cognition is the tendency for people in a group to selectively credit or dismiss evidence in patterns that reflect the beliefs that predominate in their group. When it comes to dark matter, the group happens to be a scientific one, but the psychology is the same: I’ve seen people twist themselves into logical knots to protect their belief in dark matter from being subject to critical examination. They do it without even recognizing that this is what they’re doing. I guess this is a human foible we cannot escape.
I’ve addressed these issues before, but here I’m going to start a series of posts on what I think some of the essential but underappreciated facts are. This is based on a talk that I gave at a conference on the philosophy of science in 2019, back when we had conferences, and published in Studies in History and Philosophy of Science. I paid the exorbitant open access fee (the journal changed its name – and publication policy – during the publication process), so you can read the whole thing all at once if you are eager. I’ve already written it to be accessible, so mostly I’m going to post it here in what I hope are digestible chunks, and may add further commentary if it seems appropriate.
Cosmology is the science of the origin and evolution of the universe: the biggest of big pictures. The modern picture of the hot big bang is underpinned by three empirical pillars: an expanding universe (Hubble expansion), Big Bang Nucleosynthesis (BBN: the formation of the light elements through nuclear reactions in the early universe), and the relic radiation field (the Cosmic Microwave Background: CMB) (Harrison, 2000; Peebles, 1993). The discussion here will take this framework for granted.
The three empirical pillars fit beautifully with General Relativity (GR). Making the simplifying assumptions of homogeneity and isotropy, Einstein’s equations can be applied to treat the entire universe as a dynamical entity. As such, it is compelled either to expand or contract. Running the observed expansion backwards in time, one necessarily comes to a hot, dense, early phase. This naturally explains the CMB, which marks the transition from an opaque plasma to a transparent gas (Sunyaev and Zeldovich, 1980; Weiss, 1980). The abundances of the light elements can be explained in detail with BBN provided the universe expands in the first few minutes as predicted by GR when radiation dominates the mass-energy budget of the universe (Boesgaard & Steigman, 1985).
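For reference, this is the content of the Friedmann equation that follows from Einstein’s equations under those symmetry assumptions:

$$\left(\frac{\dot{a}}{a}\right)^2 = \frac{8\pi G}{3}\rho - \frac{kc^2}{a^2},$$

where a(t) is the cosmic scale factor, ρ the total mass-energy density, and k the spatial curvature. With ρ > 0 there is no static solution: the universe must expand or contract.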
The marvelous consistency of these early universe results with the expectations of GR builds confidence that the hot big bang is the correct general picture for cosmology. It also builds overconfidence that GR is completely sufficient to describe the universe. Maintaining consistency with modern cosmological data is only possible with the addition of two auxiliary hypotheses: dark matter and dark energy. These invisible entities are an absolute requirement of the current version of the most-favored cosmological model, ΛCDM. The very name of this model is born of these dark materials: Λ is Einstein’s cosmological constant, of which ‘dark energy’ is a generalization, and CDM is cold dark matter.
Of the two, dark matter plays the more intimate and essential role in galaxy formation. The term ‘dark matter’ is dangerously crude, as it can reasonably be used to mean anything that is not seen. In the cosmic context, there are at least two forms of unseen mass: normal matter that happens not to glow in a way that is easily seen — not all ordinary material need be associated with visible stars — and non-baryonic cold dark matter. It is the latter form of unseen mass that is thought to dominate the mass budget of the universe and play a critical role in galaxy formation.
Cold Dark Matter
Cold dark matter is some form of slow moving, non-relativistic (‘cold’) particulate mass that is not composed of normal matter (baryons). Baryons are the family of particles that include protons and neutrons. As such, they compose the bulk of the mass of normal matter, and it has become conventional to use this term to distinguish between normal, baryonic matter and the non-baryonic dark matter.
The distinction between baryonic and non-baryonic dark matter is no small thing. Non-baryonic dark matter must be a new particle that resides in a new ‘dark sector’ that is completely distinct from the usual stable of elementary particles. We do not just need some new particle, we need one (or many) that reside in some sector beyond the framework of the stubbornly successful Standard Model of particle physics. Whatever the solution to the mass discrepancy problem turns out to be, it requires new physics.
The cosmic dark matter must be non-baryonic for two basic reasons. First, the mass density of the universe measured gravitationally (Ω_m ≈ 0.3, e.g., Faber and Gallagher, 1979; Davis et al., 1980, 1992) clearly exceeds the mass density in baryons as constrained by BBN (Ω_b ≈ 0.05, e.g., Walker et al., 1991). There is something gravitating that is not ordinary matter: Ω_m > Ω_b.
The second reason follows from the absence of large fluctuations in the CMB (Peebles and Yu, 1970; Silk, 1968; Sunyaev and Zeldovich, 1980). The CMB is extraordinarily uniform in temperature across the sky, varying by only ~1 part in 10⁵ (Smoot et al., 1992). These small temperature variations correspond to variations in density. Gravity is an attractive force; it will make the rich grow richer. Small density excesses will tend to attract more mass, making them larger, attracting more mass, and leading to the formation of large scale structures, including galaxies. But gravity is also a weak force: this process takes a long time. In the long but finite age of the universe, gravity plus known baryonic matter does not suffice to go from the initially smooth, highly uniform state of the early universe to the highly clumpy, structured state of the local universe (Peebles, 1993). The solution is to boost the process with an additional component of mass — the cold dark matter — that gravitates without interacting with the photons, thus getting a head start on the growth of structure while not aggravating the amplitude of temperature fluctuations in the CMB.
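The back-of-the-envelope version of this argument: in the matter-dominated era, small perturbations grow roughly in proportion to the scale factor, δ ∝ a. The universe has expanded by a factor of about 1100 since the CMB was released, so fluctuations that start at one part in 10⁵ can only have grown to

$$\delta \sim 10^{-5} \times 1100 \sim 10^{-2},$$

a couple of orders of magnitude short of the δ ~ 1 needed for the nonlinear structures we see, if the baryons had to do it alone.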
Taken separately, one might argue away the need for dark matter. Taken together, these two distinct arguments convinced nearly everyone, including myself, of the absolute need for non-baryonic dark matter. Consequently, CDM became established as the leading paradigm during the 1980s (Peebles, 1984; Steigman and Turner, 1985). The paradigm has snowballed since that time, the common attitude among cosmologists being that CDM has to exist.
From an astronomical perspective, the CDM could be any slow-moving, massive object that does not interact with photons nor participate in BBN. The range of possibilities is at once limitless yet highly constrained. Neutrons would suffice if they were stable in vacuum, but they are not. Primordial black holes are a logical possibility, but if made of normal matter, they must somehow form in the first second after the Big Bang to not impair BBN. At this juncture, microlensing experiments have excluded most plausible mass ranges that primordial black holes could occupy (Mediavilla et al., 2017). It is easy to invent hypothetical dark matter candidates, but difficult for them to remain viable.
From a particle physics perspective, the favored candidate is a Weakly Interacting Massive Particle (WIMP: Peebles, 1984; Steigman and Turner, 1985). WIMPs are expected to be the lightest stable supersymmetric partner particle that resides in the hypothetical supersymmetric sector (Martin, 1998). The WIMP has been the odds-on favorite for so long that it is often used synonymously with the more generic term ‘dark matter.’ It is the hypothesized particle that launched a thousand experiments. Experimental searches for WIMPs have matured over the past several decades, making extraordinary progress in not detecting dark matter (Aprile et al., 2018). Virtually all of the parameter space in which WIMPs had been predicted to reside (Trotta et al., 2008) is now excluded. Worse, the existence of the supersymmetric sector itself, once seemingly a sure thing, remains entirely hypothetical, and appears at this juncture to be a beautiful idea that nature declined to implement.
In sum, we must have cold dark matter for both galaxies and cosmology, but we have as yet no clue to what it is.
* There is a trope that late in their careers, great scientists come to the opinion that everything worth discovering has been discovered, because they themselves already did everything worth doing. That is not a concern I have – I know we haven’t discovered all there is to discover. Yet I see no prospect for advancing our fundamental understanding simply because there aren’t enough of us pulling in the right direction. Most of the community is busy barking up the wrong tree, and refuses to be distracted from their focus on the invisible squirrel that isn’t there.
** Many of these people are the product of the toxic culture that Simon White warned us about. They wave the sausage of galaxy formation and feedback like a magic wand that excuses all faults while being proudly ignorant of how the sausage was made. Bitch, please. I was there when that sausage was made. I helped make the damn sausage. I know what went into it, and I recognize when it tastes wrong.
This expression exactly depicts the progression of the radial acceleration relation. Some people were ahead of this curve, others are still behind it, but it quite accurately captures the mass sociology. This is how we react to startling new facts.
For quotation purists, I’m not sure exactly what the original phrasing was. I have paraphrased it to be succinct and have substituted orthodoxy for religion, because even scientists can have orthodoxies: holy cows that must not be slaughtered.
I might even add a precursor stage zero to the list above:
0. It goes unrecognized.
This is to say that if a new fact is sufficiently startling, we don’t just disbelieve it (stage 1); at first we fail to see it at all. We lack the cognitive framework to even recognize how important it is. An example is provided by the 1941 detection of the microwave background by Andrew McKellar. In retrospect, this is as persuasive as the 1964 detection of Penzias and Wilson to which we usually ascribe the discovery. At the earlier time, there was simply no framework for recognizing what it was that was being detected. It appears to me that Penzias and Wilson didn’t know what they were looking at either until Peebles explained it to them.
The radial acceleration relation was first posed as the mass discrepancy-acceleration relation. They’re fundamentally the same thing, just plotted in a slightly different way. The mass discrepancy-acceleration relation shows the ratio of total mass to that which is visible. This is basically the ratio of the observed acceleration to that predicted by the observed baryons. This is useful to see how much dark matter is needed, but by construction the axes are not independent, as both measured quantities are used in forming the ratio.
The radial acceleration relation shows independent observations along each axis: observed vs. predicted acceleration. Though measured independently, they are not physically independent, as the baryons contribute some to the total observed acceleration – they do have mass, after all. One can construct a halo acceleration relation by subtracting the baryonic contribution away from the total; in principle the remainders are physically independent. Unfortunately, the axes again become observationally codependent, and the uncertainties blow up, especially in the baryon dominated regime. Which of these depictions is preferable depends a bit on what you’re looking to see; here I just want to note that they are the same information packaged somewhat differently.
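To make the bookkeeping concrete, here is a minimal sketch of how the same rotation curve measurements feed both plots; the numbers are made up for illustration.

```python
import numpy as np

# Illustrative numbers only, not real measurements.
r = np.array([2.0, 4.0, 6.0, 8.0, 10.0])          # radii in kpc
v_obs = np.array([60.0, 80.0, 90.0, 95.0, 97.0])  # observed rotation speed, km/s
v_bar = np.array([55.0, 60.0, 55.0, 50.0, 45.0])  # speed predicted by stars+gas, km/s

kpc = 3.086e19  # meters per kpc
g_obs = (v_obs * 1e3) ** 2 / (r * kpc)  # observed centripetal acceleration, m/s^2
g_bar = (v_bar * 1e3) ** 2 / (r * kpc)  # acceleration predicted by the baryons

# Radial acceleration relation: g_obs vs. g_bar (independently measured axes).
# Mass discrepancy-acceleration relation: the ratio vs. acceleration; the
# ratio (v_obs/v_bar)^2 = g_obs/g_bar is the inferred mass discrepancy, so
# both measurements appear in one axis by construction.
mass_discrepancy = g_obs / g_bar
```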
To the best of my knowledge, the first mention of the mass discrepancy-acceleration relation in the scientific literature is by Sanders (1990). Its existence is explicit in MOND (Milgrom 1983), but here it is possible to draw a clear line between theory and data. I am only speaking of the empirical relation as it appears in the data, irrespective of anything specific to MOND.
I met Bob Sanders, along with many other talented scientists, in a series of visits to the University of Groningen in the early 1990s. Despite knowing him and having talked to him about rotation curves, I was unaware that he had done this.
Stage 0: It goes unrecognized.
For me, stage one came later in the decade at the culmination of a several years’ campaign to examine the viability of the dark matter paradigm from every available perspective. That’s a long paper, which nevertheless drew considerable praise from many people who actually read it. If you go to the bother of reading it today, you will see the outlines of many issues that are still debated and others that have been forgotten (e.g., the fine-tuning issues).
Around this time (1998), the dynamicists at Rutgers were organizing a meeting on galaxy dynamics, and asked me to be one of the speakers. I couldn’t possibly discuss everything in the paper in the time allotted, so was looking for a way to show the essence of the challenge the data posed. Consequently, I reinvented the wheel, coming up with the mass discrepancy-acceleration relation. Here I show the same data that I had then in the form of the radial acceleration relation:
I recognize this version of the plot as having been made by Federico Lelli. I’ve made this plot many times, but this is the version I came across first, and it is better than mine in that the opacity of the points illustrates where the data are concentrated. I had been working on low surface brightness galaxies; these have low accelerations, so that part of the plot is well populated.
The data show a clear correlation. By today’s standards, it looks crude. Going on what we had then, it was fantastic. Correlations practically never look this good in extragalactic astronomy, and they certainly don’t happen by accident. Low quality data can hide a correlation – uncertainties cause scatter – but they can’t create a correlation where one doesn’t exist.
I showed the same result later that year (1998) at a meeting on the campus of the University of Maryland where I was a brand new faculty member. It was a much shorter presentation, so I didn’t have time to justify the context or explain much about the data. Contrary to the reception at Rutgers where I had adequate time to speak, the hostility of the audience to the result was palpable, their stony silence eloquent. They didn’t want to believe it, and plenty of people got busy questioning the data.
Stage 1: It is not true.
I spent the next five years expanding and improving the data. More rotation curves became available thanks to the work of many, particularly Erwin de Blok, Marc Verheijen, and Rob Swaters. That was great, but the more serious limitation was how well we could measure the stellar mass distribution needed to predict the baryonic acceleration.
The mass models we could build at the time were based on optical images. A mass model takes the observed light distribution, assigns a mass-to-light ratio, and numerically solves the Poisson equation to obtain the gravitational force corresponding to the observed stellar mass distribution. This is how we obtain the stellar contribution to the predicted baryonic force; the same procedure is applied to the observed gas distribution. The blue part of the spectrum is the best place in which to observe low contrast, low surface brightness galaxies as the night sky is darkest there, at least during new moon. That’s great for measuring the light distribution, but what we want is the stellar mass distribution. The mass-to-light ratio is expected to have a lot of scatter in the blue band simply from the happenstance of recent star formation, which makes bright blue stars that are short-lived. If there is a stochastic uptick in the star formation rate, then the mass-to-light ratio goes down because there are lots of bright stars. Wait a few hundred million years and these die off, so the mass-to-light ratio gets bigger (in the absence of further new star formation). The time-integrated stellar mass may not change much, but the amount of blue light it produces does. Consequently, we expect to see well-observed galaxies trace distinct lines in the radial acceleration plane, even if there is a single universal relation underlying the phenomenon. This happens simply because we expect to get M*/L wrong from one galaxy to the next: in 1998, I had simply assumed all galaxies had the same M*/L for lack of any better prescription. Clearly, a better prescription was warranted.
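A minimal sketch of why M*/L matters so much, assuming an exponential disk and using a crude spherical enclosed-mass approximation in place of a proper numerical Poisson solution (the function and parameter names here are my own, not any real pipeline):

```python
import numpy as np

G = 4.301e-6  # Newton's constant in kpc (km/s)^2 / Msun

def stellar_velocity(r, L_disk, R_d, ML):
    """Stellar contribution to the rotation curve (km/s).

    r      : radii in kpc
    L_disk : total disk luminosity in Lsun
    R_d    : exponential scale length in kpc
    ML     : assumed stellar mass-to-light ratio, Msun/Lsun
    """
    x = r / R_d
    # Enclosed mass of an exponential disk, treated spherically for brevity;
    # a real mass model solves the Poisson equation for the disk geometry.
    M_enc = ML * L_disk * (1.0 - (1.0 + x) * np.exp(-x))
    return np.sqrt(G * M_enc / r)

# The point in the text: the predicted stellar speed scales as sqrt(ML),
# so a misjudged mass-to-light ratio shifts a galaxy's predicted baryonic
# acceleration coherently at every radius.
r = np.linspace(0.5, 15.0, 30)
v_ml1 = stellar_velocity(r, 5e9, 3.0, 1.0)
v_ml2 = stellar_velocity(r, 5e9, 3.0, 2.0)  # everywhere sqrt(2) times v_ml1
```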
In those days, I traveled through Tucson to observe at Kitt Peak with some frequency. On one occasion, I found myself with a few hours to kill between coming down from the mountain and heading to the airport. I wandered over to the Steward Observatory at the University of Arizona to see who I might see. A chance meeting in the wild west: I encountered Eric Bell and Roelof de Jong, who were postdocs there at the time. I knew Eric from his work on the stellar populations of low surface brightness galaxies, an interest closely aligned with my own, and Roelof from my visits to Groningen.
As we got to talking, Eric described to me work they were doing on stellar populations, and how they thought it would be possible to break the age-metallicity degeneracy using near-IR colors in addition to optical colors. They were mostly focused on improving the age constraints on stars in LSB galaxies, but as I listened, I realized they had constructed a more general, more powerful tool. At my encouragement (read their acknowledgements), they took on this more general task, ultimately publishing the classic Bell & de Jong (2001). In it, they built a table that enabled one to look up the expected mass-to-light ratio of a complex stellar population – one actively forming stars – as a function of color. This was a big step forward over my educated guess of a constant mass-to-light ratio: there was now a way to use a readily observed property, color, to improve the estimated M*/L of each galaxy in a well-calibrated way.
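The idea, in schematic form; the coefficients below are placeholders for illustration, not the values from the published table:

```python
# Color-based mass-to-light estimator in the spirit of Bell & de Jong (2001):
# log10(M*/L) in a given band is approximately linear in an optical color.
def mass_to_light(b_minus_v, a=-0.9, b=1.8):
    """Placeholder coefficients, roughly the right order for the B band."""
    return 10.0 ** (a + b * b_minus_v)

# A blue disk with recent star formation gets a lower M*/L than a red one:
print(mass_to_light(0.4))  # ~0.66 for a blue galaxy
print(mass_to_light(0.8))  # ~3.5 for a redder galaxy
```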
Combining the new stellar population models with all the rotation curves then available, I obtained an improved mass discrepancy-acceleration relation:
Again, the relation is clear, but with scatter. Even with the improved models of Bell & de Jong, some individual galaxies have M*/L that are wrong – that’s inevitable in this game. What you cannot know is which ones! Note, however, that there are now 74 galaxies in this plot, and almost all of them fall on top of each other where the point density is large. There are some obvious outliers; those are presumably just that: the trees that fall outside the forest because of the expected scatter in M*/L estimates.
I tried a variety of prescriptions for M*/L in addition to that of Bell & de Jong. Though they differed in texture, they all told a consistent story. A relation was clearly present; only its detailed form varied with the adopted prescription.
The prescription that minimized the scatter in the relation was the M*/L obtained in MOND fits. That’s a tautology: by construction, a MOND fit finds the M*/L that puts a galaxy on this relation. However, we can generalize the result. Maybe MOND is just a weird, unexpected way of picking a number that has this property; it doesn’t have to be the true mass-to-light ratio in nature. But one can then define a ratio Q that relates the “true” mass-to-light ratio to the number that gives a MOND fit. They don’t have to be identical, but MOND does return M*/L that are reasonable in terms of stellar populations, so Q ~ 1. Individual values could vary, and the mean could be a bit more or less than unity, but not radically different. One thing that impressed me at the time about the MOND fits (most of which were made by Bob Sanders) was how well they agreed with the stellar population models, recovering the correct amplitude, the correct dependence on color in different bandpasses, and also giving the expected amount of scatter (more in the blue than in the near-IR).
The obvious interpretation is that we should take seriously a theory that obtains good fits with a single free parameter that checks out admirably well with independent astrophysical constraints, in this case the M*/L expected for stellar populations. But I knew many people would not want to do that, so I defined Q to generalize to any M*/L in any (dark matter) context one might want to consider.
Indeed, Q allows us to write a general expression for the rotation curve of the dark matter halo (essentially the HAR alluded to above) in terms of that of the stars and gas:
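Schematically, with x = g/a₀ and modulo details of exactly how Q enters, it reads

$$V_{\rm halo}^2(r) = \left[Q\,V_*^2(r) + V_{\rm gas}^2(r)\right]\left[\frac{1}{\mu(x)} - 1\right].$$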
The stars and the gas are observed, and μ is the MOND interpolation function assumed in the fit that leads to Q. Except now the interpolation function isn’t part of some funny new theory; it is just the shape of the radial acceleration relation – a relation that is there empirically. The only fit factor between these data and any given model is Q – a single number of order unity. This does leave some wiggle room, but not much.
I went off to a conference to describe this result. At the 2006 meeting Galaxies in the Cosmic Web in New Mexico, I went out of my way at the beginning of the talk to show that even if we ignore MOND, this relation is present in the data, and it provides a strong constraint on the required distribution of dark matter. We may not know why this relation happens, but we can use it, modulo only the modest uncertainty in Q.
Having bent over backwards to distinguish the data from the theory, I was disappointed when, immediately at the end of my talk, prominent galaxy formation theorist Anatoly Klypin loudly shouted
“We don’t have to explain MOND!”
But you do have to explain the data. The problem was and is that the data look like MOND. It is easy to conflate one with the other; I have noticed that a lot of people have trouble keeping the two separate. Just because you don’t like the theory doesn’t mean that the data are wrong. What Anatoly was saying was that
2. It is contrary to orthodoxy.
Despite my having phrased the result in a way that would be useful to galaxy formation theorists, they did not, by and large, claim to explain it at the time – it was contrary to orthodoxy, so it didn’t need to be explained. Looking at the list of papers that cite this result, the early adopters were not the target audience of galaxy formation theorists, but rather others citing it to say variations of “no way dark matter explains this.”
At this point, it was clear to me that further progress required a better way to measure the stellar mass distribution. Looking at the stellar population models, the best hope was to build mass models from near-infrared rather than optical data. The near-IR is dominated by old stars, especially red giants. Galaxies that have been forming stars actively for a Hubble time tend towards a quasi-equilibrium in which red giants are replenished by stellar evolution at about the same rate they move on to the next phase. One therefore expects the mass-to-light ratio to be more nearly constant in the near-IR. Not perfectly so, of course, but a 2 or 3 micron image is as close to a map of the stellar mass of a galaxy as we’re likely to get.
Around this time, the University of Maryland had begun a collaboration with Kitt Peak to build a big infrared camera, NEWFIRM, for the 4m telescope. Rob Swaters was hired to help write software to cope with the massive data flow it would produce. The instrument was divided into quadrants, each of which had a field of view sufficient to hold a typical galaxy. When it went on the telescope, we developed an efficient observing method that I called “four-shooter”, shuffling the target galaxy from quadrant to quadrant so that in processing we could remove the numerous instrumental artifacts intrinsic to its InSb detectors. This eventually became one of the standard observing modes in which the instrument was operated.
I was optimistic that we could make rapid progress, and at first we did. But despite all the work, despite all the active cooling involved, we were still on the ground. The night sky was painfully bright in the IR. Indeed, the thermal component dominated, so we could observe during full moon. To an observer of low surface brightness galaxies attuned to any hint of scattered light from so much as a crescent moon, I cannot describe how discombobulating it was to walk outside the dome and see the full fricking moon. So bright. So wrong. And that wasn’t even the limiting factor: the thermal background was.
We had hit a surface brightness wall, again. We could do the bright galaxies this way, but the LSBs that sample the low acceleration end of the radial acceleration relation were rather less accessible. Not inaccessible, but there was a better way.
The Spitzer Space Telescope was active at this time. Jim Schombert and I started winning time to observe LSB galaxies with it. We discovered that space is dark. There was no atmosphere to contend with. No scattered light from the clouds or the moon or the OH lines that afflict that part of the sky spectrum. No ground-level warmth. The data were fantastic. In some sense, they were too good: the biggest headache we faced was blotting out all the background galaxies that shone right through the optically thin LSB galaxies.
Still, it took a long time to collect and analyze the data. We were starting to get results by the early-teens, but it seemed like it would take forever to get through everything I hoped to accomplish. Fortunately, when I moved to Case Western, I was able to hire Federico Lelli as a postdoc. Federico’s involvement made all the difference. After many months of hard, diligent, and exacting work, he constructed what is now the SPARC database. Finally all the elements were in place to construct an empirical radial acceleration relation with absolutely minimal assumptions about the stellar mass-to-light ratio.
In parallel with the observational work, Jim Schombert had been working hard to build realistic stellar population models that extended to the 3.6 micron band of Spitzer. Spitzer had been built to look redwards of this, further into the IR. 3.6 microns was its shortest wavelength passband. But most models at the time stopped at the K-band, the 2.2 micron band that is the reddest passband that is practically accessible from the ground. The two bands contain pretty much the same information, but we still need to calculate the band-specific value of M*/L.
Being a thorough and careful person, Jim considered not just the star formation history of a model stellar population as a variable, and not just its average metallicity, but also the metallicity distribution of its stars, making sure that these were self-consistent with the star formation history. Realistic metallicity distributions are skewed; it turns out that this subtle effect tends to counterbalance the color dependence of the age effect on M*/L in the near-IR part of the spectrum. The net result is that we expect M*/L to be very nearly constant for all late type galaxies.
This is the best possible result. To a good approximation, we expected all of the galaxies in the SPARC sample to have the same mass-to-light ratio. What you see is what you get. No variable M*/L, no equivocation, just data in, result out.
We did still expect some scatter, as that is an irreducible fact of life in this business. But even that we expected to be small, between 0.1 and 0.15 dex (roughly 25 – 40%). Still, we expected the occasional outlier, galaxies that sit well off the main relation just because our nominal M*/L didn’t happen to apply in that case.
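For the conversion between dex and percentages:

$$10^{0.10} \approx 1.26, \qquad 10^{0.15} \approx 1.41,$$

i.e., roughly 26% to 41% scatter about the mean.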
One day as I walked past Federico’s office, he called for me to come look at something. He had plotted all the data together assuming a single M*/L. There… were no outliers. The assumption of a constant M*/L in the near-IR didn’t just work, it worked far better than we had dared to hope. The relation leapt straight out of the data:
Over 150 galaxies, with nearly 2700 resolved measurements among them, each galaxy with its own distinctive mass distribution, all pile on top of each other without effort. There was plenty of effort in building the database, but once it was there, the result appeared, no muss, no fuss. No fitting or fiddling. Just the measurements and our best estimate of the mean M*/L, applied uniformly to every individual galaxy in the sample. The scatter was only 0.12 dex, within the range expected from the population models.
No MOND was involved in the construction of this relation. It may look like MOND, but we neither use MOND nor need it in any way to see the relation. It is in the data. Perhaps this is the sort of result for which we would have to invent MOND if it did not already exist. But the dark matter paradigm is very flexible, and many papers have since appeared that claim to explain the radial acceleration relation. We have reached
3. We knew it all along.
On the one hand, this is good: the community is finally engaging with a startling fact that has been pointedly ignored for decades. On the other hand, many of the claims to explain the radial acceleration relation are transparently incorrect on their face, being nothing more than elaborations of models I considered and discarded as obviously unworkable long ago. They do not provide a satisfactory explanation of the predictive power of MOND, and inevitably fail to address important aspects of the problem, like disk stability. Rather than grapple with the deep issues the new and startling fact poses, it has become fashionable to simply assert that one’s favorite model explains the radial acceleration relation, and does so naturally.
There is nothing natural about the radial acceleration relation in the context of dark matter. Indeed, it is difficult to imagine a less natural result – hence stages one and two. So on the one hand, I welcome the belated engagement, and am willing to consider serious models. On the other hand, if someone asserts that this is natural and that we expected it all along, then the engagement isn’t genuine: they’re just fooling themselves.
Early Days. This was one of Vera Rubin’s favorite expressions. I always had a hard time with it, as many things are very well established. Yet it seems that we have yet to wrap our heads around the problem. Vera’s daughter, Judy Young, once likened the situation to the parable of the blind men and the elephant. Much is known, yes, but the problem is so vast that each of us can perceive only a part of the whole, and the whole may be quite different from the part that is right before us.
So I guess Vera is right as always: these remain Early Days.
It’s early in the new year, so what better time to violate my own resolutions? I prefer to be forward-looking and not argue over petty details, or chase wayward butterflies. But sometimes the devil is in the details, and the occasional butterfly can be entertaining if distracting. Today’s butterfly is the galaxy AGC 114905, which has recently been in the news.
There are a couple of bandwagons here: one to rebrand very low surface brightness galaxies as ultradiffuse, and another to get overly excited when these types of galaxies appear to lack dark matter. The nomenclature is terrible, but that’s normal for astronomy so I would overlook it, except that in this case it gives the impression that there is some new population of galaxies behaving in an unexpected fashion, when instead it looks to me like the opposite is the case. The extent to which there are galaxies lacking dark matter is fundamental to our interpretation of the acceleration discrepancy (aka the missing mass problem), so bears closer scrutiny. The evidence for galaxies devoid of dark matter is considerably weaker than the current bandwagon portrays.
If it were just one butterfly (e.g., NGC 1052-DF2), I wouldn’t bother. Indeed, it was that specific case that made me resolve to ignore such distractions as a waste of time. I’ve seen this movie literally hundreds of times, I know how it goes:
Observations of this one galaxy falsify MOND!
Hmm, doing the calculation right, that’s what MOND predicts.
OK, but better data shrink the error bars and now MOND is falsified.
Are you sure about…?
Yes. We like this answer, let’s stop thinking about it now.
As the data continue to improve, it approaches what MOND predicts.
Over and over again. DF44 is another example that has followed this trajectory, and there are many others. This common story is not widely known – people lose interest once they get the answer they want. Irrespective of whether we can explain this weird case or that, there is a deeper story here about data analysis and interpretation that seems not to be widely appreciated.
My own experience inevitably colors my attitude about this, as it does for us all, so let’s start thirty years ago when I was writing a dissertation on low surface brightness (LSB) galaxies. I did many things in my thesis, most of them well. One of the things I tried to do then was derive rotation curves for some LSB galaxies. This was not the main point of the thesis, and arose almost as an afterthought. It was also not successful, and I did not publish the results because I didn’t believe them. It wasn’t until a few years later, with improved data, analysis software, and the concerted efforts of Erwin de Blok, that we started to get a handle on things.
The thing that really bugged me at the time was not the Doppler measurements, but the inclinations. One has to correct the observed velocities by the inclination of the disk, 1/sin(i). The inclination can be constrained by the shape of the image and by the variation of velocities across the face of the disk. LSB galaxies presented raggedy images and messy velocity fields. I found it nigh on impossible to constrain their inclinations at the time, and it remains a frequent struggle to this day.
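Both effects are easy to quantify in the thin-disk approximation; a short sketch (my own toy functions, not the analysis software we actually used):

```python
import numpy as np

def inclination_from_axis_ratio(b_over_a):
    """Thin, circular disk: projected axis ratio b/a = cos(i)."""
    return np.degrees(np.arccos(b_over_a))

def deproject(v_los, i_deg):
    """Correct a line-of-sight rotation speed by 1/sin(i)."""
    return v_los / np.sin(np.radians(i_deg))

# An intrinsically oval feature (like a bar) with axis ratio 0.5 mimics a
# 60 degree inclination even when the disk is actually face-on:
print(inclination_from_axis_ratio(0.5))   # 60.0

# And the deprojection blows up for nearly face-on systems, so a small
# error in i is a big error in velocity:
for i in (30.0, 20.0, 10.0):
    print(i, round(deproject(50.0, i)))   # 100, 146, 288 km/s
```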
Here is an example of the LSB galaxy F577-V1 that I find lurking around on disk from all those years ago:
A uniform disk projected on the sky at some inclination will have a fixed corresponding eccentricity, with zero being the limit of a circular disk seen perfectly face-on (i = 0). Do you see a constant value of the eccentricity in the graph above? If you say yes, go get your eyes checked.
What we see in this case is a big transition from a fairly eccentric disk to one that is more nearly face on. The disk doesn’t suddenly warp; the problem is that the assumption of a uniform disk is invalid. This galaxy has a bar – a quasi-linear feature, common in many spiral galaxies, that is supported by non-circular orbits. Even face-on, the bar will look elongated simply because it is. Indeed, the sudden change in eccentricity is one way to define the end of the bar, which the human eye-brain can do easily by looking at the image. So in a case like this, one might adopt the inclination from the outer points, and that might even be correct. But note that there are spiral arms along the outer edge that is visible to the eye, so it isn’t clear that even these isophotes are representative of the shape of the underlying disk. Worse, we don’t know what happens beyond the edge of the data; the shape might settle down at some other level that we can’t see.
This was so frustrating, I swore never to have anything to do with galaxy kinematics ever again. Over 50 papers on the subject later, all I can say is D’oh! Repeatedly.
Bars are rare in LSB galaxies, but it struck me as odd that we saw any at all. We discovered unexpectedly that they were dark matter dominated – the inferred dark halo outweighs the disk, even within the edge defined by the stars – but that meant that the disks should be stable against the formation of bars. My colleague Chris Mihos agreed, and decided to look into it. The answer was yes, LSB galaxies should be stable against bar formation, at least internally generated bars. Sometimes bars are driven by external perturbations, so we decided to simulate the close passage of a galaxy of similar mass – basically, whack it real hard and see what happens:
This was a conventional simulation, with a dark matter halo constructed to be consistent with the observed properties of the LSB galaxy UGC 128. The results are not specific to this case; it merely provides numerical corroboration of the more general case that we showed analytically.
Consider the image above in the context of determining galaxy inclinations from isophotal shapes. We know this object is face-on because we can control our viewing angle in the simulation. However, we would not infer i=0 from this image. If we didn’t know it had been perturbed, we would happily infer a substantial inclination – in this case, easily as much as 60 degrees! This is an intentionally extreme case, but it illustrates how a small departure from a purely circular shape can be misinterpreted as an inclination. This is a systematic error, and one that usually makes the inclination larger than it is: it is possible to appear oval when face-on, but it is not possible to appear more face-on than perfectly circular.
Around the same time, Erwin and I were making fits to the LSB galaxy data – with both dark matter halos and MOND. By this point in my career, I had deeply internalized that the data for LSB galaxies were never perfect. So we sweated every detail, and worked through every “what if?” This was a particularly onerous task for the dark matter fits, which could do many different things if this or that were assumed – we discussed all the plausible possibilities at the time. (Subsequently, a rich literature sprang up discussing many unreasonable possibilities.) By comparison, the MOND fits were easy. They had fewer knobs, and in 2/3 of the cases they simply worked, no muss, no fuss.
For the other 1/3 of the cases, we noticed that the shape of the MOND-predicted rotation curves was usually right, but the amplitude was off. How could it work so often, and yet miss in this weird way? That sounded like a systematic error, and the inclination was the most obvious culprit, with 1/sin(i) making a big difference for small inclinations. So we decided to allow this as a fit parameter, to see whether a fit could be obtained, and judge how [un]reasonable this was. Here is an example for two galaxies:
The case of UGC 1230 is memorable to me because it had a good rotation curve, despite being more face-on than widely considered acceptable for analysis. And for good reason: the difference between 22 and 17 degrees makes a huge difference to the fit, changing it from way off to picture perfect.
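To see how big that seemingly small change is:

$$\frac{\sin 22^\circ}{\sin 17^\circ} \approx \frac{0.375}{0.292} \approx 1.28,$$

so the deprojected velocities shift by nearly 30%, and the implied accelerations (which go as V²) by more than 60%.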
What I took away from this exercise is how hard it is to tell the difference between inclination values for relatively face-on galaxies. UGC 1230 is obvious: the ovals for the two inclinations are practically on top of each other. The difference in the case of UGC 5005 is more pronounced, but look at the galaxy. The shape of the outer isophote where we’re trying to measure this is raggedy as all get out; this is par for the course for LSB galaxies. Worse, look further in – this galaxy has a bar! The central bar is almost orthogonal to the kinematic major axis. If we hadn’t observed as deeply as we had, we’d think the minor axis was the major axis, and the inclination was something even higher.
I remember Erwin quipping that he should write a paper on how to use MOND to determine inclinations. This was a joke between us, but only half so: using the procedure in this way would be analogous to using Tully-Fisher to measure distances. We would simply be applying an empirically established procedure to constrain a property of a galaxy – luminosity from line-width in that case of Tully-Fisher; inclination from rotation curve shape here. That we don’t understand why this works has never stopped astronomers before.
Systematic errors in inclination happen all the time. Big surveys don’t have time to image deeply – they have too much sky area to cover – and if there is follow-up about the gas content, it inevitably comes in the form of a single dish HI measurement. This is fine; it is what we can do en masse. But an unresolved single dish measurement provides no information about the inclination, only a pre-inclination line-width (which itself is a crude proxy for the flat rotation speed). The inclination we have to take from the optical image, which would key on the easily detected, high surface brightness central region of the image. That’s the part that is most likely to show a bar-like distortion, so one can expect lots of systematic errors in the inclinations determined in this way. I provided a long yet still incomplete discussion of these issues in McGaugh (2012). This is both technical and intensely boring, so not even the pros read it.
This brings us to the case of AGC 114905, which is part of a sample of ultradiffuse galaxies discussed previously by some of the same authors. On that occasion, I kept to the code, and refrained from discussion. But for context, here are those data on a recent Baryonic Tully-Fisher plot. Spoiler alert: that post was about a different sample of galaxies that seemed to be off the relation but weren’t.
On the face of it, these ultradiffuse galaxies (UDGs) are all very serious outliers. This is weird – they’re not some scatter off to one side, they’re just way off on their own island, with no apparent connection to the rest of established reality. Calling them by a new name, UDG, makes it sound plausible that these are some entirely novel population of galaxies that behave in a new way. But they’re not. They are exactly the same kinds of galaxies I’ve been talking about. They’re all blue, gas rich, low surface brightness, fairly isolated galaxies – all words that I’ve frequently used to describe my thesis sample. These UDGs all have a few billion solar masses in baryonic mass, very similar to F577-V1 above. You could give F577-V1 a different name, slip it into the sample, and nobody would notice that it wasn’t like one of the others.
The one slight difference is implied by the name: UDGs are a little lower in surface brightness. Indeed, once filter transformations are taken into account, the definition of ultradiffuse is equal to what I arbitrarily called very low surface brightness in 1996. Most of my old LSB sample galaxies have central stellar surface brightnesses at or a bit above 10 solar masses per square parsec while the UDGs here are a bit under this threshold. For comparison, in typical high surface brightness galaxies this quantity is many hundreds, often around a thousand. Nothing magic happens at the threshold of 10 solar masses per square parsec, so this line of definition between LSB and UDG is an observational distinction without a physical difference. So what are the odds of a different result for the same kind of galaxies?
Indeed, what really matters is the baryonic surface density, not just the stellar surface brightness. A galaxy made purely of gas but no stars would have zero optical surface brightness. I don’t know of any examples of that extreme, but we came close to it with the gas rich sample of Trachternach et al. (2009) when we tried this exact same exercise a decade ago. Despite selecting that sample to maximize the chance of deviations from the Baryonic Tully-Fisher relation, we found none – at least none that were credible: there were deviant cases, but their data were terrible. There were no deviants among the better data. That sample is comparable to, or even more extreme than, the UDGs in terms of baryonic surface density, so the UDGs can’t be an exception on the grounds that they’re a genuinely new population, whatever name we call them by.
The key thing is the credibility of the data, so let’s consider the data for AGC 114905. The kinematics are pretty well ordered; the velocity field is well observed for this kind of beast. It ought to be; they invested over 40 hours of JVLA time into this one galaxy. That’s more than went into my entire LSB thesis sample. The authors are all capable, competent people. I don’t think they’ve done anything wrong, per se. But they do seem to have climbed aboard the bandwagon of dark matter-free UDGs, and have talked themselves into believing smaller error bars on the inclination than I am persuaded is warranted.
AGC 114905 in stars (left) and gas (right). The contours of the gas distribution are shown on top of the stars in white. Figure 1 from Mancera Piña et al. (2021).
This messy morphology is typical of very low surface brightness galaxies – hence their frequent classification as Irregular galaxies. Though messier, it shares some morphological traits with the LSB galaxies shown above. The central light distribution is elongated with a major axis that is not aligned with that of the gas. The gas is raggedy as all get out. The contours are somewhat boxy; this is a hint that something hinky is going on beyond circular motion in a tilted axisymmetric disk.
The authors do the right thing and worry about the inclination, checking to see what it would take to be consistent with either LCDM or MOND, which is about i = 11° instead of the 30° indicated by the shape of the outer isophote. They even build a model to check the plausibility of the smaller inclination:
Clearly the black line (i = 30°) is a better fit to the shape of the gas distribution than the blue dashed line (i = 11°). Consequently, they “find it unlikely that we are severely overestimating the inclination of our UDG, although this remains the largest source of uncertainty in our analysis.” I certainly agree with the latter phrase, but not the former. I think it is quite likely that they are overestimating the inclination. I wouldn’t even call it a severe overestimation; more like par for the course with this kind of object.
As I have emphasized above and elsewhere, there are many things that can go wrong in this sort of analysis. But if I were to try to put my finger on the most important thing, here it would be the inclination. The modeling exercise is good, but it assumes “razor-thin axisymmetric discs.” That’s a reasonable thing to do when building such a model, but we have to bear in mind that real disks are neither. The thickness of the disk probably doesn’t matter too much for a nearly face-on case like this, but the assumption of axisymmetry is extraordinarily dubious for an Irregular galaxy. That’s how they got the name.
It is hard to build models that are not axisymmetric. Once you drop this simplifying assumption, where do you even start? So I don’t fault them for stopping at this juncture, but I can also imagine doing as de Blok suggested, using MOND to set the inclination. Then one could build models with asymmetric features by trial and error until a match is obtained. Would we know that such a model would be a better representation of reality? No. Could we exclude such a model? Also no. So the bottom line is that I am not convinced that the uncertainty in the inclination is anywhere near as small as the adopted ±3°.
That’s very deep in the devilish details. If one is worried about a particular result, one can back off and ask if it makes sense in the context of what we already know. I’ve illustrated this process previously. First, check the empirical facts. Every other galaxy in the universe with credible data falls on the Baryonic Tully-Fisher relation, including very similar galaxies that go by a slightly different name. Hmm, strike one. Second, check what we expect from theory. I’m not a fan of theory-informed data interpretation, but we know that LCDM, unlike SCDM before it, at least gets the amplitude of the rotation speed in the right ballpark (V_flat ~ V_200). Except here. Strike two. As much as we might favor LCDM as the standard cosmology, it has now been extraordinarily well established that MOND has considerable success in not just explaining but predicting these kinds of data, with literally hundreds of examples. One hundred was the threshold Vera Rubin obtained to refute excuses made to explain away the first few flat rotation curves. We’ve crossed that threshold: MOND phenomenology is as well established now as flat rotation curves were at the inception of the dark matter paradigm. So while I’m open to alternative explanations for the MOND phenomenology, seeing that a few trees stand out from the forest is never going to be as important as the forest itself.
This brings us to a physical effect that people should be aware of. We touched on the bar stability above, and how a galaxy might look oval even when seen face on. This happens fairly naturally in MOND simulations of isolated disk galaxies. They form bars and spirals and their outer parts wobble about. See, for example, this simulation by Nils Wittenburg. This particular example is a relatively massive galaxy; the lopsidedness reminds me of M101 (Watkins et al. 2017). Lower mass galaxies deeper in the MOND regime are likely even more wobbly. This happens because disks are only marginally stable in MOND, not the over-stabilized entities that have to be hammered to show a response as in our early simulation of UGC 128 above. The point is that there is good reason to expect even isolated face-on dwarf Irregulars to look, well, irregular, leading to exactly the issues with inclination determinations discussed above. Rather than being a contradiction to MOND, AGC 114905 may illustrate one of its inevitable consequences.
I don’t like to bicker at this level of detail, but it makes a profound difference to the interpretation. I do think we should be skeptical of results that contradict well established observational reality – especially when over-hyped. God knows I was skeptical of our own results, which initially surprised the bejeepers out of me, but have been repeatedly corroborated by subsequent observations.
I guess I’m old now, so I wonder how I come across to younger practitioners; perhaps as some scary undead monster. But mates, these claims about UDGs deviating from established scaling relations are off the edge of the map.
An important issue in science is what’s right and what’s wrong. Another is who gets credit for what. The former issue is scientific; the latter is social. It matters little to the progress of science who discovers what. It matters a lot to the people who do it. We like to get credit where due.
discovered by me but contested by masses of unbelievers, [who asserted] that there exist no bona fide clusters of stable or stationary clusters of galaxies.
Were Zwicky alive today, a case could be made that he deserves the Nobel Prize in physics for the discovery of dark matter. However, Zwicky was not the first or only person to infer the existence of dark matter early on. Jan Oort was concerned that dark mass was necessary to explain the accelerations of stars perpendicular to the plane of the Milky Way as early as 1932. Where Zwicky’s discrepancy was huge, over a factor of 100, Oort’s was a more modest factor of 2. Oort was taken seriously at the time while Zwicky was largely ignored.
The reasons for this difference in response are many and varied. I wasn’t around at the time, so I will refrain from speculating too much. But in many ways, this divide reflects the difference in cultures between physics and astronomy. Oort was thoroughly empirical and immaculately detailed in his observational work and conservative in its interpretation, deeply impressing his fellow astronomers. Zwicky was an outsider and self-described lone wolf, and may have come across as a wild-eyed crackpot. That he had good reason for that didn’t alter the perception. That he is now posthumously recognized as having been basically correct does nothing to aid him personally, only our memory of him.
Nowadays, nearly every physicist I hear talk about the subject credits Zwicky with the discovery of dark matter. When I mention Oort, most have never heard of him, and they rarely seem prepared to share the credit. This is how history gets rewritten, by oversimplification and omission: Oort goes unmentioned in the education of physicists, the omission gets promulgated by those who never heard of him, then it becomes fact since an omission so glaring cannot possibly be correct. I’m doing that myself here by omitting mention of Öpik and perhaps others I haven’t heard of myself.
Zwicky got that treatment in real time, leading to some of the best published rants in all of science. I’ll let him speak for himself, quoting from the CSCGPEG. One of his great resentments was his exclusion from premiere observational facilities:
I myself was allowed the use of the 100-inch telescope only in 1948, after I was fifty years of age, and of the 200-inch telescope on Palomar Mountain only after I was 54 years old, although I had built and successfully operated the 18-inch Schmidt telescope in 1936, and had been professor of physics and of astrophysics at the California Institute of Technology since 1927 and 1942 respectively.
Zwicky, in the introduction to the CSCGPEG
For reference, I have observed many hundreds of nights at various observatories. Only a handful of those nights have been in my fifties. Observing is mostly a young person’s occupation.
I do not know why Zwicky was excluded. Perhaps there is a book on the subject; there should be. Maybe it was personal, as he clearly suspects. Applying for telescope time can be highly competitive, even within one’s own institution, and it can hardly have helped that his requests crossed departmental lines. Perhaps his proposals lacked grounding in the expectations of the field, or some intangible quality that made them less persuasive than those of his colleagues. Maybe he simply didn’t share their scientific language, a perpetual problem I see at the interface between physics and astronomy. Perhaps all these things contributed.
More amusing if inappropriate are his ad hominem attacks on individuals:
a shining example of a most deluded individual we need only quote the high pope of American Astronomy, one Henry Norris Russell…
or his more generalized condemnation of the entire field:
Today’s sycophants and plain thieves seem to be free, in American Astronomy in particular, to appropriate discoveries and inventions made by lone wolves and non-conformists, for whom there is never any appeal to the hierarchies and for whom even the public Press is closed, because of censoring committees within the scientific institutions.
or indeed, of human nature:
we note that again and again scientists and technical specialists arrive at stagnation points where they think they know it all.
Zwicky, CSCGPEG, emphasis his.
He’s not wrong.
I have heard Zwicky described as a “spherical bastard”: a bastard viewed from any angle. You can see why from these quotes. But you can also see why he might have felt this way. The CSCGPEG was published about 35 years after his pioneering work on clusters of galaxies. That’s a career-lifetime lacking recognition for what would now be considered Nobel prize-worthy work. Dark matter would come to prominence in the following decade, by which time he was dead.
I have also heard that “spherical bastard” was a phrase invented by Zwicky to apply to others. I don’t know who was the bigger bastard, and I am reluctant to attribute his lack of popularity in his own day to his personality. The testimony I am aware of is mostly from those who disagreed with him, and may themselves have been spherical bastards. Indeed, I strongly suspect those who sing his praises most loudly now would have been among his greatest detractors had they been contemporaries.
I know from my own experience that people are lousy at distinguishing between a scientific hypothesis that they don’t like and the person who advocates it. Often they are willing and eager to attribute a difference in scientific judgement to a failure of character: “He disagrees with me, therefore he is a bastard.” Trash talk by mediocre wannabes is common, and slander works wonders to run down a reputation. I imagine Zwicky was a victim of this human failing.
Of course, the correctness of a scientific hypothesis has nothing to do with how likeable its proponents might be. Indeed, a true scientist has an obligation to speak the facts, even if they are unpopular, as Zwicky reminded us with a quote of his own in the preface to the CSCGPEG:
The more things change, the more they stay the same.
I read somewhere – I don’t think it was Kuhn himself, but someone analyzing Kuhn – that there came a point in the history of science where scientists diverged: they no longer agreed on what counts as a theory, what counts as a test of a theory, what even counts as evidence. We have reached that point with the mass discrepancy problem.
For many years, I worried that if the field ever caught up with me, it would zoom past. That hasn’t happened. Instead, it has diverged towards a place that I barely recognize as science. It looks more like the Matrix – a simulation – that is increasingly sophisticated yet self-contained, making only parsimonious contact with observational reality and unable to make predictions that apply to real objects. Scaling relations and statistical properties, sure. Actual galaxies with NGC numbers, not so much. That, to me, is not science.
I have found it increasingly difficult to communicate across the gap built on presumptions buried so deep that they cannot be questioned. One obvious one is the existence of dark matter. This has been fueled by cosmologists who take it for granted and particle physicists eager to discover it who repeat “we know dark matter exists*; we just need to find it” like a religious mantra. This is now ingrained so deeply that it has become difficult to convey even the simple concept that what we call “dark matter” is really just evidence of a discrepancy: we do not know whether it is literally some kind of invisible mass, or a breakdown of the equations that lead us to infer invisible mass.
I try to look at all sides of a problem. I can say nice things about dark matter (and cosmology); I can point out problems with it. I can say nice things about MOND; I can point out problems with it. The more common approach is to presume that any failing of MOND is an automatic win for dark matter. This is a simple-minded logical fallacy: just because MOND gets something wrong doesn’t mean dark matter gets it right. Indeed, my experience has been that cases that don’t make any sense in MOND don’t make any sense in terms of dark matter either. Nevertheless, this attitude persists.
I don’t know what is right, but I’m pretty sure this attitude is wrong. Indeed, it empowers a form of magical thinking: dark matter has to be correct, so any data that appear to contradict it are either wrong, or can be explained with feedback. Indeed, the usual trajectory has been denial first (that can’t be true!) and explanation later (we knew it all along!). This attitude is an existential threat to the scientific method, and I am despondent in part because I worry we are slipping into a post-scientific reality, where even scientists are little more than priests of a cold, dark religion.
*If we’re sure dark matter exists, it is not obvious that we need to be doing expensive experiments to find it.
It often happens that data are ambiguous and open to multiple interpretations. The evidence for dark matter is an obvious example. I frequently hear permutations on the statement
We know dark matter exists; we just need to find it.
This is said in all earnestness by serious scientists who clearly believe what they say. They mean it. Unfortunately, meaning something in all seriousness, indeed, believing it with the intensity of religious fervor, does not guarantee that it is so.
The way the statement above is phrased is a dangerous half-truth. What the data show beyond any dispute is that there is a discrepancy between what we observe in extragalactic systems (including cosmology) and the predictions of Newton & Einstein as applied to the visible mass. If we assume that the equations Newton & Einstein taught us are correct, then we inevitably infer the need for invisible mass. That seems like a very reasonable assumption, but it is just that: an assumption. Moreover, it is an assumption that is only tested on the relevant scales by the data that show a discrepancy. One could instead infer that theory fails this test – it does not work to predict observed motions when applied to the observed mass. From this perspective, it could just as legitimately be said that
A more general theory of dynamics must exist; we just need to figure out what it is.
That puts an entirely different complexion on exactly the same problem. The data are the same; they are not to blame. The difference is how we interpret them.
Neither of these statements is correct: they are both half-truths; two sides of the same coin. As such, one risks being wildly misled. If one only hears one, the other gets discounted. That’s pretty much where the field is now, and it has been stuck there for a long time.
That’s certainly where I got my start. I was a firm believer in the standard dark matter interpretation. The evidence was obvious and overwhelming. Not only did there need to be invisible mass, it had to be some new kind of particle, like a WIMP. Almost certainly a WIMP. Any other interpretation (like MACHOs) was obviously stupid, as it violated some strong constraint, like Big Bang Nucleosynthesis (BBN). It had to be non-baryonic cold dark matter. HAD. TO. BE. I was sure of this. We were all sure of this.
What gets us in trouble is not what we don’t know. It’s what we know for sure that just ain’t so.
I realized in the 1990s that the above reasoning was not airtight. Indeed, it has a gaping hole: we were not even considering modifications of dynamical laws (gravity and inertia). That this was a possibility, even a remote one, came as a profound and deep shock to me. It took me ages of struggle to admit it might be possible, during which I worked hard to save the standard picture. I could not. So it pains me to watch the entire community repeat the same struggle, repeat the same failures, and pretend like it is a success. That last step follows from the zeal of religious conviction: the outcome is predetermined. The answer still HAS TO BE dark matter.
So I asked myself – what if we’re wrong? How could we tell? Once one has accepted that the universe is filled with invisible mass that can’t be detected by any craft available to us, how can we disabuse ourselves of this notion should it happen to be wrong?
One approach that occurred to me was a test in the power spectrum of the cosmic microwave background. Before any of the peaks had been measured, the only clear difference one expected was a bigger second peak with dark matter, and a smaller one without it for the same absolute density of baryons as set by BBN. I’ve written about the lead up to this prediction before, and won’t repeat it here. Rather, I’ll discuss some of the immediate fall out – some of which I’ve only recently pieced together myself.
The first experiment to provide a test of the prediction for the second peak was Boomerang. The second was Maxima-1. I of course checked the new data when they became available. Maxima-1 showed what I expected. So much so that it barely warranted comment. One is only supposed to write a scientific paper when one has something genuinely new to say. This didn’t rise to that level. It was more like checking a tick box. Besides, lots more data were coming; I couldn’t write a new paper every time someone tacked on an extra data point.
There was one difference. The Maxima-1 data had a somewhat higher normalization. The shape of the power spectrum was consistent with that of Boomerang, but the overall amplitude was a bit higher. The latter mattered not at all to my prediction, which was for the relative amplitude of the first to second peaks.
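The reason an overall calibration offset is irrelevant to a peak ratio is simple arithmetic. Here is a toy sketch; the numbers are invented for illustration and are not the actual Boomerang or Maxima-1 values:

```python
# Toy illustration that a multiplicative calibration error cancels in the
# ratio of peak amplitudes. Numbers are invented, not real measurements.

first_peak = 5400.0   # hypothetical first-peak amplitude (in power)
second_peak = 2250.0  # hypothetical second-peak amplitude

calibration = 1.14    # a 14% offset in temperature...
power_factor = calibration ** 2  # ...is ~30% in power, since power ~ T^2

# Rescaling the whole spectrum by the calibration factor...
first_rescaled = power_factor * first_peak
second_rescaled = power_factor * second_peak

# ...leaves the first-to-second peak ratio exactly unchanged:
print(first_peak / second_peak)          # 2.4
print(first_rescaled / second_rescaled)  # 2.4, identical
```

Whatever the true calibration, it multiplies both peaks alike, so a prediction phrased as a ratio is immune to it.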
Systematic errors, especially in the amplitude, were likely in early experiments. That’s like rule one of observing the sky. After examining both data sets and the model expectations, I decided the Maxima-1 amplitude was more likely to be correct, so I asked what offset was necessary to reconcile the two. About 14% in temperature. This was, to me, no big deal – it was not relevant to my prediction, and it is exactly the sort of thing one expects to happen in the early days of a new kind of observation. It did seem worth remarking on, if not writing a full blown paper about, so I put it in a conference presentation (McGaugh 2000), which was published in a journal (IJMPA, 16, 1031) as part of the conference proceedings. This correctly anticipated the subsequent recalibration of Boomerang.
The figure from McGaugh (2000) is below. Basically, I said “gee, looks like the Boomerang calibration needs to be adjusted upwards a bit.” This has been done in the figure. The amplitude of the second peak remained consistent with the prediction for a universe devoid of dark matter. In fact, it got better (see Table 4 of McGaugh 2004).
This much was trivial. There was nothing new to see, at least as far as the test I had proposed was concerned. New data were pouring in, but there wasn’t really anything worth commenting on until WMAP data appeared several years later, which persisted in corroborating the peak ratio prediction. By this time, the cosmological community had decided that despite persistent corroborations, my prediction was wrong.
That’s right. I got it right, but then right turned into wrong according to the scuttlebutt of cosmic gossip. This was a falsehood, but it took root, and seems to have become one of the things that cosmologists know for sure that just ain’t so.
How did this come to pass? I don’t know. People never asked me. My first inkling came in 2003, when it came up in a chance conversation with Marv Leventhal (then chair of Maryland Astronomy), who opined “too bad the data changed on you.” This shocked me. Nothing relevant in the data had changed, yet here was someone asserting that it had, as if it were common knowledge. Which I suppose it was by then, just not to me.
Over the years, I’ve had the occasional weird conversation on the subject. In retrospect, I think the weirdness stemmed from a divergence of assumed knowledge. They knew I was right then wrong. I knew the second peak prediction had come true and remained true in all subsequent data, but the third peak was a different matter. So there were many opportunities for confusion. In retrospect, I think many of these people were laboring under the mistaken impression that I had been wrong about the second peak.
I now suspect this started with the discrepancy between the calibration of Boomerang and Maxima-1. People seemed to be aware that my prediction was consistent with the Boomerang data. Then they seem to have confused the prediction with those data. So when the data “changed” – i.e., when Maxima-1 came in with a somewhat different amplitude – it seemed to follow that the prediction had failed.
This is wrong on many levels. The prediction is independent of the data that test it. It is incredibly sloppy thinking to confuse the two. More importantly, the prediction, as phrased, was not sensitive to this aspect of the data. If one had bothered to measure the ratio in the Maxima-1 data, one would have found a number consistent with the no-CDM prediction. This should be obvious from casual inspection of the figure above. Apparently no one bothered to check. They didn’t even bother to understand the prediction.
Understanding a prediction before dismissing it is not a hard ask. Unless, of course, you already know the answer. Then laziness is not only justified, but the preferred course of action. This sloppy thinking compounds a number of well known cognitive biases (anchoring bias, belief bias, confirmation bias, to name a few).
I mistakenly assumed that other people were seeing the same thing in the data that I saw. It was pretty obvious, after all. (Again, see the figure above.) It did not occur to me back then that other scientists would fail to see the obvious. I fully expected them to complain and try and wriggle out of it, but I could not imagine such complete reality denial.
The reality denial was twofold: clearly, people were looking for any excuse to ignore anything associated with MOND, however indirectly. But they also had no clear prior for LCDM, which I did establish as a point of comparison. A theory is only as good as its prior, and all LCDM models made before these CMB data showed the same thing: a bigger second peak than was observed. This can be fudged: there are ample free parameters, so it can be made to fit; one just had to violate BBN (as it was then known) by three or four sigma.
In retrospect, I think the very first time I had this alternate-reality conversation was at a conference at the University of Chicago in 2001. Andrey Kravtsov had just joined the faculty there, and organized a conference to get things going. He had done some early work on the cusp-core problem, which was still very much a debated thing at the time. So he asked me to come address that topic. I remember being on the plane – a short ride from Cleveland – when I looked at the program. Nearly did a spit take when I saw that I was to give the first talk. There wasn’t a lot of time to organize my transparencies (we still used overhead projectors in those days) but I’d given the talk many times before, so it was enough.
I only talked about the rotation curves of low surface brightness galaxies in the context of the cusp-core problem. That was the mandate. I didn’t talk about MOND or the CMB. There’s only so much you can address in a half hour talk. [This is a recurring problem. No matter what I say, there always seems to be someone who asks “why didn’t you address X?” where X is usually that person’s pet topic. Usually I could do so, but not in the time allotted.]
About halfway through this talk on the cusp-core problem, I guess it became clear that I wasn’t going to talk about things that I hadn’t been asked to talk about, and I was interrupted by Mike Turner, who did want to talk about the CMB. Or rather, extract a confession from me that I had been wrong about it. I forget how he phrased it exactly, but it was the academic equivalent of “Have you stopped beating your wife lately?” Say yes, and you admit to having done so in the past. Say no, and you’re still doing it. What I do clearly remember was him prefacing it with “As a test of your intellectual honesty” as he interrupted to ask a dishonest and intentionally misleading question that was completely off-topic.
Of course, the pretext for his attack question was the Maxima-1 result. He phrased it in a way that I had to agree that those disproved my prediction, or be branded a liar. Now, at the time, there were rumors swirling that the experiment – some of the people who worked on it were there – had detected the third peak, so I thought that was what he was alluding to. Those data had not yet been published and I certainly had not seen them, so I could hardly answer that question. Instead, I answered the “intellectual honesty” affront by pointing to a case where I had said I was wrong. At one point, I thought low surface brightness galaxies might explain the faint blue galaxy problem. On closer examination, it became clear that they could not provide a complete explanation, so I said so. Intellectual honesty is really important to me, and should be to all scientists. I have no problem admitting when I’m wrong. But I do have a problem with demands to admit that I’m wrong when I’m not.
To me, it was obvious that the Maxima-1 data were consistent with the second peak. The plot above was already published by then. So it never occurred to me that he thought the Maxima-1 data were in conflict with what I had predicted – it was already known that it was not. Only to him, it was already known that it was. Or so I gather – I have no way to know what others were thinking. But it appears that this was the juncture at which the field suffered a psychotic break. We are not operating on the same set of basic facts. There has been a divergence in personal realities ever since.
Arthur Kosowsky gave the summary talk at the end of the conference. He told me that he wanted to address the elephant in the room: MOND. I did not think the assembled crowd of luminary cosmologists were mature enough for that, so advised against going there. He did, and was incredibly careful in what he said: empirical, factual, posing questions rather than making assertions. Why does MOND work as well as it does?
The room dissolved into chaotic shouting. Every participant was vying to say something wrong more loudly than the person next to him. (Yes, everyone shouting was male.) Joel Primack managed to say something loudly enough for it to stick with me, asserting that gravitational lensing contradicted MOND in a way that I had already shown it did not. It was just one of dozens of superficial falsehoods that people take for granted to be true if they align with one’s confirmation bias.
The uproar settled down, the conference was over, and we started to disperse. I wanted to offer Arthur my condolences, having been in that position many times. Anatoly Klypin was still giving it to him, keeping up a steady stream of invective as everyone else moved on. I couldn’t get a word in edgewise, and had a plane home to catch. So when I briefly caught Arthur’s eye, I just said “told you” and moved on. Anatoly paused briefly, apparently fathoming that his behavior, like that of the assembled crowd, was entirely predictable. Then the moment of awkward self-awareness passed, and he resumed haranguing Arthur.
Before we can agree on the interpretation of a set of facts, we have to agree on what those facts are. Even if we agree on the facts, we can differ about their interpretation. It is OK to disagree, and anyone who practices astrophysics is going to be wrong from time to time. It is the inevitable risk we take in trying to understand a universe that is vast beyond human comprehension. Heck, some people have made successful careers out of being wrong. This is OK, so long as we recognize and correct our mistakes. That’s a painful process, and there is an urge in human nature to deny such things, to pretend they never happened, or to assert that what was wrong was right all along.
This happens a lot, and it leads to a lot of weirdness. Beyond the many people in the field whom I already know personally, I tend to meet two kinds of scientists. There are those (usually other astronomers and astrophysicists) who might be familiar with my work on low surface brightness galaxies or galaxy evolution or stellar populations or the gas content of galaxies or the oxygen abundances of extragalactic HII regions or the Tully-Fisher relation or the cusp-core problem or faint blue galaxies or big bang nucleosynthesis or high redshift structure formation or joint constraints on cosmological parameters. These people behave like normal human beings. Then there are those (usually particle physicists) who have only heard of me in the context of MOND. These people often do not behave like normal human beings. They conflate me as a person with a theory that is Milgrom’s. They seem to believe that both are evil and must be destroyed. My presence, even the mere mention of my name, easily destabilizes their surprisingly fragile grasp on sanity.
One of the things that scientists-gone-crazy do is project their insecurities about the dark matter paradigm onto me. People who barely know me frequently attribute to me motivations that I neither have nor recognize. They presume that I have some anti-cosmology, anti-DM, pro-MOND agenda, and are remarkably comfortable asserting to me what it is that I believe. What they never explain, or apparently bother to consider, is why I would be so obtuse. What is my motivation? I certainly don’t enjoy having the same argument over and over again with their ilk, which is the only thing it seems to get me.
The only agenda I have is a pro-science agenda. I want to know how the universe works.
This agenda is not theory-specific. In addition to lots of other astrophysics, I have worked on both dark matter and MOND. I will continue to work on both until we have a better understanding of how the universe works. Right now we’re very far away from attaining that goal. Anyone who tells you otherwise is fooling themselves – usually by dint of ignoring inconvenient aspects of the evidence. Everyone is susceptible to cognitive dissonance. Scientists are no exception – I struggle with it all the time. What disturbs me is the number of scientists who apparently do not. The field is being overrun with posers who lack the self-awareness to question their own assumptions and biases.
So, I feel like I’m repeating myself here, but let me state my bias. Oh wait. I already did. That’s why it felt like repetition. It is.
The following bit of this post is adapted from an old web page I wrote well over a decade ago. I’ve lost track of exactly when – the file has been through many changes in computer systems, and unix only records the last edit date. For the linked page, that’s 2016, when I added a few comments. The original is much older, and was written while I was at the University of Maryland. Judging from the html style, it was probably early to mid-’00s. Of course, the sentiment is much older, as it shouldn’t need to be said at all.
I will make a few updates as seem appropriate, so check the link if you want to see the changes. I will add new material at the end.
Long-standing remarks on intellectual honesty
The debate about MOND often degenerates into something that falls well short of the sober, objective discussion that is supposed to characterize scientific debates. One can tell when voices are raised and baseless ad hominem accusations made. I have, with disturbing frequency, found myself accused of partisanship and intellectual dishonesty, usually by people who are as fair and balanced as Fox News.
Let me state with absolute clarity that intellectual honesty is a bedrock principle of mine. My attitude is summed up well by the quote
When a man lies, he murders some part of the world.
This is a great quote for science, as the intent is clear. We don’t get to pick and choose our facts. Outright lying about them is antithetical to science.
I would extend this to ignoring facts. One should not only be honest, but also as complete as possible. It does not suffice to be truthful while leaving unpleasant or unpopular facts unsaid. This is lying by omission.
I “grew up” believing in dark matter. Specifically, Cold Dark Matter, presumably a WIMP. I didn’t think MOND was wrong so much as I didn’t think about it at all. Barely heard of it; not worth the bother. So I was shocked – and angered – when its predictions came true in my data for low surface brightness galaxies. So I understand when my colleagues have the same reaction.
Nevertheless, Milgrom got the prediction right. I had a prediction, it was wrong. There were other conventional predictions, they were also wrong. Indeed, dark matter based theories generically have a very hard time explaining these data. In a Bayesian sense, given the prior that we live in a ΛCDM universe, the probability that MONDian phenomenology would be observed is practically zero. Yet it is. (This is very well established, and has been for some time.)
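To put the Bayesian point in symbols (a sketch of the logic, not a quantitative analysis), let $D$ stand for the observed MONDian phenomenology. Then

$$
\frac{P(\Lambda\mathrm{CDM} \mid D)}{P(\mathrm{MOND} \mid D)}
= \frac{P(D \mid \Lambda\mathrm{CDM})}{P(D \mid \mathrm{MOND})}
\times \frac{P(\Lambda\mathrm{CDM})}{P(\mathrm{MOND})}.
$$

However heavily the prior odds favor ΛCDM, a likelihood $P(D \mid \Lambda\mathrm{CDM})$ that is practically zero drags the posterior odds down with it.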
So – confronted with an unpopular theory that nevertheless had some important predictions come true, I reported that fact. I could have ignored it, pretended it didn’t happen, covered my eyes and shouted LA LA LA NOT LISTENING. With the benefit of hindsight, that certainly would have been the savvy career move. But it would also be ignoring a fact, and tantamount to a lie.
In short, though it was painful and protracted, I changed my mind. Isn’t that what the scientific method says we’re supposed to do when confronted with experimental evidence?
That was my experience. When confronted with evidence that contradicted my preexisting world view, I was deeply troubled. I tried to reject it. I did an enormous amount of fact-checking. The people who presume I must be wrong have not had this experience, and haven’t bothered to do any fact-checking. Why bother when you already are sure of the answer?
I understand being skeptical about MOND. I understand being more comfortable with dark matter. That’s where I started from myself, so as I said above, I can empathize with people who come to the problem this way. This is a perfectly reasonable place to start.
To give an example of disinformation, I still hear said things like “MOND fits rotation curves but nothing else.” This is not true. The first thing I did was check into exactly that. Years of fact-checking went into McGaugh & de Blok (1998), and I’ve done plenty more since. It came as a great surprise to me that MOND explained the vast majority of the data as well or better than dark matter. Not everything, to be sure, but lots more than “just” rotation curves. Yet this old falsehood still gets repeated as if it were not a misconception that was put to rest in the previous century. We’re stuck in the dark ages by choice.
It is not a defensible choice. There is no excuse to remain ignorant of MOND at this juncture in the progress of astrophysics. It is incredibly biased to point to its failings without contending with its many predictive successes. It is tragi-comically absurd to assume that dark matter provides a better explanation when it cannot make the same predictions in advance. MOND may not be correct in every particular, and makes no pretense to be a complete theory of everything. But it is demonstrably less wrong than dark matter when it comes to predicting the dynamics of systems in the low acceleration regime. Pretending like this means nothing is tantamount to ignoring essential facts.
Even a lie of omission murders a part of the world.
This title is an example of what has come to be called Betteridge’s law. This is a relatively recent name for an old phenomenon: if a title is posed as a question, the answer is no. This is especially true in science, whether the authors are conscious of it or not.
Pengfei Li completed his Ph.D. recently, fitting all manner of dark matter halos as well as the radial acceleration relation (RAR) to galaxies in the SPARC database. For the RAR, he found that galaxy data were consistent with a single, universal acceleration scale, g†. There is of course scatter in the data, but this appears to us to be consistent with what we expect from variation in the mass-to-light ratios of stars and the various uncertainties in the data.
This conclusion has been controversial despite being painfully obvious. I have my own law for data interpretation in astronomy:
Obvious results provoke opposition. The more obvious the result, the stronger the opposition.
The constancy of the acceleration scale is such a case. Where we do not believe we can distinguish between galaxies, others think they can – using our own data! Here it is worth contemplating what all is involved in building a database like SPARC – we were the ones who did the work, after all. In the case of the photometry, we observed the galaxies, we reduced the data, we cleaned the images of foreground contaminants (stars), we fit isophotes, we built mass models – that’s a very short version of what we did in order to be able to estimate the acceleration predicted by Newtonian gravity for the observed distribution of stars. That’s one axis of the RAR. The other is the observed acceleration, which comes from rotation curves, which require even more work. I will spare you the work flow; we did some galaxies ourselves, and took others from the literature in full appreciation of what we could and could not believe – an appreciation that comes from doing the same kind of work ourselves. In contrast, the people claiming to find the opposite of what we find obtained the data by downloading it from our website. The only thing they do is the very last step in the analysis, making fits with Bayesian statistics just as we do, but in manifest ignorance of the process by which the data came to be. This leads to an underappreciation of the uncertainty in the uncertainties.
This is another rule of thumb in science: outside groups are unlikely to discover important things that were overlooked by the group that did the original work. An example from about seven years ago was the putative 126 GeV line in Fermi satellite data. This was thought by some at the time to be evidence for dark matter annihilating into gamma rays with energy corresponding to the rest mass of the dark matter particles and their anti-particles. This would be a remarkable, Nobel-winning discovery, if true. Strange then that the claim was not made by the Fermi team themselves. Did outsiders beat them to the punch with their own data? It can happen: sometimes large collaborations can be slow to move on important results, wanting to vet everything carefully or warring internally over its meaning while outside investigators move more swiftly. But it can also be that the vetting shows that the exciting result is not credible.
I recall the 126 GeV line being a big deal. There was an entire session devoted to it at a conference I was scheduled to attend. Our time is valuable: I can’t go to every interesting conference, and don’t want to spend time on conferences that aren’t interesting. I was skeptical, simply because of the rule of thumb. I wrote the organizers, and asked if they really thought that this would still be a thing by the time the conference happened in a few months’ time. Some of them certainly thought so, so it went ahead. As it happened, it wasn’t. Not a single speaker who was scheduled to talk about the 126 GeV line actually did so. In a few short months, it had gone from an exciting result sure to win a Nobel prize to nada.
This happens all the time. Science isn’t as simple as a dry table of numbers and error bars. This is especially true in astronomy, where we are observing objects in the sky. It is never possible to do an ideal experiment in which one controls for all possible systematics: the universe is not a closed box in which we can control the conditions. Heck, we don’t even know what all the unknowns are. It is a big friggin’ universe.
The practical consequence of this is that the uncertainty in any astronomical measurement is almost always larger than its formal error bar. There are effects we can quantify and include appropriately in the error assessment. There are things we can not. We know they’re there, but that doesn’t mean we can put a meaningful number on them.
Indeed, the sociology of this has evolved over the course of my career. Back in the day, everybody understood these things, and took the stated errors with a grain of salt. If it was important to estimate the systematic uncertainty, it was common to estimate a wide band, in effect saying “I’m pretty sure it is in this range.” Nowadays, it has become common to split out terms for random and systematic error. This is helpful to the non-specialist, but it can also be misleading because, so stated, the confidence interval on the systematic looks like a 1 sigma error even though it is not likely to have a Gaussian distribution. Being 3 sigma off of the central value might be a lot more likely than this implies — or a lot less.
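To make that last point concrete, here is a toy sketch (an invented example, not a model of any particular measurement): three error distributions with the same nominal “1 sigma” imply wildly different chances of a 3 sigma excursion.

```python
import numpy as np
from scipy import stats

# Three error distributions, all with standard deviation 1, all of which
# might be reported identically as "+/- 1 (systematic)":
p_gauss = 2 * stats.norm.sf(3)  # Gaussian tails
p_unif = 0.0                    # uniform on [-sqrt(3), sqrt(3)]:
                                # a 3-unit excursion is strictly impossible
b = 1 / np.sqrt(2)              # Laplace scale giving sd = 1
p_laplace = np.exp(-3 / b)      # heavy (exponential) tails

print(f"P(|offset| > 3): Gaussian {p_gauss:.4f}, "
      f"uniform {p_unif:.4f}, Laplace {p_laplace:.4f}")
# Gaussian 0.0027, uniform 0.0000, Laplace 0.0144: a factor of five more
# likely than the Gaussian reading, or impossible, depending on a shape
# the quoted error bar does not specify.
```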
People have become more careful in making error estimates, which ironically has made matters worse. People seem to think that they can actually believe the error bars. Sometimes you can, but sometimes not. Many people don’t know how much salt to take it with, or realize that they should take it with a grain of salt at all. Worse, more and more folks come over from particle physics where extraordinary accuracy is the norm. They are completely unprepared to cope with astronomical data, or even fully process that the error bars may not be what they think they are. There is no appreciation for the uncertainties in the uncertainties, which is absolutely fundamental in astrophysics.
Consequently, one gets overly credulous analyses. In the case of the RAR, a number of papers have claimed that the acceleration scale isn’t constant. Not even remotely! Why do they make this claim?
Below is a histogram of raw acceleration scales from SPARC galaxies. In effect, they are claiming that they can tell the difference between galaxies in the tail on one side of the histogram from those on the opposite side. We don’t think we can, which is the more conservative claim. The width of the histogram is just the scatter that one expects from astronomical data, so the data are consistent with zero intrinsic scatter. That’s not to say that’s necessarily what Nature is doing: we can never measure zero scatter, so it is always conceivable that there is some intrinsic variation in the characteristic acceleration scale. All we can say is that if it is there, it is so small that we cannot yet resolve it.
Posed as a histogram like this, it is easy to see that there is a characteristic value – the peak – with some scatter around it. The entire issue is whether that scatter is due to real variation from galaxy to galaxy, or if it is just noise. One way to check this is to make quality cuts: in the plot above, the gray-striped histogram plots every available galaxy. The solid blue one makes some mild quality cuts, like knowing the distance to better than 20%. That matters, because the acceleration scale is a quantity that depends on distance – a notoriously difficult quantity to measure accurately in astronomy. When this quality cut is imposed, the width of the histogram shrinks. The better data make a tighter histogram – just as one would expect if the scatter is due to noise. If instead the scatter is a real, physical effect, it should, if anything, be more pronounced in the better data.
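The logic of the quality cut is easy to simulate. This is a toy sketch under invented assumptions (one universal scale, with scatter driven purely by distance errors that propagate proportionally), not the SPARC analysis itself:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 150
g_true = 1.2e-10  # one universal acceleration scale for every toy galaxy

# Give each toy galaxy a fractional distance uncertainty between 5% and 50%;
# the inferred acceleration scale depends on the assumed distance, so the
# distance error scatters it (taken as proportional for this sketch).
dist_err = rng.uniform(0.05, 0.50, n)
g_obs = g_true * np.exp(dist_err * rng.standard_normal(n))

good = dist_err < 0.20  # quality cut: distance known to better than 20%

print(f"all {n} galaxies: scatter = {np.std(np.log10(g_obs)):.3f} dex")
print(f"{good.sum()} after cut: scatter = {np.std(np.log10(g_obs[good])):.3f} dex")
# The histogram tightens under the cut, as expected if the scatter is noise;
# real galaxy-to-galaxy variation would survive the cut.
```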
This should not be difficult to understand. And yet – other representations of the data give a different impression, like this one:
This figure tells a very different story. The characteristic acceleration does not just scatter around a universal value. There is a clear correlation from one end of the plot to the other. Indeed, it is a perfectly smooth transition, because “Galaxy” is the number of each galaxy ordered by the value of its acceleration, from lowest to highest. The axes are not independent; they represent one and the same quantity. It is a plot of x against x. If properly projected into a histogram, it would look like the one above.
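How misleading this is can be demonstrated in a few lines (a minimal sketch with made-up numbers): sort noisy measurements of a quantity that is constant by construction, plot them against rank, and a smooth “trend” appears out of pure noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# 100 toy galaxies, each with the SAME true value, measured with 10% noise:
values = 1.0 + 0.1 * rng.standard_normal(100)

# Sort and index, as in the "Galaxy" plot described above:
ranked = np.sort(values)
for i in (0, 24, 49, 74, 99):
    print(f"galaxy {i + 1:3d}: {ranked[i]:.3f}")
# The values climb smoothly from ~0.8 to ~1.2: an apparently perfect
# correlation manufactured entirely by sorting noise around a constant.
```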
This is a terrible way to plot data. It makes it look like there is a correlation where there is none. Setting this aside, there is a potential issue with the most discrepant galaxies – those at either extreme. There are more points that are roughly 3 sigma from a constant value than there should be for a sample this size. If this is the right assessment of the uncertainty, then there is indeed some variation from galaxy to galaxy. Not much, but the galaxies at the left hand side of the plot are different from those on the right hand side.
But can we believe the formal uncertainties that inform this error analysis? If you’ve read this far, you will anticipate that the answer to this question obeys Betteridge’s law. No.
One of the reasons we can’t just assign confidence intervals and believe them like a common physicist is that there are other factors in the analysis – nuisance parameters in Bayesian verbiage – with which the acceleration scale covaries. That’s a fancy way of saying that if we turn one knob, it affects another. We assign priors to the nuisance parameters (e.g., the distance to each galaxy and its inclination) based on independent measurements. But there is still some room to slop around. The question is really what to believe at the end of the analysis. We don’t think we can distinguish the acceleration scale from one galaxy to another, but this other analysis says we should. So which is it?
It is easy at this point to devolve into accusations of picking priors to obtain a preconceived result. I don’t think anyone is doing that. But how to show it?
Pengfei had the brilliant idea to perform the same analysis as Marra et al., but allowing Newton’s constant to vary. This is Big G, a universal constant that’s been known to be a constant of nature for centuries. It surely does not vary. However, G appears in our equations, so we can test for variation therein. Pengfei did this, following the same procedure as Marra et al., and finds the same kind of graph – now for G instead of g†.
You see here the same kind of trend for Newton’s constant as one sees above for the acceleration scale. The same data have been analyzed in the same way. It has also been plotted in the same way, giving the impression of a correlation where there is none. The result is also the same: if we believe the formal uncertainties, the best-fit G is different for the galaxies at the left than for those to the right.
I’m pretty sure Newton’s constant does not vary this much. I’m entirely sure that the rotation curve data we analyze are not capable of making this determination. It would be absurd to claim so. The same absurdity extends to the acceleration scale g†. If we don’t believe the variation in G, there’s no reason to believe that in g†.
So what is going on here? It boils down to the errors on the rotation curves not representing the uncertainty in the circular velocity as we would like for them to. There are all sorts of reasons for this: observational, physical, and systematic. I’ve written about this at great length elsewhere, and I haven’t the patience to do so again here. It is turgidly technical to the extent that even the pros don’t read it. It boils down to the ancient, forgotten wisdom of astronomy: you have to take the errors with a grain of salt.
Here is the cumulative distribution (CDF) of reduced chi squared for the plot above.
Two things to notice here. First, the CDF looks the same regardless of whether we let Newton’s constant vary or not, or how we assign the Bayesian priors. There’s no value added in letting it vary – just as we found for the characteristic acceleration scale in the first place. Second, the reduced chi squared is rarely close to one. It should be! As a goodness of fit measure, one claims to have a good fit when the reduced chi squared is equal to one. The majority of these are not good fits! Rather than the gradual slope we see here, the CDF of reduced chi squared should be a nearly straight vertical line. That’s nothing like what we see.
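For reference, here is what the CDF should look like when the errors are right – a sketch under simplified assumptions (independent Gaussian errors, and an illustrative 20 degrees of freedom per fit):

```python
from scipy import stats

nu = 20  # degrees of freedom per fit (an illustrative choice)

# With correctly estimated errors, reduced chi^2 concentrates near 1:
for q in (0.05, 0.50, 0.95):
    print(f"{q:.0%} of good fits have chi2_nu < {stats.chi2.ppf(q, nu) / nu:.2f}")
# 5% < 0.54, 50% < 0.97, 95% < 1.57: a steep, nearly vertical CDF.

# If every error bar is underestimated by a factor of 2, each chi^2 is
# inflated by 2^2 = 4: the whole CDF slides out to ~4 even though the
# model being fit is perfectly correct.
```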
If one interprets this literally, there are many large chi squared values well in excess of unity. These are bad fits, and the model should be rejected. That’s exactly what Rodrigues et al. (2018) found, rejecting the constancy of the acceleration scale at 10 sigma. By their reasoning, we must also reject the constancy of Newton’s constant with the same high confidence. That’s just silly.
One strange thing: the people complaining that the acceleration scale is not constant are only testing that hypothesis. Their presumption is that if the data reject that, it falsifies MOND. The attitude is that this is an automatic win for dark matter. Is it? They don’t bother checking.
We do. We can do the same exercise with dark matter. We find the same result. The CDF looks the same; there are many galaxies with chi squared that is too large.
Having found the same result for dark matter halos that we found for the RAR, if we apply the same logic, then all proposed model halos are excluded. There are too many bad fits with overly large chi squared.
We have now ruled out all conceivable models. Dark matter is falsified. MOND is falsified. Nothing works. Look on these data, ye mighty, and despair.
But wait! Should we believe the error bars that lead to the end of all things? What would Betteridge say?
Here is the rotation curve of DDO 170 fit with the RAR. Look first at the left box, with the data (points) and the fit (red line). Then look at the fit parameters in the right box.
Looking at the left panel, this is a good fit. The line representing the model provides a reasonable depiction of the data.
Looking at the right panel, this is a terrible fit. The reduced chi squared is 4.9. That’s a lot larger than one! The model is rejected with high confidence.
Well, which is it? Lots of people fall into the trap of blindly trusting statistical tests like chi squared. Statistics can only help your brain. They can’t replace it. Trust your eye-brain. This is a good fit. Chi squared is overly large not because this is a bad model but because the error bars are too small. The absolute amount by which the data “miss” is just a few km/s. This is not much by the standards of galaxies, and could easily be explained by a small departure of the tracer from a purely circular orbit – a physical effect we expect at that level. Or it could simply be that the errors are underestimated. Either way, it isn’t a big deal. It would be incredibly naive to take chi squared at face value.
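The arithmetic behind that last statement is worth one line. Suppose (just for the sketch) that the quoted errors are all too small by a common factor $k$. With data points $d_i$, model values $m_i$, and quoted errors $\sigma_i$,

$$
\chi^2 = \sum_i \frac{(d_i - m_i)^2}{\sigma_i^2},
$$

so inflating every $\sigma_i$ by $k$ deflates $\chi^2$ by $k^2$. For DDO 170, $k = \sqrt{4.9} \approx 2.2$: velocity uncertainties underestimated by a bit more than a factor of two – a few km/s in absolute terms – are enough to turn this formally “rejected” fit into a perfectly acceptable one.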
If you want to see a dozen plots like this for all the various models fit to each of over a hundred galaxies, see Li et al. (2020). The bottom line is always the same. The same galaxies are poorly fit by any model — dark matter or MOND. Chi squared is too big not because all conceivable models are wrong, but because the formal errors are underestimated in many cases.
This comes as no surprise to anyone with experience working with astronomical data. We can work to improve the data and the error estimation – see, for example, Sellwood et al (2021). But we can’t blindly turn the crank on some statistical black box and expect all the secrets of the universe to tumble out onto a silver platter for our delectation. There’s a little more to it than that.
Today we talk to Dr. Stacy McGaugh, Chair of the Astronomy Department at Case Western Reserve University.
David: Hi Stacy. You had set out to disprove MOND and instead found evidence to support it. That sounds like the poster child for how science works. Was praise forthcoming?
Stacy: In the late 1980s and into the 1990s, I set out to try to understand low surface brightness galaxies. These are diffuse systems of stars and gas that rotate like the familiar bright spirals, but whose stars are much more spread out. Why? How did these things come to be? Why were they different from brighter galaxies? How could we explain their properties? These were the problems I started out working on that inadvertently set me on a collision course with MOND.
I did not set out to prove or disprove either MOND or dark matter. I was not really even aware of MOND at that time. I had heard of it only on a couple of occasions, but I hadn’t paid any attention, and didn’t really know anything about it. Why would I bother? It was already well established that there had to be dark matter.
I worked to develop our understanding of low surface brightness galaxies in the context of dark matter. Their blue colors, low metallicities, high gas fractions, and overall diffuse nature could be explained if they had formed in dark matter halos that are themselves lower than average density: they occupy the low concentration side of the distribution of dark matter halos at a given mass. I found this interpretation quite satisfactory, so it gave me no cause to doubt dark matter to that point.
This picture made two genuine predictions that had yet to be tested. First, low surface brightness galaxies should be less strongly clustered than brighter galaxies. Second, having their mass spread over a larger area, they should shift off of the Tully-Fisher relation defined by denser galaxies. The first prediction came true, and for a period I was jubilant that we had made an important new contribution to our understanding of both galaxies and dark matter. The second prediction failed badly: low surface brightness galaxies adhere to the same Tully-Fisher relation that other galaxies follow.
I tried desperately to understand the failure of the second prediction in terms of dark matter. I tried what seemed like a thousand ways to explain this, but ultimately they were all tautological: I could only explain it if I assumed the answer from the start. The adherence of low surface brightness galaxies to the Tully-Fisher relation poses a serious fine-tuning problem: the distribution of dark matter must be adjusted to exactly counterbalance that of the visible matter so as not to leave any residuals. This makes no sense, and anyone who claims it does is not thinking clearly.
It was in this crisis of comprehension that I became aware that MOND predicted exactly what I was seeing. No fine-tuning was required. Low surface brightness galaxies followed the same Tully-Fisher relation as other galaxies because the modified force law stipulates that they must. It was only at this point (in the mid-’90s) that I started to take MOND seriously. If it had got this prediction right, what else did it predict?
I was still convinced that the right answer had to be dark matter. There was, after all, so much evidence for it. So this one prediction must be a fluke; surely it would fail the next test. That was not what happened: MOND passed test after test after test, successfully predicting observations both basic and detailed that dark matter theory got wrong or did not even address. It was only after this experience that I realized that what I thought was evidence for dark matter was really just evidence that something was wrong: the data cannot be explained with ordinary gravity without invisible mass. The data – and here I mean ALL the data – were mostly ambiguous: they did not clearly distinguish whether the problem was with mass we couldn’t see or with the underlying equations from which we inferred the need for dark matter.
So to get back to your original question, yes – this is how science should work. I hadn’t set out to test MOND, but I had inadvertently performed exactly the right experiment for that purpose. MOND had its predictions come true where the predictions of other theories did not: both my own theory and those of others who were working in the context of dark matter. We got it wrong while MOND got it right. That led me to change my mind: I had been wrong to be sure the answer had to be dark matter, and to be so quick to dismiss MOND. Admitting this was the most difficult struggle I ever faced in my career.
David: From the perspective of dark matter, how does one understand MOND’s success?
Stacy: One does not.
That the predictions of MOND should come true in a universe dominated by dark matter makes no sense.
Before I became aware of MOND, I spent lots of time trying to come up with dark matter-based explanations for what I was seeing. It didn’t work. Since then, I have continued to search for a viable explanation with dark matter. I have not been successful. Others have claimed such success, but whenever I look at their work, it always seems that what they assert to be a great success is just a specific elaboration of a model I had already considered and rejected as obviously unworkable. The difference boils down to Occam’s razor. If you give dark matter theory enough free parameters, it can be adjusted to “predict” pretty much anything. But the best we can hope to do with dark matter theory is to retroactively explain what MOND successfully predicted in advance. Why should we be impressed by that?
David: Does MOND fail in clusters?
Stacy: Yes and no: there are multiple tests in clusters. MOND passes some and flunks others – as does dark matter.
The most famous test is the baryon fraction. This should be one in MOND – all the mass is normal baryonic matter. With dark matter, it should be the cosmic ratio of normal to dark matter (about 1:5).
MOND fails this test: it explains most of the discrepancy in clusters, but not all of it. The dark matter picture does somewhat better here, as the baryon fraction is close to the cosmic expectation — at least for the richest clusters of galaxies. In smaller clusters and groups of galaxies, the normal matter content falls short of the cosmic value. So both theories suffer a “missing baryon” problem: MOND in rich clusters; dark matter in everything smaller.
Another test is the mass-temperature relation. Both theories predict a relation between the mass of a cluster and the temperature of the gas it contains, but they predict different slopes for this relation. MOND gets the slope right but the amplitude wrong, leading to the missing baryon problem above. Dark matter gets the amplitude right for the most massive clusters, but gets the slope wrong – which leads to it having a missing baryon problem for systems smaller than the largest clusters.
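To sketch where the different slopes come from, under the usual simplifying assumptions (isothermal gas in equilibrium; clusters of similar mean density in the conventional case; the deep MOND limit in the other): conventionally, equilibrium gives $kT \propto GM/R$, and similar mean densities give $R \propto M^{1/3}$, so

$$
kT \propto \frac{GM}{R} \propto M^{2/3} \quad \Rightarrow \quad M \propto T^{3/2},
$$

while in the deep MOND regime the equilibrium condition becomes $\sigma^4 \sim G a_0 M$ with $kT \propto \sigma^2$, so

$$
M \propto \sigma^4 \propto T^2.
$$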
There are other tests. Clusters continue to merge; the collision velocity of merging clusters is predicted to be higher in MOND than with dark matter. For example, the famous bullet cluster, which is often cited as a contradiction to MOND, has a collision speed that is practically impossible with dark matter: there just isn’t enough time for the two components of the bullet to accelerate up to the observed relative speed if they fall together under the influence of normal gravity and the required amount of dark mass. People have argued over the severity of this perplexing problem, but the high collision speed happens quite naturally in MOND as a consequence of its greater effective force of attraction. So, taken at face value, the bullet cluster both confirms and refutes both theories!
I could go on… one expects clusters to form earlier and become more massive in MOND than in dark matter. There are some indications that this is the case – the highest redshift clusters came as a surprise to conventional structure formation theory – but the relative numbers of clusters as a function of mass seem to agree well with current expectations with dark matter. So clusters are a mixed bag.
More generally, there is a widespread myth that MOND fits rotation curves, but gets nothing else right. This is what I expected to find when I started fact checking, but the opposite is true. MOND explains a huge variety of data well. The presumptive superiority of dark matter is just that – a presumption.
David: At a physics colloquium two decades ago, Vera Rubin described how theorists were willing and eager to explain her data to her. At an astronomy colloquium a few years later, you echoed that sentiment in relation to your data on velocity curves. One concludes that theorists are uniquely insightful and generous people. Is there anyone you would like to thank for putting you straight?
Stacy: So they perceive themselves to be.
MOND has made many successful a priori predictions. This is the gold standard of the scientific method. If there is another explanation for it, I’d like to know what it is.
As your question supposes, many theorists have offered such explanations. At most one of them can be correct. I have yet to hear a satisfactory explanation.
David: What are MOND people working on these days?
Stacy: Any problem that is interesting in extragalactic astronomy is interesting in the context of MOND. Outstanding questions include planes of satellite dwarf galaxies, clusters of galaxies, the formation of large scale structure, and the microwave background. MOND-specific topics include the precise value of the MOND acceleration constant, predicting the velocity dispersions of dwarf galaxies, and the search for the predicted external field effect, which is a unique signature of MOND.
The phrasing of this question raises a sociological issue. I don’t know what a “MOND person” is. Before now, I have only heard it used as a pejorative.
I am a scientist who has worked on many topics. MOND is just one of them. Does that make me a “MOND person”? I have also worked on dark matter, so am I also a “dark matter person”? Are these mutually exclusive?
I have attended conferences where I have heard people say ‘“MOND people” do this’ or ‘“MOND people” fail to do that.’ Never does the speaker of these words specify who they’re talking about: “MOND people” are a nameless Other. In all cases, I am more familiar with the people and the research they pretend to describe, but in no way do I recognize what they’re talking about. It is just a way of saying “Those People” are Bad.
There are many experts on dark matter in the world. I am one of them. There are rather fewer experts on MOND. I am also one of them. Every one of these “MOND people” is also an expert on dark matter. This situation is not reciprocated: many experts on dark matter are shockingly ignorant about MOND. I was once guilty of that myself, but realized that ignorance is not a sound basis for scientific judgement.
David: Are you tired of getting these types of questions?
Stacy: Yes and no.
No, in that these are interesting questions about fundamental science. That is always fun to talk about.
Yes, in that I find myself having the same arguments over and over again, usually with scientists who remain trapped in the misconceptions I suffered myself a quarter century ago, but whose minds are closed to ideas that threaten their sacred cows. If dark matter is a real, physical substance, then show me a piece already.
I have been busy teaching cosmology this semester. When I started on the faculty of the University of Maryland in 1998, there was no advanced course on the subject. This seemed like an obvious hole to fill, so I developed one. I remember with fond bemusement the senior faculty, many of them planetary scientists, sending Mike A’Hearn as a stately ambassador to politely inquire if cosmology had evolved beyond a dodgy subject and was now rigorous enough to be worthy of a 3 credit graduate course.
Back then, we used transparencies or wrote on the board. It was novel to have a course webpage. I still have those notes, and marvel at the breadth and depth of work performed by my younger self. Now that I’m teaching it for the first time in a decade, I find it challenging to keep up. Everything has to be adapted to an electronic format, and be delivered remotely during this damnable pandemic. It is a less satisfactory experience, and it has precluded posting much here.
Another thing I notice is that attitudes have evolved along with the subject. The baseline cosmology, LCDM, has not changed much. We’ve tilted the power spectrum and spiked it with extra baryons, but the basic picture is that which emerged from the application of classical observational cosmology – measurements of the Hubble constant, the mass density, the ages of the oldest stars, the abundances of the light elements, number counts of faint galaxies, and a wealth of other observational constraints built up over decades of effort. Here is an example of combining such constraints, an exercise I have students do every time I teach the course:
These things were known by the mid-90s. Nowadays, people seem to think Type Ia SN discovered Lambda, when really they were just icing on a cake that was already baked. The location of the first peak in the acoustic power spectrum of the microwave background was corroborative of the flat geometry required by the picture that had developed, but trailed the development of LCDM rather than informing its construction. But students entering the field now seem to have been given the impression that these were the only observations that mattered.
Worse, they seem to think these things are Known, as if there’s never been a time that we cosmologists have been sure about something only to find later that we had it quite wrong. This attitude is deleterious to the progress of science, as it precludes us from seeing important clues when they fail to conform to our preconceptions. To give one recent example, everyone seems to have decided that the EDGES observation of 21 cm absorption during the dark ages is wrong. The reason? Because it is impossible in LCDM. There are technical reasons why it might be wrong, but these are subsidiary to Attitude: we can’t believe it’s true, so we don’t. But that’s what makes a result important: something that makes us reexamine how we perceive the universe. If we’re unwilling to do that, we’re no longer doing science.