Some Outsider Perspective from Insiders

Avi Loeb has a nice recent post Recalculating Academia, in which he discusses some of the issues confronting modern academia. One of the reasons I haven’t written here for a couple of months is despondency over the same problems. If you’re here reading this, you’ll likely be interested in what he has to say.

I am not eager to write at length today, but I do want to amplify some of the examples he gives with my own experience. For example, he notes that there are

theoretical physicists who avoid the guillotine of empirical tests for half a century by dedicating their career to abstract conjectures, avoid the risk of being proven wrong while demonstrating mathematical virtuosity.

Avi Loeb

I recognize many kinds of theoretical physicists who fit this description. My first thought was string theory, which took off in the mid-80s when I was a grad student at Princeton, ground zero for that movement in the US. (The Russians indulged in this independently.) I remember a colloquium in which David Gross advocated the “theory of everything” with gratuitous religious fervor to a large audience of eager listeners quavering with an anticipation that had the texture of religious revelation. It was captivating and convincing, up until the point near the end when he noted that experimental tests were many orders of magnitude beyond any experiment conceivable at the time. That… wasn’t physics to me. If this was the path the field was going down, I wanted no part of it. This was one of many factors that precipitated my departure from the toxic sludge that was grad student life in the Princeton physics department.

I wish I could say I had been proven wrong. Instead, decades later, physics has nothing to show for its embrace of string theory. There have been some impressive developments in mathematics stemming from it. Mathematics, not physics. And yet, there persists a large community of theoretical physicists who wander endlessly in the barren and practically infinite parameter space of multidimensional string theory. Maybe there is something relevant to physical reality there, or maybe it hasn’t been found because there isn’t. At what point does one admit that the objective being sought just ain’t there? [Death. For many people, the answer seems to be never. They keep repeating the same fruitless endeavor until they die.]

We do have new physics, in the form of massive neutrinos and the dark matter problem and the apparent acceleration of the expansion rate of the universe. What we don’t have is the expected evidence for supersymmetry, the crazy-bold yet comparatively humble first step on the road to string theory. If they had got even this much right, we should have seen evidence for it at the LHC, for example in the decay of the aptly named BS meson. If supersymmetric particles existed, they should provide many options for the meson to decay into, a meson that otherwise has few options in the Standard Model of particle physics. This was a strong prediction of minimal supersymmetry, so much so that it was called the Golden Test of supersymmetry. After hearing this over and over in the ’80s and ’90s, I have not heard it again any time in this century. I’m not sure when the theorists stopped talking about this embarrassment, but I suspect it is long enough ago now that it will come as a surprise to younger scientists, even those who work in the field. Supersymmetry flunked the golden test, and it flunked it hard. Rather than abandon the theory (some did), we just stopped talking about it. There persists a large community of theorists who take supersymmetry for granted, and react with hostility if you question that Obvious Truth. They will tell you with condescension that only minimal supersymmetry is ruled out; there is an enormous parameter space still open for their imaginations to run wild, unbridled by experimental constraint. This is both true and pathetic.

Reading about the history of physics, I learned that there was a community of physicists who persisted believing in aether for decades after the Michelson-Morley experiment. After all, only some forms of aether were ruled out. This was true, at the time, but we don’t bother with that detail when teaching physics now. Instead, it gets streamlined to “aether was falsified by Michelson-Morley.” This is, in retrospect, true, and we don’t bother to mention those who pathetically kept after it.

The standard candidate for dark matter, the WIMP, is a supersymmetric particle. If supersymmetry is wrong, WIMPs don’t exist. And yet, there is a large community of particle physicists who persist in building ever bigger and better experiments designed to detect WIMPs. Funny enough, they haven’t detected anything. It was a good hypothesis, 38 years ago. Now it’s just a bad habit. The better ones tacitly acknowledge this, attributing their continuing efforts to the streetlight effect: you look where you can see.

Prof. Loeb offers another pertinent example:

When I ask graduating students at their thesis exam whether the cold dark matter paradigm will be proven wrong if their computer simulations will be in conflict with future data, they almost always say that any disagreement will indicate that they should add a missing ingredient to their theoretical model in order to “fix” the discrepancy.

Avi Loeb

This is indeed the attitude. So much so that no additional ingredient seems too absurd if it is what we need to save the phenomenon. Feedback is the obvious example in my own field, as that (or the synonyms “baryon physics” or “gastrophysics”) is invoked to explain away any and all discrepancies. It sounds simple, since feedback is a real effect that does happen, but this single word does a lot of complicated work under the hood. There are many distinct kinds of feedback: stellar winds, UV radiation from massive stars, supernovae when those stars explode, X-rays from compact sources like neutron stars, and relativistic jets from supermassive black holes at the centers of galaxies. These are the examples of feedback that I can think of off the top of my head; there are probably more. All of these things have perceptible, real-world effects on the relevant scales, with, for example, stars blowing apart the dust and gas of their stellar cocoons after they form. This very real process has bugger all to do with what feedback is invoked to do on galactic scales. Usually, supernovae are blamed by theorists for any and all problems in dwarf galaxies, while observers tell me that stellar winds do most of the work in disrupting star forming regions. Confronted with this apparent discrepancy, the usual answer is that it doesn’t matter how the energy is input into the interstellar medium, just that it is. Yet we can see profound differences between stellar winds and supernova explosions, so this does not inspire confidence in the predictive power of theories that generically invoke feedback to explain away problems that wouldn’t be there in a healthy theory.

This started a long time ago. I had already lost patience with this unscientific attitude to the point that I dubbed it the

Spergel Principle: “It is better to postdict than to predict.”

McGaugh 1998

This continues to go on and has now done so for so long that generations of students seem to think that this is how science is supposed to be done. If asked about hypothesis testing and whether a theory can be falsified, many theorists will first look mystified, then act put out. Why would you even ask that? (One does not question the paradigm.) The minority of better ones then rally to come up with some reason to justify that yes, what they’re talking about can be falsified, so it does qualify as physics. But those goalposts can always be moved.

A good example of moving goalposts is the cusp-core problem. When I first encountered this in the mid to late ’90s, I tried to figure a way out of it, but failed. So I consulted one of the very best theorists, Simon White. When I asked him what he thought would constitute a falsification of cold dark matter, he said cusps: “cusps have to be there” [in the center of a dark matter halo]. Flash forward to today, when nobody would accept that as a falsification of cold dark matter: it can be fixed by feedback. Which would be fine, if it were true, which isn’t really clear. At best it provides a post facto explanation for an unpredicted phenomenon without addressing the underlying root cause, that the baryon distribution is predictive of the dynamics.

This is like putting a band-aid on a Tyrannosaurus. It’s already dead and fossilized. And if it isn’t, well, you got bigger problems.

Another disease common to theory is avoidance. A problem is first ignored, then the data are blamed for showing the wrong thing, then they are explained in a way that may or may not be satisfactory. Either way, it is treated as something that had been expected all along.

In a parallel to this gaslighting, I’ve noticed that it has become fashionable of late to describe unsatisfactory explanations as “natural.” Saying that something can be explained naturally is a powerful argument in science. The traditional meaning is that ok, we hadn’t contemplated this phenomenon before it surprised us, but if we sit down and work it out, it makes sense. The “making sense” part means that an answer falls out of a theory easily when the right question is posed. If you need to run gazillions of supercomputer CPU hours of a simulation with a bunch of knobs for feedback to get something that sorta kinda approximates reality but not really, your result does not qualify as natural. It might be right – that’s a more involved adjudication – but it doesn’t qualify as natural, and the current fad to abuse this term does not inspire confidence that the results of such simulations might somehow be right. It just makes me suspect the theorists are fooling themselves.

I haven’t even talked about astroparticle physicists or those who engage in fantasies about the multiverse. I’ll just close by noting that Popper’s criterion for falsification was intended to distinguish between physics and metaphysics. That’s not the same as right or wrong, but physics is subject to experimental test while metaphysics is the stuff of late night bull sessions. The multiverse is manifestly metaphysical. Cool to think about, has lots of implications for philosophy and religion, but not physics. Even Gross has warned against treading down the garden path of the multiverse. (Tell me that you’re warning others not to make the same mistakes you made without admitting you made mistakes.)

There are a lot of scientists who would like to do away with Popper, or any requirement that physics be testable. These are inevitably the same people whose fancy turns to metascapes of mathematically beautiful if fruitless theories, and want to pass off their metaphysical ramblings as real physics. Don’t buy it.

Cosmic whack-a-mole

The fine-tuning problem encountered by dark matter models that I talked about last time is generic. The knee-jerk reaction of most workers seems to be “let’s build a more sophisticated model.” That’s reasonable – if there is any hope of recovery. The attitude is that dark matter has to be right so something has to work out. This fails to even contemplate the existential challenge that the fine-tuning problem imposes.

Perhaps I am wrong to be pessimistic, but my concern is well informed by years upon years trying to avoid this conclusion. Most of the claims I have seen to the contrary are just specialized versions of the generic models I had already built: they contain the same failings, but these go unrecognized because the presumption is that something has to work out, so people are often quick to declare “close enough!”

In my experience, fixing one thing in a model often breaks something else. It becomes a game of cosmic whack-a-mole. If you succeed in suppressing the scatter in one relation, it pops out somewhere else. A model that seems like it passes the test you built it to pass flunks as soon as you confront it with another test.

Let’s consider a few examples.


Squeezing the toothpaste tube

Our efforts to evade one fine-tuning problem often lead to another. This has been my general experience in many efforts to construct viable dark matter models. It is like squeezing a tube of toothpaste: every time we smooth out the problems in one part of the tube, we simply squeeze them into a different part. There are many published claims to solve this problem or that, but they frequently fail to acknowledge (or notice) that the purported solution to one problem creates another.

One example is provided by Courteau and Rix (1999). They invoke dark matter domination to explain the lack of residuals in the Tully-Fisher relation. In this limit, Mb/R ≪ MDM/R and the baryons leave no mark on the rotation curve. This can reconcile the model with the Tully-Fisher relation, but it makes a strong prediction. It is not just the flat rotation speed that is the same for galaxies of the same mass, but the entirety of the rotation curve, V(R) at all radii. The stars are just convenient tracers of the dark matter halo in this limit; the dynamics are entirely dominated by the dark matter. The hypothesized solution fixes the problem that is addressed, but creates another problem that is not addressed, in this case the observed variation in rotation curve shape.

The limit of complete dark matter domination is not consistent with the shapes of rotation curves. Galaxies of the same baryonic mass have the same flat outer velocity (Tully-Fisher), but the shapes of their rotation curves vary systematically with surface brightness (de Blok & McGaugh, 1996; Tully and Verheijen, 1997; McGaugh and de Blok, 1998a,b; Swaters et al., 2009, 2012; Lelli et al., 2013, 2016c). High surface brightness galaxies have steeply rising rotation curves while LSB galaxies have slowly rising rotation curves (Fig. 6). This systematic dependence of the inner rotation curve shape on the baryon distribution excludes the SH hypothesis in the limit of dark matter domination: the distribution of the baryons clearly has an impact on the dynamics.

Fig. 6. Rotation curve shapes and surface density. The left panel shows the rotation curves of two galaxies, one HSB (NGC 2403, open circles) and one LSB (UGC 128, filled circles) (de Blok & McGaugh, 1996; Verheijen and de Blok, 1999; Kuzio de Naray et al., 2008). These galaxies have very nearly the same baryonic mass (~10^10 M☉), and asymptote to approximately the same flat rotation speed (~130 km s^-1). Consequently, they are indistinguishable in the Tully-Fisher plane (Fig. 4). However, the inner shapes of the rotation curves are readily distinguishable: the HSB galaxy has a steeply rising rotation curve while the LSB galaxy has a more gradual rise. This is a general phenomenon, as illustrated by the central density relation (right panel: Lelli et al., 2016c) where each point is one galaxy; NGC 2403 and UGC 128 are highlighted as open points. The central dynamical mass surface density (Σdyn) measured by the rate of rise of the rotation curve (Toomre, 1963) correlates with the central surface density of the stars (Σ0) measured by their surface brightness. The line shows 1:1 correspondence: no dark matter is required near the centers of HSB galaxies. The need for dark matter appears below 1000 M☉ pc^-2 and grows systematically greater to lower surface brightness. This is the origin of the statement that LSB galaxies are dark matter dominated.

A more recent example of this toothpaste tube problem for SH-type models is provided by the EAGLE simulations (Schaye et al., 2015). These are claimed (Ludlow et al., 2017) to explain one aspect of the observations, the radial acceleration relation (McGaugh et al., 2016), but fail to explain another, the central density relation (Lelli et al., 2016c) seen in Fig. 6. This was called the ‘diversity’ problem by Oman et al. (2015), who note that the rotation velocity at a specific, small radius (2 kpc) varies considerably from galaxy to galaxy observationally (Fig. 6), while simulated galaxies show essentially no variation, with only a small amount of scatter. This diversity problem is exactly the same problem that was pointed out before [compare Fig. 5 of Oman et al. (2015) to Fig. 14 of McGaugh and de Blok (1998a)].

There is no single, universally accepted standard galaxy formation model, but a common touchstone is provided by Mo et al. (1998). Their base model has a constant ratio of luminous to dark mass md [their assumption (i)], which provides a reasonable description of the sizes of galaxies as a function of mass or rotation speed (Fig. 7). However, this model predicts the wrong slope (3 rather than 4) for the Tully-Fisher relation. This is easily remedied by making the luminous mass fraction proportional to the rotation speed (md ∝ Vf), which then provides an adequate fit to the Tully-Fisher relation. This has the undesirable effect of destroying the consistency of the size-mass relation. We can have one or the other, but not both.
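To see where those slopes come from, here is a minimal numerical check (a sketch, not the full Mo et al. model; the function name and velocity values are illustrative): halos scale roughly as Mhalo ∝ V^3, so a constant md gives a Tully-Fisher slope of 3, while md ∝ Vf steepens it to the observed 4.

```python
import math

# Schematic slope check: M_b = md * M_halo with M_halo ∝ V^3.
# With md constant the log-log Tully-Fisher slope is 3;
# with md ∝ V it becomes 4. (Velocities below are arbitrary.)

def logslope(md_exponent, v1=100.0, v2=200.0):
    """d log(M_b) / d log(V) for M_b ∝ V^md_exponent * V^3."""
    mb = lambda v: v ** md_exponent * v ** 3
    return (math.log(mb(v2)) - math.log(mb(v1))) / (math.log(v2) - math.log(v1))

slope_const = logslope(0)   # constant md  -> slope 3
slope_varied = logslope(1)  # md ∝ V       -> slope 4
```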

Fig. 7. Galaxy size (as measured by the exponential disk scale length, left) and mass (right) as a function of rotation velocity. The latter is the Baryonic Tully-Fisher relation; the data are the same as in Fig. 4. The solid lines are Mo et al. (1998) models with constant md (their equations 12 and 16). This is in reasonable agreement with the size-speed relation but not the BTFR. The latter may be fit by adopting a variable md ​∝ ​Vf (dashed lines), but this ruins agreement with the size-speed relation. This is typical of dark matter models in which fixing one thing breaks another.

This failure of the Mo et al. (1998) model provides another example of the toothpaste tube problem. By fixing one problem, we create another. The only way forward is to consider more complex models with additional degrees of freedom.

Feedback

It has become conventional to invoke ‘feedback’ to address the various problems that afflict galaxy formation theory (Bullock & Boylan-Kolchin, 2017; De Baerdemaker and Boyd, 2020). It goes by other monikers as well, variously being called ‘gastrophysics’ for gas phase astrophysics, or simply ‘baryonic physics’ for any process that might intervene between the relatively simple (and calculable) physics of collisionless cold dark matter and messy observational reality (which is entirely illuminated by the baryons). This proliferation of terminology obfuscates the boundaries of the subject and precludes a comprehensive discussion.

Feedback is not a single process, but rather a family of distinct processes. The common feature of different forms of feedback is the deposition of energy from compact sources into the surrounding gas of the interstellar medium. This can, at least in principle, heat gas and drive large-scale winds, either preventing gas from cooling and forming too many stars, or ejecting it from a galaxy outright. This in turn might affect the distribution of dark matter, though the effect is weak: one must move a lot of baryons for their gravity to impact the dark matter distribution.

There are many kinds of feedback, and many devils in the details. Massive, short-lived stars produce copious amounts of ultraviolet radiation that heats and ionizes the surrounding gas and erodes interstellar dust. These stars also produce strong winds through much of their short (~ 10 Myr) lives, and ultimately explode as Type II supernovae. These three mechanisms each act in a distinct way on different time scales. That’s just the feedback associated with massive stars; there are many other mechanisms (e.g., Type Ia supernovae are distinct from Type II supernovae, and Active Galactic Nuclei are a different beast entirely). The situation is extremely complicated. While the various forms of stellar feedback are readily apparent on the small scales of stars, it is far from obvious that they have the desired impact on the much larger scales of entire galaxies.

For any one kind of feedback, there can be many substantially different implementations in galaxy formation simulations. Independent numerical codes do not generally return compatible results for identical initial conditions (Scannapieco et al., 2012): there is no consensus on how feedback works. Among the many different computational implementations of feedback, at most one can be correct.

Most galaxy formation codes do not resolve the scale of single stars where stellar feedback occurs. They rely on some empirically calibrated, analytic approximation to model this ‘sub-grid physics’ — which is to say, they don’t simulate feedback at all. Rather, they simulate the accumulation of gas in one resolution element, then follow some prescription for what happens inside that unresolved box. This provides ample opportunity for disputes over the implementation and effects of feedback. For example, feedback is often cited as a way to address the cusp-core problem — or not, depending on the implementation (e.g., Benítez-Llambay et al., 2019; Bose et al., 2019; Di Cintio et al., 2014; Governato et al., 2012; Madau et al., 2014; Read et al., 2019). High resolution simulations (Bland-Hawthorn et al., 2015) indicate that the gas of the interstellar medium is less affected by feedback effects than assumed by typical sub-grid prescriptions: most of the energy is funneled through the lowest density gas — the path of least resistance — and is lost to the intergalactic medium without much impacting the galaxy in which it originates.
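To make concrete what a sub-grid prescription looks like, here is a deliberately crude toy. Every number in it (the star-formation efficiency, the supernova energy budget, the coupling fraction) is an assumed knob of the kind real codes calibrate empirically; nothing here reflects any particular simulation.

```python
# Toy sub-grid feedback prescription for one unresolved resolution element.
# All parameter values are illustrative assumptions, not taken from any code.

E_SN_PER_MSUN = 1.0e49  # erg of supernova energy per solar mass of stars formed (assumed)
COUPLING = 0.1          # fraction of that energy deposited in the cell's gas (a free knob)

def subgrid_step(gas_mass, sfr_efficiency=0.02):
    """Advance one cell: form stars from gas, return (remaining gas, feedback energy).

    gas_mass is in solar masses; the returned energy is in erg.
    """
    stars_formed = sfr_efficiency * gas_mass           # crude star-formation law
    feedback_energy = COUPLING * E_SN_PER_MSUN * stars_formed
    return gas_mass - stars_formed, feedback_energy

gas, energy = subgrid_step(1.0e6)  # a cell holding a million solar masses of gas
```

The point of the sketch is how much physics hides in those two constants: change either knob and the "feedback" delivered to the galaxy changes by orders of magnitude, with no stellar physics simulated at all.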

From the perspective of the philosophy of science, feedback is an auxiliary hypothesis invoked to patch up theories of galaxy formation. Indeed, since there are many distinct flavors of feedback that are invoked to carry out a variety of different tasks, feedback is really a suite of auxiliary hypotheses. This violates parsimony to an extreme and brutal degree.

This concern for parsimony is not specific to any particular feedback scheme; it is not just a matter of which feedback prescription is best. The entire approach is to invoke as many free parameters as necessary to solve any and all problems that might be encountered. There is little doubt that such models can be constructed to match the data, even data that bear little resemblance to the obvious predictions of the paradigm (McGaugh and de Blok, 1998a; Mo et al., 1998). So the concern is not whether ΛCDM galaxy formation models can explain the data; it is that they can’t not.


One could go on at much greater length about feedback and its impact on galaxy formation. This is pointless. It is a form of magical thinking to expect that the combined effects of numerous complicated feedback effects are going to always add up to looking like MOND in each and every galaxy. It is also the working presumption of an entire field of modern science.

Two Hypotheses

OK, basic review is over. Shit’s gonna get real. Here I give a short recounting of the primary reason I came to doubt the dark matter paradigm. This is entirely conventional – my concern about the viability of dark matter is a contradiction within its own context. It had nothing to do with MOND, which I was blissfully ignorant of when I ran head-long into this problem in 1994. Most of the community chooses to remain blissfully ignorant, which I understand: it’s way more comfortable. It is also why the field has remained mired in the ’90s, with all the apparent progress since then being nothing more than the perpetual reinvention of the same square wheel.


To make a completely generic point that does not depend on the specifics of dark matter halo profiles or the details of baryonic assembly, I discuss two basic hypotheses for the distribution of disk galaxy size at a given mass. These broad categories I label SH (Same Halo) and DD (Density begets Density) following McGaugh and de Blok (1998a). In both cases, galaxies of a given baryonic mass are assumed to reside in dark matter halos of a corresponding total mass. Hence, at a given halo mass, the baryonic mass is the same, and variations in galaxy size follow from one of two basic effects:

  • SH: variations in size follow from variations in the spin of the parent dark matter halo.
  • DD: variations in surface brightness follow from variations in the density of the dark matter halo.

Recall that at a given luminosity, size and surface brightness are not independent, so variation in one corresponds to variation in the other. Consequently, we have two distinct ideas for why galaxies of the same mass vary in size. In SH, the halo may have the same density profile ρ(r), and it is only variations in angular momentum that dictate variations in the disk size. In DD, variations in the surface brightness of the luminous disk are reflections of variations in the density profile ρ(r) of the dark matter halo. In principle, one could have a combination of both effects, but we will keep them separate for this discussion, and note that mixing them defeats the virtues of each without curing their ills.

The SH hypothesis traces back to at least Fall and Efstathiou (1980). The notion is simple: variations in the size of disks correspond to variations in the angular momentum of their host dark matter halos. The mass destined to become a dark matter halo initially expands with the rest of the universe, reaching some maximum radius before collapsing to form a gravitationally bound object. At the point of maximum expansion, the nascent dark matter halos torque one another, inducing a small but non-zero net spin in each, quantified by the dimensionless spin parameter λ (Peebles, 1969). One then imagines that as a disk forms within a dark matter halo, it collapses until it is centrifugally supported: λ → 1 from some initially small value (typically λ ​≈ ​0.05, Barnes & Efstathiou, 1987, with some modest distribution about this median value). The spin parameter thus determines the collapse factor and the extent of the disk: low spin halos harbor compact, high surface brightness disks while high spin halos produce extended, low surface brightness disks.

The distribution of primordial spins is fairly narrow, and does not correlate with environment (Barnes & Efstathiou, 1987). The narrow distribution was invoked as an explanation for Freeman’s Law: the small variation in spins from halo to halo resulted in a narrow distribution of disk central surface brightness (van der Kruit, 1987). This association, while apparently natural, proved to be incorrect: when one goes through the mathematics to transform spin into scale length, even a narrow distribution of initial spins predicts a broad distribution in surface brightness (Dalcanton, Spergel, & Summers, 1997; McGaugh and de Blok, 1998a). Indeed, it predicts too broad a distribution: to prevent the formation of galaxies much higher in surface brightness than observed, one must invoke a stability criterion (Dalcanton, Spergel, & Summers, 1997; McGaugh and de Blok, 1998a) that precludes the existence of very high surface brightness disks. While it is physically quite reasonable that such a criterion should exist (Ostriker and Peebles, 1973), the observed surface density threshold does not emerge naturally, and must be inserted by hand. It is an auxiliary hypothesis invoked to preserve SH. Once done, size variations and the trend of average size with mass work out in reasonable quantitative detail (e.g., Mo et al., 1998).
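The doubling of the scatter can be seen in a few lines. Assuming the Mo et al. (1998)-style scaling Rd ≈ (λ/√2) Rhalo for disks of fixed mass in halos of fixed size, and a lognormal spin distribution with median λ ≈ 0.05 and scatter ~0.5 in ln λ (illustrative values in the spirit of Barnes & Efstathiou 1987), the central surface density of an exponential disk goes as Σ0 ∝ λ^-2, so a narrow spin distribution maps onto a broad surface density distribution:

```python
import math
import random

random.seed(42)

# Sketch under SH assumptions (disk mass and halo size fixed; all numbers
# illustrative): disk scale length R_d = (lambda / sqrt(2)) * R_halo, so the
# central surface density Sigma_0 = M_d / (2 pi R_d^2) scales as lambda^-2.

def central_surface_density(lam, m_disk=5.0e10, r_halo=200.0):
    """Sigma_0 in Msun/kpc^2 for an exponential disk set by halo spin lam."""
    r_d = lam / math.sqrt(2.0) * r_halo
    return m_disk / (2.0 * math.pi * r_d ** 2)

# Lognormal spins: median 0.05, sigma = 0.5 in ln(lambda).
lams = [random.lognormvariate(math.log(0.05), 0.5) for _ in range(100000)]
logs = [math.log(central_surface_density(l)) for l in lams]

mean = sum(logs) / len(logs)
std = (sum((x - mean) ** 2 for x in logs) / len(logs)) ** 0.5
# std comes out near 1.0: Sigma_0 ∝ lambda^-2 doubles the log-scatter,
# i.e. a "narrow" spin distribution still predicts a broad range of
# surface brightness, contrary to Freeman's Law.
```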

Angular momentum conservation must hold for an isolated galaxy, but the assumption made in SH is stronger: baryons conserve their share of the angular momentum independently of the dark matter. It is considered a virtue that this simple assumption leads to disk sizes that are about right. However, this assumption is not well justified. Baryons and dark matter are free to exchange angular momentum with each other, and are seen to do so in simulations that track both components (e.g., Book et al., 2011; Combes, 2013; Klypin et al., 2002). There is no guarantee that this exchange is equitable, and in general it is not: as baryons collapse to form a small galaxy within a large dark matter halo, they tend to lose angular momentum to the dark matter. This is a one-way street that runs in the wrong direction, and the final destination is uncomfortably invisible: most of the angular momentum ends up sequestered in the unobservable dark matter. Worse still, if we impose rigorous angular momentum conservation among the baryons, the result is a disk with a completely unrealistic surface density profile (van den Bosch, 2001a). It then becomes necessary to pick and choose which baryons manage to assemble into the disk and which are expelled or otherwise excluded, thereby solving one problem by creating another.

Early work on LSB disk galaxies led to a rather different picture. Compared to the previously known population of HSB galaxies around which our theories had been built, the LSB galaxy population has a younger mean stellar age (de Blok & van der Hulst, 1998; McGaugh and Bothun, 1994), a lower content of heavy elements (McGaugh, 1994), and a systematically higher gas fraction (McGaugh and de Blok, 1997; Schombert et al., 1997). These properties suggested that LSB galaxies evolve more gradually than their higher surface brightness brethren: they convert their gas into stars over a much longer timescale (McGaugh et al., 2017). The obvious culprit for this difference is surface density: lower surface brightness galaxies have less gravity, hence less ability to gather their diffuse interstellar medium into dense clumps that could form stars (Gerritsen and de Blok, 1999; Mihos et al., 1999). It seemed reasonable to ascribe the low surface density of the baryons to a correspondingly low density of their parent dark matter halos.

One way to think about a region in the early universe that will eventually collapse to form a galaxy is as a so-called top-hat over-density. The mass density Ωm → 1 ​at early times, irrespective of its current value, so a spherical region (the top-hat) that is somewhat over-dense early on may locally exceed the critical density. We may then consider this finite region as its own little closed universe, and follow its evolution with the Friedmann equations with Ω ​> ​1. The top-hat will initially expand along with the rest of the universe, but will eventually reach a maximum radius and recollapse. When that happens depends on the density. The greater the over-density, the sooner the top-hat will recollapse. Conversely, a lesser over-density will take longer to reach maximum expansion before recollapsing.
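The top-hat picture can be made quantitative with the parametric solution of the closed Friedmann equations, a = A(1 - cos θ), t = B(θ - sin θ): collapse corresponds to θ = 2π, giving t_coll = π Ωi / [Hi (Ωi - 1)^(3/2)] in terms of the initial density parameter and Hubble rate. A quick sketch (the over-density values below are arbitrary illustrations):

```python
import math

# Spherical top-hat collapse time from the closed-universe parametric
# solution: t_coll = pi * Omega_i / (H_i * (Omega_i - 1)**1.5).
# Times come out in units of the initial Hubble time 1/H_i.

def collapse_time(omega_i, h_i=1.0):
    """Collapse time of a top-hat with initial density parameter omega_i > 1."""
    return math.pi * omega_i / (h_i * (omega_i - 1.0) ** 1.5)

# The denser top-hat collapses sooner; the lesser over-density takes
# longer -- the basis for associating LSB galaxies with late collapse.
t_dense = collapse_time(1.10)    # 10% over-critical (illustrative)
t_diffuse = collapse_time(1.05)  # 5% over-critical (illustrative)
```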

Everything about LSB galaxies suggested that they were lower density, late-forming systems. It therefore seemed quite natural to imagine a distribution of over-densities and corresponding collapse times for top-hats of similar mass, and to associate LSB galaxies with the lesser over-densities (Dekel and Silk, 1986; McGaugh, 1992). More recently, some essential aspects of this idea have been revived under the moniker of “assembly bias” (e.g. Zehavi et al., 2018).

The work that informed the DD hypothesis was based largely on photometric and spectroscopic observations of LSB galaxies: their size and surface brightness, color, chemical abundance, and gas content. DD made two obvious predictions that had not yet been tested at that juncture. First, late-forming halos should reside preferentially in low density environments. This is a generic consequence of Gaussian initial conditions: big peaks defined on small (e.g., galaxy) scales are more likely to be found in big peaks defined on large (e.g., cluster) scales, and vice-versa. Second, the density of the dark matter halo of an LSB galaxy should be lower than that of an equal mass halo containing an HSB galaxy. This predicts a clear signature in their rotation speeds, which should be lower for lower density.

The prediction for the spatial distribution of LSB galaxies was tested by Bothun et al. (1993) and Mo et al. (1994). The test showed the expected effect: LSB galaxies were less strongly clustered than HSB galaxies. They are clustered: both galaxy populations follow the same large scale structure, but HSB galaxies adhere more strongly to it. In terms of the correlation function, the LSB sample available at the time had about half the amplitude r0 as comparison HSB samples (Mo et al., 1994). The effect was even more pronounced on the smallest scales (<2 Mpc: Bothun et al., 1993), leading Mo et al. (1994) to construct a model that successfully explained both small and large scale aspects of the spatial distribution of LSB galaxies simply by associating them with dark matter halos that lacked close interactions with other halos. This was strong corroboration of the DD hypothesis.

One way to test the prediction of DD that LSB galaxies should rotate more slowly than HSB galaxies was to use the Tully-Fisher relation (Tully and Fisher, 1977) as a point of reference. Originally identified as an empirical relation between optical luminosity and the observed line-width of single-dish 21 cm observations, more fundamentally it turns out to be a relation between the baryonic mass of a galaxy (stars plus gas) and its flat rotation speed: the Baryonic Tully-Fisher relation (BTFR: McGaugh et al., 2000). This relation is a simple power law of the form

Mb = A Vf⁴ (equation 1)

with A ≈ 50 M☉ km⁻⁴ s⁴ (McGaugh, 2005).
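As a quick numerical illustration of equation (1), here is a minimal sketch; the example velocities are round numbers of my choosing, not SPARC measurements.

```python
# Equation (1): Mb = A * Vf^4, with A ≈ 50 Msun km^-4 s^4 (McGaugh 2005).
A = 50.0  # Msun km^-4 s^4

def baryonic_mass(v_flat_kms):
    """Baryonic mass (in Msun) predicted by the BTFR for a flat rotation speed."""
    return A * v_flat_kms ** 4

# Round illustrative velocities, spanning dwarfs to bright spirals:
for v in (50.0, 100.0, 200.0):
    print(f"Vf = {v:5.0f} km/s  ->  Mb = {baryonic_mass(v):.1e} Msun")
```

Note how steep the relation is: a factor of four in rotation speed spans more than two decades in baryonic mass.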

Aaronson et al. (1979) provided a straightforward interpretation for a relation of this form. A test particle orbiting a mass M at a distance R will have a circular speed V

V² = GM/R (equation 2)

where G is Newton’s constant. If we square this, a relation like the Tully-Fisher relation follows:

V⁴ = (GM/R)² ∝ MΣ (equation 3)

where we have introduced the surface mass density Σ = M/R². The Tully-Fisher relation M ∝ V⁴ is recovered if Σ is constant, exactly as expected from Freeman’s Law (Freeman, 1970).

LSB galaxies, by definition, have central surface brightnesses (and corresponding stellar surface densities Σ0) that are less than the Freeman value. Consequently, DD predicts, through equation (3), that LSB galaxies should shift systematically off the Tully-Fisher relation: lower Σ means lower velocity. The predicted effect is not subtle² (Fig. 4). For the range of surface brightness that had become available, the predicted shift should have stood out like the proverbial sore thumb. It did not (Hoffman et al., 1996; McGaugh and de Blok, 1998a; Sprayberry et al., 1995; Zwaan et al., 1995). This had an immediate impact on galaxy formation theory: compare Dalcanton et al. (1995, who predict a shift in Tully-Fisher with surface brightness) with Dalcanton et al. (1997b, who do not).

Fig. 4. The Baryonic Tully-Fisher relation and residuals. The top panel shows the flat rotation velocity of galaxies in the SPARC database (Lelli et al., 2016a) as a function of the baryonic mass (stars plus gas). The sample is restricted to those objects for which both quantities are measured to better than 20% accuracy. The bottom panel shows velocity residuals around the solid line in the top panel as a function of the central surface density of the stellar disks. Variations in the stellar surface density predict variations in velocity along the dashed line. These would translate to shifts illustrated by the dotted lines in the top panel, with each dotted line representing a shift of a factor of ten in surface density. The predicted dependence on surface density is not observed (Courteau & Rix, 1999; McGaugh and de Blok, 1998a; Sprayberry et al., 1995; Zwaan et al., 1995).
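The size of the predicted shift follows directly from equation (3): at fixed baryonic mass, V ∝ Σ^(1/4). A short sketch of the arithmetic (my own illustration, not code from the paper):

```python
import math

# From equation (3), V^4 ∝ M·Σ, so at fixed baryonic mass V ∝ Σ^(1/4):
# a galaxy a factor f lower in surface density should rotate f^(1/4) slower.
def velocity_shift(sigma_ratio):
    """Predicted ratio V_LSB/V_HSB for a given surface-density ratio."""
    return sigma_ratio ** 0.25

# A factor of ten lower surface density predicts a rotation speed lower
# by 10^(1/4) ≈ 1.78, i.e. a 0.25 dex shift at fixed mass:
print(f"V ratio: {velocity_shift(0.1):.3f}")
print(f"shift in dex: {math.log10(velocity_shift(0.1)):.3f}")
```

Each factor of ten in surface density thus corresponds to a 0.25 dex shift in velocity at fixed mass, which is what makes the non-detection so stark.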

Instead of the systematic variation of velocity with surface brightness expected at fixed mass, there was none. Indeed, there is no hint of a second parameter dependence. The relation is incredibly tight by the standards of extragalactic astronomy (Lelli et al., 2016b): baryonic mass and the flat rotation speed are practically interchangeable.

The above derivation is overly simplistic. The radius at which we should make a measurement is ill-defined, and the surface density is dynamical: it includes both stars and dark matter. Moreover, galaxies are not spherical cows: one needs to solve the Poisson equation for the observed disk geometry of LTGs, and account for the varying radial contributions of luminous and dark matter. While this can be made to sound intimidating, the numerical computations are straightforward and rigorous (e.g., Begeman et al., 1991; Casertano & Shostak, 1980; Lelli et al., 2016a). It still boils down to the same sort of relation (modulo geometrical factors of order unity), but with two mass distributions: one for the baryons Mb(R), and one for the dark matter MDM(R). Though the dark matter is more massive, it is also more extended. Consequently, both components can contribute non-negligibly to the rotation over the observed range of radii:

V²(R) = GM/R = G(Mb/R + MDM/R), (equation 4)

where for clarity we have omitted* geometrical factors. The only absolute requirement is that the baryonic contribution should begin to decline once the majority of baryonic mass is encompassed. It is when rotation curves persist in remaining flat past this point that we infer the need for dark matter.
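The balancing act in equation (4) can be sketched numerically. The profiles and parameters below (an exponential disk treated spherically, and a pseudo-isothermal halo with an invented normalization) are my illustrative assumptions, not fits to any real galaxy; the point is only that a declining baryonic contribution and a rising halo contribution can sum to a roughly flat total.

```python
import math

G = 4.301e-6  # Newton's constant in kpc (km/s)^2 / Msun

def m_disk(r, m_b=5e10, r_d=3.0):
    """Enclosed baryonic mass of an exponential disk, treated spherically
    (i.e., ignoring the geometrical factors of order unity noted in the text)."""
    x = r / r_d
    return m_b * (1.0 - (1.0 + x) * math.exp(-x))

def m_halo(r, rho0=1e7, r_c=5.0):
    """Enclosed mass of a pseudo-isothermal halo (an assumed, illustrative profile):
    rho(r) = rho0 / (1 + (r/r_c)^2), integrated analytically."""
    return 4.0 * math.pi * rho0 * r_c ** 2 * (r - r_c * math.atan(r / r_c))

for r in (2.0, 5.0, 10.0, 20.0, 30.0):
    v_b = math.sqrt(G * m_disk(r) / r)    # baryonic term, declines at large R
    v_dm = math.sqrt(G * m_halo(r) / r)   # halo term, rises at large R
    v_tot = math.sqrt(v_b ** 2 + v_dm ** 2)
    print(f"R={r:5.1f} kpc  Vb={v_b:6.1f}  Vdm={v_dm:6.1f}  Vtot={v_tot:6.1f} km/s")
```

Running this shows the Keplerian decline of the baryonic term being offset by the growing halo term, which is exactly the coincidence the disk-halo conspiracy refers to.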

A recurrent problem in testing galaxy formation theories is that they seldom make ironclad predictions; I attempt a brief summary in Table 1. SH represents a broad class of theories with many variants. By construction, the dark matter halos of galaxies of similar stellar mass are similar. If we associate the flat rotation velocity with halo mass, then galaxies of the same mass have the same circular velocity, and the problem posed by Tully-Fisher is automatically satisfied.

Table 1. Predictions of DD and SH for LSB galaxies.

Observation                DD   SH
Evolutionary rate          +    +
Size distribution          +    +
Clustering                 +    X
Tully-Fisher relation      X    ?
Central density relation   +    X

While it is common to associate the flat rotation speed with the dark matter halo, this is a half-truth: the observed velocity is a combination of baryonic and dark components (eq. (4)). It is thus a rather curious coincidence that rotation curves are as flat as they are: the Keplerian decline of the baryonic contribution must be precisely balanced by an increasing contribution from the dark matter halo. This fine-tuning problem was dubbed the “disk-halo conspiracy” (Bahcall & Casertano, 1985; van Albada & Sancisi, 1986). The solution offered for the disk-halo conspiracy was that the formation of the baryonic disk has an effect on the distribution of the dark matter. As the disk settles, the dark matter halo responds through a process commonly referred to as adiabatic compression that brings the peak velocities of disk and dark components into alignment (Blumenthal et al., 1986). Some rearrangement of the dark matter halo in response to the change of the gravitational potential caused by the settling of the disk is inevitable, so this seemed a plausible explanation.

The observation that LSB galaxies obey the Tully-Fisher relation greatly compounds the fine-tuning (McGaugh and de Blok, 1998a; Zwaan et al., 1995). The amount of adiabatic compression depends on the surface density of stars (Sellwood and McGaugh, 2005b): HSB galaxies experience greater compression than LSB galaxies. This should enhance the predicted shift between the two in Tully-Fisher. Instead, the amplitude of the flat rotation speed remains unperturbed.

The generic failings of dark matter models were discussed at length by McGaugh and de Blok (1998a). The same problems have been encountered by others. For example, Fig. 5 shows model galaxies formed in dark matter halos with identical total mass and density profile but with different spin parameters (van den Bosch, 2001b). Variations in the assembly and cooling history were also considered, but these make little difference and are not relevant here. The point is that smaller (larger) spin parameters lead to more (less) compact disks that contribute more (less) to the total rotation, exactly as anticipated from variations in the term Mb/R in equation (4). The nominal variation is readily detectable, and stands out prominently in the Tully-Fisher diagram (Fig. 5). This is exactly the same fine-tuning problem that was pointed out by Zwaan et al. (1995) and McGaugh and de Blok (1998a).

What I describe as a fine-tuning problem is not portrayed as such by van den Bosch (2000) and van den Bosch and Dalcanton (2000), who argued that the data could be readily accommodated in the dark matter picture. The difference is between accommodating the data once known, and predicting it a priori. The dark matter picture is extraordinarily flexible: one is free to distribute the dark matter as needed to fit any data that evinces a non-negative mass discrepancy, even data that are wrong (de Blok & McGaugh, 1998). It is another matter entirely to construct a realistic model a priori; in my experience it is quite easy to construct models with plausible-seeming parameters that bear little resemblance to real galaxies (e.g., the low-spin case in Fig. 5). A similar conundrum is encountered when constructing models that can explain the long tidal tails observed in merging and interacting galaxies: models with realistic rotation curves do not produce realistic tidal tails, and vice-versa (Dubinski et al., 1999). The data occupy a very narrow sliver of the enormous volume of parameter space available to dark matter models, a situation that seems rather contrived.

Fig. 5. Model galaxy rotation curves and the Tully-Fisher relation. Rotation curves (left panel) for model galaxies of the same mass but different spin parameters λ from van den Bosch (2001b, see his Fig. 3). Models with lower spin have more compact stellar disks that contribute more to the rotation curve (V2 ​= ​GM/R; R being smaller for the same M). These models are shown as square points on the Baryonic Tully-Fisher relation (right) along with data for real galaxies (grey circles: Lelli et al., 2016b) and a fit thereto (dashed line). Differences in the cooling history result in modest variation in the baryonic mass at fixed halo mass as reflected in the vertical scatter of the models. This is within the scatter of the data, but variation due to the spin parameter is not.

Both DD and SH predict residuals from Tully-Fisher that are not observed. I consider this to be an unrecoverable failure for DD, which was my hypothesis (McGaugh, 1992), so I worked hard to salvage it. I could not. For SH, Tully-Fisher might be recovered in the limit of dark matter domination, which requires further consideration.


I will save the further consideration for a future post, as that can take infinite words (there are literally thousands of ApJ papers on the subject). The real problem that rotation curve data pose generically for the dark matter interpretation is the fine-tuning required between baryonic and dark matter components – the balancing act explicit in the equations above. This, by itself, constitutes a practical falsification of the dark matter paradigm.

Without going into interesting but ultimately meaningless details (maybe next time), the only way to avoid this conclusion is to choose to be unconcerned with fine-tuning. If you choose to say fine-tuning isn’t a problem, then it isn’t a problem. Worse, many scientists don’t seem to understand that they’ve even made this choice: it is baked into their assumptions. There is no risk of questioning those assumptions if one never stops to think about them, much less worry that there might be something wrong with them.

Much of the field seems to have sunk into a form of scientific nihilism. The attitude I frequently encounter when I raise this issue boils down to “Don’t care! Everything will magically work out! LA LA LA!”


*Strictly speaking, eq. (4) only holds for spherical mass distributions. I make this simplification here to emphasize the fact that both mass and radius matter. This essential scaling persists for any geometry: the argument holds in complete generality.

Common ground


In order to agree on an interpretation, we first have to agree on the facts. Even when we agree on the facts, the available set of facts may admit multiple interpretations. This was an obvious and widely accepted truth early in my career*. Since then, the field has decayed into a haphazardly conceived set of unquestionable absolutes, based on a large but well-curated subset of facts that gratuitously ignores whatever facts are inconvenient.

Sadly, we seem to have entered a post-truth period in which facts are drowned out by propaganda. I went into science to get away from people who place faith before facts, and comfortable fictions ahead of uncomfortable truths. Unfortunately, a lot of those people seem to have followed me here. This manifests as people who quote what are essentially pro-dark matter talking points at me like I don’t understand LCDM, when all it really does is reveal that they are posers** who picked up on some common myths about the field without actually reading the relevant journal articles.

Indeed, a recent experience taught me a new psychology term: identity protective cognition. Identity protective cognition is the tendency for people in a group to selectively credit or dismiss evidence in patterns that reflect the beliefs that predominate in their group. When it comes to dark matter, the group happens to be a scientific one, but the psychology is the same: I’ve seen people twist themselves into logical knots to protect their belief in dark matter from being subject to critical examination. They do it without even recognizing that this is what they’re doing. I guess this is a human foible we cannot escape.

I’ve addressed these issues before, but here I’m going to start a series of posts on what I think some of the essential but underappreciated facts are. This is based on a talk that I gave at a conference on the philosophy of science in 2019, back when we had conferences, and published in Studies in History and Philosophy of Science. I paid the exorbitant open access fee (the journal changed its name – and publication policy – during the publication process), so you can read the whole thing all at once if you are eager. I’ve already written it to be accessible, so mostly I’m going to post it here in what I hope are digestible chunks, and may add further commentary if it seems appropriate.

Cosmic context

Cosmology is the science of the origin and evolution of the universe: the biggest of big pictures. The modern picture of the hot big bang is underpinned by three empirical pillars: an expanding universe (Hubble expansion), Big Bang Nucleosynthesis (BBN: the formation of the light elements through nuclear reactions in the early universe), and the relic radiation field (the Cosmic Microwave Background: CMB) (Harrison, 2000; Peebles, 1993). The discussion here will take this framework for granted.

The three empirical pillars fit beautifully with General Relativity (GR). Making the simplifying assumptions of homogeneity and isotropy, Einstein’s equations can be applied to treat the entire universe as a dynamical entity. As such, it is compelled either to expand or contract. Running the observed expansion backwards in time, one necessarily comes to a hot, dense, early phase. This naturally explains the CMB, which marks the transition from an opaque plasma to a transparent gas (Sunyaev and Zeldovich, 1980; Weiss, 1980). The abundances of the light elements can be explained in detail with BBN provided the universe expands in the first few minutes as predicted by GR when radiation dominates the mass-energy budget of the universe (Boesgaard & Steigman, 1985).

The marvelous consistency of these early universe results with the expectations of GR builds confidence that the hot big bang is the correct general picture for cosmology. It also builds overconfidence that GR is completely sufficient to describe the universe. Maintaining consistency with modern cosmological data is only possible with the addition of two auxiliary hypotheses: dark matter and dark energy. These invisible entities are an absolute requirement of the current version of the most-favored cosmological model, ΛCDM. The very name of this model is born of these dark materials: Λ is Einstein’s cosmological constant, of which ‘dark energy’ is a generalization, and CDM is cold dark matter.

Dark energy does not enter much into the subject of galaxy formation. It mainly helps to set the background cosmology in which galaxies form, and plays some role in the timing of structure formation. This discussion will not delve into such details, and I note only that it was surprising and profoundly disturbing that we had to reintroduce (e.g., Efstathiou et al., 1990; Ostriker and Steinhardt, 1995; Perlmutter et al., 1999; Riess et al., 1998; Yoshii and Peterson, 1995) Einstein’s so-called ‘greatest blunder.’

Dark matter, on the other hand, plays an intimate and essential role in galaxy formation. The term ‘dark matter’ is dangerously crude, as it can reasonably be used to mean anything that is not seen. In the cosmic context, there are at least two forms of unseen mass: normal matter that happens not to glow in a way that is easily seen — not all ordinary material need be associated with visible stars — and non-baryonic cold dark matter. It is the latter form of unseen mass that is thought to dominate the mass budget of the universe and play a critical role in galaxy formation.

Cold Dark Matter

Cold dark matter is some form of slow moving, non-relativistic (‘cold’) particulate mass that is not composed of normal matter (baryons). Baryons are the family of particles that include protons and neutrons. As such, they compose the bulk of the mass of normal matter, and it has become conventional to use this term to distinguish between normal, baryonic matter and the non-baryonic dark matter.

The distinction between baryonic and non-baryonic dark matter is no small thing. Non-baryonic dark matter must be a new particle that resides in a new ‘dark sector’ that is completely distinct from the usual stable of elementary particles. We do not just need some new particle, we need one (or many) that reside in some sector beyond the framework of the stubbornly successful Standard Model of particle physics. Whatever the solution to the mass discrepancy problem turns out to be, it requires new physics.

The cosmic dark matter must be non-baryonic for two basic reasons. First, the mass density of the universe measured gravitationally (Ωm ​≈ ​0.3, e.g., Faber and Gallagher, 1979; Davis et al., 1980, 1992) clearly exceeds the mass density in baryons as constrained by BBN (Ωb ​≈ ​0.05, e.g., Walker et al., 1991). There is something gravitating that is not ordinary matter: Ωm ​> ​Ωb.

The second reason follows from the absence of large fluctuations in the CMB (Peebles and Yu, 1970; Silk, 1968; Sunyaev and Zeldovich, 1980). The CMB is extraordinarily uniform in temperature across the sky, varying by only ~ 1 part in 105 (Smoot et al., 1992). These small temperature variations correspond to variations in density. Gravity is an attractive force; it will make the rich grow richer. Small density excesses will tend to attract more mass, making them larger, attracting more mass, and leading to the formation of large scale structures, including galaxies. But gravity is also a weak force: this process takes a long time. In the long but finite age of the universe, gravity plus known baryonic matter does not suffice to go from the initially smooth, highly uniform state of the early universe to the highly clumpy, structured state of the local universe (Peebles, 1993). The solution is to boost the process with an additional component of mass — the cold dark matter — that gravitates without interacting with the photons, thus getting a head start on the growth of structure while not aggravating the amplitude of temperature fluctuations in the CMB.

Taken separately, one might argue away the need for dark matter. Taken together, these two distinct arguments convinced nearly everyone, including myself, of the absolute need for non-baryonic dark matter. Consequently, CDM became established as the leading paradigm during the 1980s (Peebles, 1984; Steigman and Turner, 1985). The paradigm has snowballed since that time, the common attitude among cosmologists being that CDM has to exist.

From an astronomical perspective, the CDM could be any slow-moving, massive object that does not interact with photons nor participate in BBN. The range of possibilities is at once limitless yet highly constrained. Neutrons would suffice if they were stable in vacuum, but they are not. Primordial black holes are a logical possibility, but if made of normal matter, they must somehow form in the first second after the Big Bang to not impair BBN. At this juncture, microlensing experiments have excluded most plausible mass ranges that primordial black holes could occupy (Mediavilla et al., 2017). It is easy to invent hypothetical dark matter candidates, but difficult for them to remain viable.

From a particle physics perspective, the favored candidate is a Weakly Interacting Massive Particle (WIMP: Peebles, 1984; Steigman and Turner, 1985). WIMPs are expected to be the lightest stable supersymmetric partner particle that resides in the hypothetical supersymmetric sector (Martin, 1998). The WIMP has been the odds-on favorite for so long that it is often used synonymously with the more generic term ‘dark matter.’ It is the hypothesized particle that launched a thousand experiments. Experimental searches for WIMPs have matured over the past several decades, making extraordinary progress in not detecting dark matter (Aprile et al., 2018). Virtually all of the parameter space in which WIMPs had been predicted to reside (Trotta et al., 2008) is now excluded. Worse, the existence of the supersymmetric sector itself, once seemingly a sure thing, remains entirely hypothetical, and appears at this juncture to be a beautiful idea that nature declined to implement.

In sum, we must have cold dark matter for both galaxies and cosmology, but we have as yet no clue to what it is.


* There is a trope that late in their careers, great scientists come to the opinion that everything worth discovering has been discovered, because they themselves already did everything worth doing. That is not a concern I have – I know we haven’t discovered all there is to discover. Yet I see no prospect for advancing our fundamental understanding simply because there aren’t enough of us pulling in the right direction. Most of the community is busy barking up the wrong tree, and refuses to be distracted from their focus on the invisible squirrel that isn’t there.

** Many of these people are the product of the toxic culture that Simon White warned us about. They wave the sausage of galaxy formation and feedback like a magic wand that excuses all faults while being proudly ignorant of how the sausage was made. Bitch, please. I was there when that sausage was made. I helped make the damn sausage. I know what went into it, and I recognize when it tastes wrong.

Are there credible deviations from the baryonic Tully-Fisher relation?


There is a rule of thumb in scientific publication that if a title poses a question, the answer is no.

It sucks being so far ahead of the field that I get to watch people repeat the mistakes I made (or almost made) and warned against long ago. There have been persistent claims of deviations of one sort or another from the Baryonic Tully-Fisher relation (BTFR). So far, these have all been obviously wrong, for reasons we’ve discussed before. It all boils down to data quality. The credibility of data is important, especially in astronomy.

Here is a plot of the BTFR for all the data I have ready at hand, both for gas rich galaxies and the SPARC sample:

Baryonic mass (stars plus gas) as a function of the rotation speed measured at the outermost detected radius.

A relation is clear in the plot above, but it’s a mess. There’s lots of scatter, especially at low mass. There is also a systematic tendency for low mass galaxies to fall to the left of the main relation, appearing to rotate too slowly for their mass.

There is no quality control in the plot above. I have thrown all the mud at the wall. Let’s now do some quality control. The plotted quantities are the baryonic mass and the flat rotation speed. We haven’t actually measured the flat rotation speed in all these cases. For some, we’ve simply taken the last measured point. This was an issue we explicitly pointed out in Stark et al (2009):

Fig. 1 from Stark et al (2009): Examples of rotation curves (Swaters et al. 2009) that do and do not satisfy the flatness criterion. The rotation curve of UGC 4173 (top) rises continuously and does not meet the flatness criterion. UGC 5721 (center) is an ideal case with clear flattening of the rotational velocity. UGC 4499 marginally satisfies the flatness criterion.

If we include a galaxy like UGC 4173, we expect it will be offset to the low velocity side because we haven’t measured the flat rotation speed. We’ve merely taken that last point and hoped it is close enough. Sometimes it is, depending on your tolerance for systematic errors. But the plain fact is that we haven’t measured the flat rotation speed in this case. We don’t even know if it has one; it is only empirical experience with other examples that leads us to expect it to flatten if we manage to observe further out.

For our purpose here, it is as if we hadn’t measured this galaxy at all. So let’s not pretend like we have, and restrict the plot to galaxies for which the flat velocity is measured:

The same as the first plot, restricted to galaxies for which the flat rotation speed has been measured.

The scatter in the BTFR decreases dramatically when we exclude the galaxies for which we haven’t measured the relevant quantities. This is a simple matter of data quality. We’re no longer pretending to have measured a quantity that we haven’t measured.

There are still some outliers as there are still things that can go wrong. Inclinations are a challenge for some galaxies, as are distance determinations. Remember that Tully-Fisher was first employed as a distance indicator. If we look at the plot above from that perspective, the outliers have obviously been assigned the wrong distance, and we would assign a new one by putting them on the relation. That, in a nutshell, is how astronomical distance indicators work.
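In code, the rescaling is one line: the inferred baryonic mass scales as the square of the assumed distance, while the rotation speed is distance-independent, so placing an outlier on the relation fixes its distance. The numbers below are invented for illustration.

```python
# Hedged sketch of Tully-Fisher as a distance indicator. The mass inferred
# from the observed flux scales as D^2; the rotation speed does not depend
# on distance. Moving a galaxy onto Mb = A * Vf^4 therefore rescales D.
A = 50.0  # Msun km^-4 s^4

def corrected_distance(d_assumed_mpc, v_flat_kms, m_b_obs):
    """Distance (Mpc) that places a galaxy on the BTFR, given the mass
    inferred at the assumed distance and its flat rotation speed."""
    m_b_predicted = A * v_flat_kms ** 4
    return d_assumed_mpc * (m_b_predicted / m_b_obs) ** 0.5

# A galaxy at an assumed 10 Mpc that appears a factor of 4 under-massive
# for its rotation speed is plausibly at twice the distance:
print(corrected_distance(10.0, 100.0, A * 100.0 ** 4 / 4.0))  # -> 20.0
```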

If we restrict the data to those with accurate measurements, we get

Same as the plot above, restricted to galaxies for which the quantities measured on both axes have been measured to an accuracy of 20% or better.

Now the outliers are gone. They were outliers because they had crappy data. This is completely unsurprising. Some astronomical data are always crappy. You plot crap against crap, you get crap. If, on the other hand, you look at the high quality data, you get a high quality correlation. Even then, you can never be sure that you’ve excluded all the crap, as there are often unknown unknowns – systematic errors you don’t know about and can’t control for.

We have done the exercise of varying the tolerance limits on data quality many times. We have shown that the scatter varies as expected with data quality. If we consider high quality data, we find a small scatter in the BTFR. If we consider low quality data, we get to plot more points, but the scatter goes up. You can see this by eye above. We can quantify this, and have. The amount of scatter varies as expected with the size of the uncertainties. Bigger errors, bigger scatter. Smaller errors, smaller scatter. This shouldn’t be hard to understand.
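The exercise is easy to mimic with a toy Monte Carlo (my own sketch, not the published analysis): put galaxies on a perfect BTFR, perturb both axes with Gaussian errors, and watch the measured scatter track the assumed errors.

```python
# Toy Monte Carlo: galaxies on a perfect relation log(Mb) = 4*log(Vf) + log(A),
# observed with Gaussian errors on both axes. The residual scatter about the
# true relation grows with the size of the errors.
import math
import random

random.seed(42)
LOG_A = math.log10(50.0)

def observed_scatter(err_logv, err_logm, n=10000):
    """RMS residual (dex) about the true relation for given per-axis errors."""
    resid = []
    for _ in range(n):
        logv = random.uniform(1.6, 2.4)        # Vf between ~40 and ~250 km/s
        logm_true = 4.0 * logv + LOG_A
        logv_obs = logv + random.gauss(0.0, err_logv)
        logm_obs = logm_true + random.gauss(0.0, err_logm)
        resid.append(logm_obs - (4.0 * logv_obs + LOG_A))
    mean = sum(resid) / n
    return math.sqrt(sum((r - mean) ** 2 for r in resid) / n)

# Bigger errors, bigger scatter:
for err in (0.02, 0.05, 0.10):
    print(f"errors of {err:.2f} dex -> scatter {observed_scatter(err, err):.3f} dex")
```

Because the relation is steep (slope 4), errors in velocity are amplified four-fold in the mass direction, so even modest velocity errors inflate the apparent scatter.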

So why do people – many of them good scientists – keep screwing this up?

There are several answers. One is that measuring the flat rotation speed is hard. We have only done it for a couple hundred galaxies. This seems like a tiny number in the era of the Sloan Digital Sky Survey, which enables any newbie to assemble a sample of tens of thousands of galaxies… with photometric data. It doesn’t provide any kinematic data. Measuring the stellar mass with the photometric data doesn’t do one bit of good for this problem if you don’t have the kinematic axis to plot against. Consequently, it doesn’t matter how big such a sample is.

You have zero data.

Other measurements often provide a proxy measurement that seems like it ought to be close enough to use. If not the flat rotation speed, maybe you have a line width or a maximum speed or V2.2 or the hybrid S0.5 or some other metric. That’s fine, so long as you recognize you’re plotting something different so should expect to get something different – not the BTFR. Again, we’ve shown that the flat rotation speed is the measure that minimizes the scatter; if you utilize some other measure you’re gonna get more scatter. That may be useful for some purposes, but it only tells you about what you measured. It doesn’t tell you anything about the scatter in the BTFR constructed with the flat rotation speed if you didn’t measure the flat rotation speed.

Another possibility is that there exist galaxies that fall off the BTFR that we haven’t observed yet. It is a big universe, after all. This is a known unknown: we know that we don’t know whether non-conforming galaxies exist. If the relation is indeed absolute, then we will never find any; but we can never know that they don’t exist, only that we haven’t yet found any credible examples.

I’ve addressed the possibility of nonconforming galaxies elsewhere, so all I’ll say here is that I have spent my entire career seeking out the extremes in galaxy properties. Many times I have specifically sought out galaxies that should deviate from the BTFR for some clear reason, only to be surprised when they fall bang on the BTFR. Over and over and over again. It makes me wonder how Vera Rubin felt when her observations kept turning up flat rotation curves. Shouldn’t happen, but it does – over and over and over again. So far, I haven’t found any credible deviations from the BTFR, nor have I seen credible cases provided by others – just repeated failures of quality control.

Finally, an underlying issue is often – not always, but often – an obsession with salvaging the dark matter paradigm. That’s hard to do if you acknowledge that the properties of the observed BTFR (its slope, its normalization, its lack of scale-length residuals, its negligible intrinsic scatter; indeed, the very quantities that define it) were anticipated and explicitly predicted by MOND and only MOND. It is easy to confirm the dark matter paradigm if you never acknowledge this to be a problem. Often, people redefine the terms of the issue in some manner that is more tractable from the perspective of dark matter. From that perspective, neither the “cold” baryonic mass nor the flat rotation speed has any special meaning, so why even consider them? That is the road to MONDness.

A script for every observational test


Science progresses through hypothesis testing. The primary mechanism for distinguishing between hypotheses is predictive power. The hypothesis that can predict new phenomena is “better.” This is especially true for surprising, a priori predictions: it matters more when the new phenomenon was not expected in the context of an existing paradigm.

I’ve seen this happen many times now. MOND has had many predictive successes. As a theory, it has been exposed to potential falsification, and passed many tests. These have often been in the form of phenomena that had not been anticipated in any other way, and were initially received as strange to the point of seeming impossible. It is exactly the situation envisioned in Putnam’s “no miracles” argument: it is unlikely to the point of absurdity that a wholly false theory should succeed in making so many predictions of such diversity and precision.

MOND has many doubters, which I can understand. What I don’t get is the ignorance I so often encounter among them. To me, the statement that MOND has had many unexpected predictions come true is a simple statement of experiential fact. I suspect it will be received by some as a falsehood. It shouldn’t be, so if you don’t know what I’m talking about, you should try reading the relevant literature. What papers about MOND have you actually read?

Ignorance is not a strong basis for making scientific judgements. Before I criticize something, I make sure I know what I’m talking about. That’s rarely true of the complaints I hear against MOND. There are legitimate ones, to be sure, but for the most part I hear assertions like

  • MOND is guaranteed to fit rotation curves.
  • It fits rotation curves but does nothing else.
  • It is just a fitting tool with no predictive power.

These are myths, plain and simple. They are easily debunked, and were long ago. Yet I hear them repeated often by people who think they know better, one as recently as last week. Serious people who expect to be taken seriously as scientists, and yet they repeat known falsehoods as if they were established fact. Is there a recycling bin of debunked myths that gets passed around? I guess it is easy to believe a baseless rumor when it conforms to your confirmation bias: no need for fact-checking!

Aside from straight-up reality denial, another approach is to claim that dark matter predicts exactly the same thing, whatever it is. I’ve seen this happen so often, I know how the script always goes:


• We make a new observation X that is surprising.
• We test the hypothesis, and report the result: “Gee, MOND predicted this strange effect, and we see evidence of it in the data.”
• Inevitable Question: What does LCDM predict?
• Answer: Not that.
• Q: But what does it predict?
• A: It doesn’t really make a clear prediction on this subject, so we have to build some kind of model to even begin to address this question. In the most obvious models one can construct, it predicts Y. Y is not the same as X.
• Q: What about more complicated models?
• A: One can construct more complicated models, but they are not unique. They don’t make a prediction so much as provide a menu of options from which we may select the results that suit us. The obvious danger is that it becomes possible to do anything, and we have nothing more than an epicycle theory of infinite possibilities. If we restrict ourselves to considering the details of serious models that have only been partially fine-tuned over the course of the development of the field, then there are still a lot of possibilities. Some of them come closer to reality than others but still don’t really do the right thing for the following reasons…[here follows 25 pages of minutia in the ApJ considering every up/down left/right stand on your head and squint possibility that still winds up looking more like Y than like X.] You certainly couldn’t predict X this way, as MOND did a priori.
• Q: That’s too long to read. Dr. Z says it works, so he must be right since we already know that LCDM is correct.

The thing is, Dr. Z did not predict X ahead of time. MOND did. Maybe Dr. Z’s explanation in terms of dark matter makes sense. Often it does not, but even if it does, so what? Why should I be more impressed with a theory that only explains things after they’re observed when another predicted them a priori?

There are lots of Dr. Z’s. No matter how carefully one goes through the minutia, no matter how clearly one demonstrates that X cannot work in a purely conventional CDM context, there is always someone who says it does. That’s what people want to hear, so that’s what they choose to believe. Way easier that way. Or, as has been noted before

Faced with the choice between changing one’s mind and proving that there is no need to do so, almost everybody gets busy on the proof.

J. K. Galbraith (1965)

Super spirals on the Tully-Fisher relation

A surprising and ultimately career-altering result that I encountered while in my first postdoc was that low surface brightness galaxies fell precisely on the Tully-Fisher relation. This surprising result led me to test the limits of the relation in every conceivable way. Are there galaxies that fall off it? How far is it applicable? Often, that has meant pushing the boundaries of known galaxies to ever lower surface brightness, higher gas fraction, and lower mass where galaxies are hard to find because of unavoidable selection biases in galaxy surveys: dim galaxies are hard to see.

I made a summary plot in 2017 to illustrate what we had learned to that point. There is a clear break in the stellar mass Tully-Fisher relation (left panel) that results from neglecting the mass of interstellar gas, which becomes increasingly important in lower mass galaxies. The break goes away when you add in the gas mass (right panel). The relation between baryonic mass and rotation speed is continuous down to Leo P, a tiny galaxy just outside the Local Group comparable in mass to a globular cluster and the current record holder as the slowest rotating galaxy known, at a mere 15 km/s.

The stellar mass (left) and baryonic (right) Tully-Fisher relations constructed in 2017 from SPARC data and gas rich galaxies. Dark blue points are star dominated galaxies; light blue points are galaxies with more mass in gas than in stars. The data are restricted to galaxies with distance measurements accurate to 20% or better; see McGaugh et al. (2019) for a discussion of the effects of different quality criteria. The line has a slope of 4 and is identical in both panels for comparison.
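For concreteness, the slope-4 line corresponds to M_b ∝ V_f^4. Here is a minimal sketch, assuming the MOND normalization 1/(G a0) with a0 = 1.2 x 10^-10 m s^-2; the galaxies and round-number masses below are illustrative, not fits to the SPARC data:

```python
# Minimal sketch of the baryonic Tully-Fisher relation (BTFR):
# M_b = V_f^4 / (G * a0), a slope-4 line in log-log space.
G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
a0 = 1.2e-10     # MOND acceleration scale, m s^-2 (assumed value)
MSUN = 1.989e30  # solar mass, kg

def btfr_mass(v_flat_kms):
    """Baryonic mass (solar masses) predicted for a flat rotation speed."""
    v = v_flat_kms * 1e3              # km/s -> m/s
    return v**4 / (G * a0) / MSUN

# Leo P rotates at ~15 km/s; a bright spiral at ~200 km/s (round numbers).
m_leo_p = btfr_mass(15)    # a few million solar masses
m_spiral = btfr_mass(200)  # ~1e11 solar masses
```

Because the relation is a pure power law, the two galaxies differ in mass by the factor (200/15)^4 even though they differ in speed by only a factor of ~13.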

At the high mass end, galaxies aren’t hard to see, but they do become progressively rare: there is an exponential cut off in the intrinsic numbers of galaxies at the high mass end. So it is interesting to see how far up in mass we can go. Ogle et al. set out to do that, looking over a huge volume to identify a number of very massive galaxies, including what they dubbed “super spirals.” These extend the Tully-Fisher relation to higher masses.

The Tully-Fisher relation extended to very massive “super” spirals (blue points) by Ogle et al. (2019).

Most of the super spirals lie on the top end of the Tully-Fisher relation. However, a half dozen of the most massive cases fall off to the right. Could this be a break in the relation? So it was claimed at the time, but looking at the data, I wasn’t convinced. It looked to me like they were not always getting out to the flat part of the rotation curve, instead measuring the maximum rotation speed.

Bright galaxies tend to have rapidly rising rotation curves that peak early then fall before flattening out. For very bright galaxies – and super spirals are by definition the brightest spirals – the amplitude of the decline can be substantial, several tens of km/s. So if one measures the maximum speed instead of the flat portion of the curve, points will fall to the right of the relation. I decided not to lose any sleep over it, and wait for better data.
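A hedged sketch of the difference between the two velocity measures (the rotation curve below is synthetic, invented to mimic the morphology described above; real analyses fit the outer slope of observed curves):

```python
# Toy rotation curve for a bright spiral: rises fast, peaks, declines,
# then flattens - the morphology described in the text.
radii = [2, 4, 6, 8, 10, 15, 20, 25, 30, 35, 40]                 # kpc
v_rot = [250, 290, 305, 300, 290, 280, 276, 275, 275, 274, 275]  # km/s

v_max = max(v_rot)

# Flat speed: average the outermost points, where the curve stops changing.
# (A crude criterion for illustration only.)
outer = v_rot[-4:]
v_flat = sum(outer) / len(outer)

# Using v_max instead of v_flat shifts a galaxy to the right on the
# Tully-Fisher relation by the difference, here ~30 km/s.
shift_kms = v_max - v_flat
```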

Better data have now been provided by Di Teodoro et al. Here is an example from their paper. The morphology of the rotation curve is typical of what we see in massive spiral galaxies. The maximum rotation speed exceeds 300 km/s, but falls to 275 km/s where it flattens out.

A super spiral (left) and its rotation curve (right) from Di Teodoro et al.

Adding the updated data to the plot, we see that the super spirals now fall on the Tully-Fisher relation, with no hint of a break. There are a couple of outliers, but those are trees. The relation is the forest.

The super spiral (red points) stellar mass (left) and baryonic (right) Tully-Fisher relations as updated by Di Teodoro et al. (2021).

That’s a good plot, but it stops at 10^8 solar masses, so I couldn’t resist adding the super spirals to my plot from 2017. I’ve also included the dwarfs I discussed in the last post. Together, we see that the baryonic Tully-Fisher relation is continuous over six decades in mass – a factor of a million from the smallest to the largest galaxies.

The plot from above updated to include the super spirals (red points) at high mass and Local Group dwarfs (gray squares) at low mass. The SPARC data (blue points) have also been updated with new stellar population mass-to-light ratio estimates that make their bulge components a bit more massive, and with scaling relations for metallicity and molecular gas. The super spirals have been treated in the same way, and adjusted to a matching distance scale (H0 = 73 km/s/Mpc). There is some overlap between the super spirals and the most massive galaxies in SPARC; here the data are in excellent agreement. The super spirals extend to higher mass by a factor of two.
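The adjustment to a matching distance scale is a simple rescaling. A sketch, assuming pure Hubble-flow distances (D ∝ 1/H0, so flux-based masses scale as D^2; the input numbers are illustrative):

```python
# Masses derived from fluxes scale as distance squared, and Hubble-flow
# distances scale inversely with the assumed Hubble constant.
def rescale_mass(mass, h0_old, h0_new):
    """Rescale a flux-based mass from one Hubble constant to another."""
    return mass * (h0_old / h0_new) ** 2

# e.g. a mass published assuming H0 = 70 km/s/Mpc, moved to H0 = 73:
m_70 = 1.0e11                          # solar masses (illustrative)
m_73 = rescale_mass(m_70, 70.0, 73.0)  # ~8% smaller
# Rotation speeds are distance-independent, so the velocity axis is untouched.
```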

The strength of this correlation continues to amaze me. This never happens in extragalactic astronomy, where correlations are typically weak and have lots of intrinsic scatter. The opposite is true here. This must be telling us something.

The obvious thing that this is telling us is MOND. The initial report that super spirals fell off of the Tully-Fisher relation was widely hailed as a disproof of MOND. I’ve seen this movie many times, so I am not surprised that the answer changed in this fashion. It happens over and over again. Even less surprising is that there is no retraction, no self-examination of whether maybe we jumped to the wrong conclusion.

I get it. I couldn’t believe it myself, to start. I struggled for many years to explain the data conventionally in terms of dark matter. Worked my ass off trying to save the paradigm. Try as I might, nothing worked. Since then, many people have claimed to explain what I could not, but so far all I have seen are variations on models that I had already rejected as obviously unworkable. They either make unsubstantiated assumptions, building a tautology, or simply claim more than they demonstrate. As long as you say what people want to hear, you will be held to a very low standard. If you say what they don’t want to hear, what they are conditioned not to believe, then no standard of proof is high enough.

MOND was the only theory to predict the observed behavior a priori. There are no free parameters in the plots above. We measure the mass and the rotation speed. The data fall on the predicted line. Dark matter models did not predict this, and can at best hope to provide a convoluted, retroactive explanation. Why should I be impressed by that?

Divergence

I read somewhere – I don’t think it was Kuhn himself, but someone analyzing Kuhn – that there came a point in the history of science where there was a divergence between scientists, with different scientists disagreeing about what counts as a theory, what counts as a test of a theory, what even counts as evidence. We have reached that point with the mass discrepancy problem.

For many years, I worried that if the field ever caught up with me, it would zoom past. That hasn’t happened. Instead, it has diverged towards a place that I barely recognize as science. It looks more like the Matrix – a simulation – that is increasingly sophisticated yet self-contained, making only parsimonious contact with observational reality and unable to make predictions that apply to real objects. Scaling relations and statistical properties, sure. Actual galaxies with NGC numbers, not so much. That, to me, is not science.

I have found it increasingly difficult to communicate across the gap built on presumptions buried so deep that they cannot be questioned. One obvious one is the existence of dark matter. This has been fueled by cosmologists who take it for granted and particle physicists eager to discover it who repeat “we know dark matter exists*; we just need to find it” like a religious mantra. This is now ingrained so deeply that it has become difficult to convey even the simple concept that what we call “dark matter” is really just evidence of a discrepancy: we do not know whether it is literally some kind of invisible mass, or a breakdown of the equations that lead us to infer invisible mass.

I try to look at all sides of a problem. I can say nice things about dark matter (and cosmology); I can point out problems with it. I can say nice things about MOND; I can point out problems with it. The more common approach is to presume that any failing of MOND is an automatic win for dark matter. This is a simple-minded logical fallacy: just because MOND gets something wrong doesn’t mean dark matter gets it right. Indeed, my experience has been that cases that don’t make any sense in MOND don’t make any sense in terms of dark matter either. Nevertheless, this attitude persists.

I made this flowchart as a joke in 2012, but it persists in being an uncomfortably fair depiction of how many people who work on dark matter approach the problem.

I don’t know what is right, but I’m pretty sure this attitude is wrong. Indeed, it empowers a form of magical thinking: dark matter has to be correct, so any data that appear to contradict it are either wrong, or can be explained with feedback. Indeed, the usual trajectory has been denial first (that can’t be true!) and explanation later (we knew it all along!). This attitude is an existential threat to the scientific method, and I am despondent in part because I worry we are slipping into a post-scientific reality, where even scientists are little more than priests of a cold, dark religion.


*If we’re sure dark matter exists, it is not obvious that we need to be doing expensive experiments to find it. Why bother?


Reality check

Before we can agree on the interpretation of a set of facts, we have to agree on what those facts are. Even if we agree on the facts, we can differ about their interpretation. It is OK to disagree, and anyone who practices astrophysics is going to be wrong from time to time. It is the inevitable risk we take in trying to understand a universe that is vast beyond human comprehension. Heck, some people have made successful careers out of being wrong. This is OK, so long as we recognize and correct our mistakes. That’s a painful process, and there is an urge in human nature to deny such things, to pretend they never happened, or to assert that what was wrong was right all along.

This happens a lot, and it leads to a lot of weirdness. Beyond the many people in the field whom I already know personally, I tend to meet two kinds of scientists. There are those (usually other astronomers and astrophysicists) who might be familiar with my work on low surface brightness galaxies or galaxy evolution or stellar populations or the gas content of galaxies or the oxygen abundances of extragalactic HII regions or the Tully-Fisher relation or the cusp-core problem or faint blue galaxies or big bang nucleosynthesis or high redshift structure formation or joint constraints on cosmological parameters. These people behave like normal human beings. Then there are those (usually particle physicists) who have only heard of me in the context of MOND. These people often do not behave like normal human beings. They conflate me as a person with a theory that is Milgrom’s. They seem to believe that both are evil and must be destroyed. My presence, even the mere mention of my name, easily destabilizes their surprisingly fragile grasp on sanity.

One of the things that scientists-gone-crazy do is project their insecurities about the dark matter paradigm onto me. People who barely know me frequently attribute to me motivations that I neither have nor recognize. They presume that I have some anti-cosmology, anti-DM, pro-MOND agenda, and are remarkably comfortable asserting to me what it is that I believe. What they never explain, or apparently bother to consider, is why I would be so obtuse. What is my motivation? I certainly don’t enjoy having the same argument over and over again with their ilk, which is the only thing it seems to get me.

The only agenda I have is a pro-science agenda. I want to know how the universe works.

This agenda is not theory-specific. In addition to lots of other astrophysics, I have worked on both dark matter and MOND. I will continue to work on both until we have a better understanding of how the universe works. Right now we’re very far away from attaining that goal. Anyone who tells you otherwise is fooling themselves – usually by dint of ignoring inconvenient aspects of the evidence. Everyone is susceptible to cognitive dissonance. Scientists are no exception – I struggle with it all the time. What disturbs me is the number of scientists who apparently do not. The field is being overrun with posers who lack the self-awareness to question their own assumptions and biases.

So, I feel like I’m repeating myself here, but let me state my bias. Oh wait. I already did. That’s why it felt like repetition. It is.

The following bit of this post is adapted from an old web page I wrote well over a decade ago. I’ve lost track of exactly when – the file has been through many changes in computer systems, and unix only records the last edit date. For the linked page, that’s 2016, when I added a few comments. The original is much older, and was written while I was at the University of Maryland. Judging from the html style, it was probably early to mid-’00s. Of course, the sentiment is much older, as it shouldn’t need to be said at all.

I will make a few updates as seem appropriate, so check the link if you want to see the changes. I will add new material at the end.


Long standing remarks on intellectual honesty

The debate about MOND often degenerates into something that falls well short of the sober, objective discussion that is supposed to characterize scientific debates. One can tell when voices are raised and baseless ad hominem accusations made. I have, with disturbing frequency, found myself accused of partisanship and intellectual dishonesty, usually by people who are as fair and balanced as Fox News.

Let me state with absolute clarity that intellectual honesty is a bedrock principle of mine. My attitude is summed up well by the quote

When a man lies, he murders some part of the world.

Paul Gerhardt

I first heard this spoken by the character Merlin in the movie Excalibur (1981 version). Others may have heard it in a song by Metallica. As best I can tell, it is originally attributable to the 17th century cleric Paul Gerhardt.

This is a great quote for science, as the intent is clear. We don’t get to pick and choose our facts. Outright lying about them is antithetical to science.

I would extend this to ignoring facts. One should not only be honest, but also as complete as possible. It does not suffice to be truthful while leaving unpleasant or unpopular facts unsaid. This is lying by omission.

I “grew up” believing in dark matter. Specifically, Cold Dark Matter, presumably a WIMP. I didn’t think MOND was wrong so much as I didn’t think about it at all. Barely heard of it; not worth the bother. So I was shocked – and angered – when its predictions came true in my data for low surface brightness galaxies. So I understand when my colleagues have the same reaction.

Nevertheless, Milgrom got the prediction right. I had a prediction, it was wrong. There were other conventional predictions, they were also wrong. Indeed, dark matter based theories generically have a very hard time explaining these data. In a Bayesian sense, given the prior that we live in a ΛCDM universe, the probability that MONDian phenomenology would be observed is practically zero. Yet it is. (This is very well established, and has been for some time.)
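As a toy illustration of that Bayesian point (every number below is an illustrative stand-in, not a measured likelihood):

```python
# Toy Bayesian update: even a strong prior for LCDM erodes when the
# observed phenomenology is far more probable under the alternative.
p_lcdm = 0.99            # strong prior for LCDM (illustrative)
p_mond = 1.0 - p_lcdm

# Stand-in likelihoods of observing the BTFR's slope, normalization,
# and tiny scatter under each hypothesis (assumed, not derived):
like_under_lcdm = 1e-4
like_under_mond = 0.5

evidence = p_lcdm * like_under_lcdm + p_mond * like_under_mond
posterior_lcdm = p_lcdm * like_under_lcdm / evidence
# posterior_lcdm drops to ~2% despite the 99% prior.
```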

So – confronted with an unpopular theory that nevertheless had some important predictions come true, I reported that fact. I could have ignored it, pretended it didn’t happen, covered my eyes and shouted LA LA LA NOT LISTENING. With the benefit of hindsight, that certainly would have been the savvy career move. But it would also be ignoring a fact, and tantamount to a lie.

In short, though it was painful and protracted, I changed my mind. Isn’t that what the scientific method says we’re supposed to do when confronted with experimental evidence?

That was my experience. When confronted with evidence that contradicted my preexisting world view, I was deeply troubled. I tried to reject it. I did an enormous amount of fact-checking. The people who presume I must be wrong have not had this experience, and haven’t bothered to do any fact-checking. Why bother when you already are sure of the answer?


Willful Ignorance

I understand being skeptical about MOND. I understand being more comfortable with dark matter. That’s where I started from myself, so as I said above, I can empathize with people who come to the problem this way. This is a perfectly reasonable place to start.

For me, that was over a quarter century ago. I can understand there being some time lag. That is not what is going on. There has been ample time to process and assimilate this information. Instead, most physicists have chosen to remain ignorant. Worse, many persist in spreading what can only be described as misinformation. I don’t think they are liars; rather, it seems that they believe their own bullshit.

To give an example of disinformation, I still hear said things like “MOND fits rotation curves but nothing else.” This is not true. The first thing I did was check into exactly that. Years of fact-checking went into McGaugh & de Blok (1998), and I’ve done plenty more since. It came as a great surprise to me that MOND explained the vast majority of the data as well or better than dark matter. Not everything, to be sure, but lots more than “just” rotation curves. Yet this old falsehood still gets repeated as if it were not a misconception that was put to rest in the previous century. We’re stuck in the dark ages by choice.

It is not a defensible choice. There is no excuse to remain ignorant of MOND at this juncture in the progress of astrophysics. It is incredibly biased to point to its failings without contending with its many predictive successes. It is tragi-comically absurd to assume that dark matter provides a better explanation when it cannot make the same predictions in advance. MOND may not be correct in every particular, and makes no pretense to be a complete theory of everything. But it is demonstrably less wrong than dark matter when it comes to predicting the dynamics of systems in the low acceleration regime. Pretending like this means nothing is tantamount to ignoring essential facts.

Even a lie of omission murders a part of the world.

25 years a heretic

People seem to like to do retrospectives at year’s end. I take a longer view, but the end of 2020 seems like a fitting time to do that. Below is the text of a paper I wrote in 1995 with collaborators at the Kapteyn Institute of the University of Groningen. The last edit date is from December of that year, so this text (in plain TeX, not LaTeX!) is now a quarter century old. I am just going to cut & paste it as-was; I even managed to recover the original figures and translate them into something web-friendly (postscript to jpeg). This is exactly how it was.

This was my first attempt to express in the scientific literature my concerns for the viability of the dark matter paradigm, and my puzzlement that the only theory to get any genuine predictions right was MOND. It was the hardest admission in my career that this could be even a remote possibility. Nevertheless, intellectual honesty demanded that I report it. To fail to do so would be an act of reality denial antithetical to the foundational principles of science.

It was never published. There were three referees. Initially, one was positive, one was negative, and one insisted that rotation curves weren’t flat. There was one iteration; this is the resubmitted version in which the concerns of the second referee were addressed to his apparent satisfaction by making the third figure a lot more complicated. The third referee persisted that none of this was valid because rotation curves weren’t flat. Seems like he had a problem with something beyond the scope of this paper, but the net result was rejection.

One valid concern that ran through the refereeing process from all sides was “what about everything else?” This is a good question that couldn’t fit into a short letter like this. Thanks to the support of Vera Rubin and a Carnegie Fellowship, I spent the next couple of years looking into everything else. The results were published in 1998 in a series of three long papers: one on dark matter, one on MOND, and one making detailed fits.

This had started from a very different place intellectually with my efforts to write a paper on galaxy formation that would have been similar to contemporaneous papers like Dalcanton, Spergel, & Summers and Mo, Mao, & White. This would have followed from my thesis and from work with Houjun Mo, who was an office mate when we were postdocs at the IoA in Cambridge. (The ideas discussed in Mo, McGaugh, & Bothun have been reborn recently in the galaxy formation literature under the moniker of “assembly bias.”) But I had realized by then that my ideas – and those in the papers cited – were wrong. So I didn’t write a paper that I knew to be wrong. I wrote this one instead.

Nothing substantive has changed since. Reading it afresh, I’m amazed how many of the arguments of the past quarter century were anticipated here. As a scientific community, we are stuck in a rut, and seem to prefer to spin the wheels, digging ourselves in deeper, rather than consider the plain if difficult path out.


Testing hypotheses of dark matter and alternative gravity with low surface density galaxies

The missing mass problem remains one of the most vexing in astrophysics. Observations clearly indicate either the presence of a tremendous amount of as yet unidentified dark matter [1,2], or the need to modify the law of gravity [3-7]. These hypotheses make vastly different predictions as a function of density. Observations of the rotation curves of galaxies of much lower surface brightness than previously studied therefore provide a powerful test for discriminating between them. The dark matter hypothesis requires a surprisingly strong relation between the surface brightness and mass to light ratio [8], placing stringent constraints on theories of galaxy formation and evolution. Alternatively, the observed behaviour is predicted [4] by one of the hypothesised alterations of gravity known as modified Newtonian dynamics [3,5] (MOND).

Spiral galaxies are observed to have asymptotically flat [i.e., V(R) ~ constant for large R] rotation curves that extend well beyond their optical edges. This trend continues for as far (many, sometimes > 10 galaxy scale lengths) as can be probed by gaseous tracers [1,2] or by the orbits of satellite galaxies [9]. Outside a galaxy’s optical radius, the gravitational acceleration is a_N = GM/R^2 = V^2/R, so one expects V(R) ~ R^(-1/2). This Keplerian behaviour is not observed in galaxies.
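The expected Keplerian decline can be quantified with a short sketch (point-mass approximation beyond the optical edge; the mass is an illustrative round number):

```python
import math

G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
MSUN = 1.989e30  # solar mass, kg
KPC = 3.086e19   # kiloparsec, m

def v_newton(mass_msun, r_kpc):
    """Newtonian circular speed (km/s) around a point mass: V = sqrt(GM/R)."""
    return math.sqrt(G * mass_msun * MSUN / (r_kpc * KPC)) / 1e3

# For the luminous mass of a bright spiral (~1e11 Msun, illustrative):
v10 = v_newton(1e11, 10)   # speed at 10 kpc
v40 = v_newton(1e11, 40)   # at 4x the radius: half the speed (V ~ R^-1/2)
# Observed rotation curves stay flat instead of declining like this.
```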

One approach to this problem is to increase M in the outer parts of galaxies in order to provide the extra gravitational acceleration necessary to keep the rotation curves flat. Indeed, this is the only option within the framework of Newtonian gravity since both V and R are directly measured. The additional mass must be invisible, dominant, and extend well beyond the optical edge of the galaxies.

Postulating the existence of this large amount of dark matter which reveals itself only by its gravitational effects is a radical hypothesis. Yet the kinematic data force it upon us, so much so that the existence of dark matter is generally accepted. Enormous effort has gone into attempting to theoretically predict its nature and experimentally verify its existence, but to date there exists no convincing detection of any hypothesised dark matter candidate, and many plausible candidates have been ruled out [10].

Another possible solution is to alter the fundamental equation a_N = GM/R^2. Our faith in this simple equation is very well founded on extensive experimental tests of Newtonian gravity. Since it is so fundamental, altering it is an even more radical hypothesis than invoking the existence of large amounts of dark matter of completely unknown constituent components. However, a radical solution is required either way, so both possibilities must be considered and tested.

A phenomenological theory specifically introduced to address the problem of the flat rotation curves is MOND [3]. It has no other motivation and so far there is no firm physical basis for the theory. It provides no satisfactory cosmology, having yet to be reconciled with General Relativity. However, with the introduction of one new fundamental constant (an acceleration a_0), it is empirically quite successful in fitting galaxy rotation curves [11-14]. It hypothesises that for accelerations a < a_0 = 1.2 x 10^-10 m s^-2, the effective acceleration is given by a_eff = (a_N a_0)^(1/2). This simple prescription works well with essentially only one free parameter per galaxy, the stellar mass to light ratio, which is subject to independent constraint by stellar evolution theory. More importantly, MOND makes predictions which are distinct and testable. One specific prediction [4] is that the asymptotic (flat) value of the rotation velocity, V_a, is V_a = (G M a_0)^(1/4). Note that V_a does not depend on R, but only on M in the regime of small accelerations (a < a_0).
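As a worked check of this prediction (the 10^11 solar mass galaxy is an illustrative round number):

```python
G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
a0 = 1.2e-10     # MOND acceleration scale, m s^-2
MSUN = 1.989e30  # solar mass, kg

def v_asymptotic(mass_msun):
    """MOND asymptotic flat rotation speed, V_a = (G M a0)^(1/4), in km/s."""
    return (G * mass_msun * MSUN * a0) ** 0.25 / 1e3

# A galaxy of 1e11 solar masses: ~200 km/s, independent of radius.
v = v_asymptotic(1e11)
```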

In contrast, Newtonian gravity depends on both M and R. Replacing R with a mass surface density variable S = M(R)/R^2, the Newtonian prediction becomes M S ~ V_a^4, which contrasts with the MOND prediction M ~ V_a^4. These relations are the theoretical basis in each case for the observed luminosity-linewidth relation L ~ V_a^4 (better known as the Tully-Fisher [15] relation. Note that the observed value of the exponent is bandpass dependent, but does obtain the theoretical value of 4 in the near infrared [16], which is considered the best indicator of the stellar mass. The systematic variation with bandpass is a very small effect compared to the difference between the two gravitational theories, and must be attributed to dust or stars under either theory.) To transform from theory to observation one requires the mass to light ratio Y: Y = M/L = S/s, where s is the surface brightness. Note that in the purely Newtonian case, M and L are very different functions of R, so Y is itself a strong function of R. We define Y to be the mass to light ratio within the optical radius R*, as this is the only radius which can be measured by observation. The global mass to light ratio would be very different (since M ~ R for R > R*, the total masses of dark haloes are not measurable), but the particular choice of definition does not matter: the relevant functional dependences are all that matter. The predictions become Y^2 s L ~ V_a^4 for Newtonian gravity [8,16] and Y L ~ V_a^4 for MOND [4].
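The two scalings follow in a few lines of algebra from the definitions above (a sketch of the steps the text compresses, using S = M/R^2 and Y = M/L = S/s):

```latex
% Newtonian: V^2 = GM/R, eliminating R via R = (M/S)^{1/2}
V^2 = \frac{GM}{R} = G\, M^{1/2} S^{1/2}
  \quad\Rightarrow\quad
V_a^4 = G^2\, M S \propto Y^2 s L .

% MOND: for a < a_0, the centripetal acceleration V^2/R equals
% (a_N a_0)^{1/2} = (G M a_0)^{1/2}/R, so R cancels:
V_a^4 = G a_0\, M \propto Y L .
```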

The only sensible [17] null hypothesis that can be constructed is that the mass to light ratio be roughly constant from galaxy to galaxy. Clearly distinct predictions thus emerge if galaxies of different surface brightnesses s are examined. In the Newtonian case there should be a family of parallel Tully-Fisher relations for each surface brightness. In the case of MOND, all galaxies should follow the same Tully-Fisher relation irrespective of surface brightness.

Recently it has been shown that extreme objects such as low surface brightness galaxies [8,18] (those with central surface brightnesses fainter than s_0 = 23 B mag arcsec^-2, corresponding to 40 L_sun pc^-2) obey the same Tully-Fisher relation as do the high surface brightness galaxies (typically with s_0 = 21.65 B mag arcsec^-2 or 140 L_sun pc^-2) which originally [15] defined it. Fig. 1 shows the luminosity-linewidth plane for galaxies ranging over a factor of 40 in surface brightness. Regardless of surface brightness, galaxies fall on the same Tully-Fisher relation.

The luminosity-linewidth (Tully-Fisher) relation for spiral galaxies over a large range in surface brightness. The B-band relation is shown; the same result is obtained in all bands8,18. Absolute magnitudes are measured from apparent magnitudes assuming H0 = 75 km/s/Mpc. Rotation velocities Va are directly proportional to observed 21 cm linewidths (measured as the full width at 20% of maximum) W20 corrected for inclination [sin-1(i)]. Open symbols are an independent sample which defines42 the Tully-Fisher relation (solid line). The dotted lines show the expected shift of the Tully-Fisher relation for each step in surface brightness away from the canonical value s0 = 21.65 if the mass to light ratio remains constant. Low surface brightness galaxies are plotted as solid symbols, binned by surface brightness: red triangles: 22 < s0 < 23; green squares: 23 < s0 < 24; blue circles: s0 > 24. One galaxy with two independent measurements is connected by a line. This gives an indication of the typical uncertainty, which is sufficient to explain nearly all the scatter. Contrary to the clear expectation of a readily detectable shift as indicated by the dotted lines, galaxies fall on the same Tully-Fisher relation regardless of surface brightness, as predicted by MOND.

MOND predicts this behaviour in spite of the very different surface densities of low surface brightness galaxies. Understanding this observational fact in the framework of standard Newtonian gravity requires a subtle relation8 between surface brightness and the mass to light ratio that keeps the product sY2 constant. If we retain normal gravity and the dark matter hypothesis, this fine tuning is unavoidable, and the null hypothesis of similar mass to light ratios (which, together with an assumed constancy of surface brightness, is usually invoked to explain the Tully-Fisher relation) is strongly rejected. Instead, the current epoch surface brightness is tightly correlated with the properties of the dark matter halo, placing strict constraints on models of galaxy formation and evolution.
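The severity of the required conspiracy is easy to quantify: if sY2 is to stay constant, Y must scale as s to the -1/2 power, so over the factor of ~40 in surface brightness spanned by Fig. 1 the mass to light ratio must vary by a factor of ~6. A sketch (the function name and the 4 mag. arcsec-2 span are illustrative):

```python
def required_Y_ratio(mu_faint, mu_bright):
    """Factor by which Y must grow from bright to faint central surface
    brightness (in mag. arcsec^-2) if s*Y^2 is to stay constant.
    Linear surface brightness: s ~ 10^(-0.4 * mu), and Y ~ s^(-1/2)."""
    s_ratio = 10.0 ** (-0.4 * (mu_faint - mu_bright))  # s_faint / s_bright
    return s_ratio ** -0.5

# A 4 mag. arcsec^-2 span is a factor of ~40 in s, requiring Y to vary by ~6.3:
print(required_Y_ratio(25.65, 21.65))
```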

The mass to light ratios computed for both cases are shown as a function of surface brightness in Fig. 2. Fig. 2 is based solely on galaxies with full rotation curves19,20 and surface photometry, so Va and R* are directly measured. The correlation in the Newtonian case is very clear (Fig. 2a), confirming our inference8 from the Tully-Fisher relation. Such tight correlations are very rare in extragalactic astronomy, and the Y-s relation is probably the real cause of an inferred Y-L relation. The latter is much weaker because surface brightness and luminosity are only weakly correlated21-24.

The mass to light ratio Y (in M/L) determined with (a) Newtonian dynamics and (b) MOND, plotted as a function of central surface brightness. The mass determination for Newtonian dynamics is M = V2 R*/G and for MOND is M = V4/(G a0). We have adopted as a consistent definition of the optical radius R* four scale lengths of the exponential optical disc. This is where discs tend to have edges, and contains essentially all the light21,22. The definition of R* makes a tremendous difference to the absolute value of the mass to light ratio in the Newtonian case, but makes no difference at all to the functional relation, which will be present regardless of the precise definition. These mass measurements are more sensitive to the inclination corrections than is the Tully-Fisher relation since there is a sin-2(i) term in the Newtonian case and one of sin-4(i) for MOND. It is thus very important that the inclination be accurately measured, and we have retained only galaxies which have adequate inclination determinations — error bars are plotted for a nominal uncertainty of 6 degrees. The sensitivity to inclination manifests itself as an increase in the scatter from (a) to (b). The derived mass is also very sensitive to the measured value of the asymptotic velocity itself, so we have used only those galaxies for which this can be taken directly from a full rotation curve19,20,42. We do not employ profile widths; the velocity measurements here are independent of those in Fig. 1. In both cases, we have subtracted off the known atomic gas mass19,20,42, so what remains is essentially only the stars and any dark matter that may exist. A very strong correlation (regression coefficient = 0.85) is apparent in (a): this is the mass to light ratio — surface brightness conspiracy. The slope is consistent (within the errors) with the theoretical expectation s ~ Y-2 derived from the Tully-Fisher relation8.
At the highest surface brightnesses, the mass to light ratio is similar to that expected for the stellar population. At the faintest surface brightnesses, it has increased by a factor of nearly ten, indicating increasing dark matter domination within the optical disc as surface brightness decreases or a very systematic change in the stellar population, or both. In (b), the mass to light ratio scatters about a constant value of 2. This mean value, and the lack of a trend, is what is expected for stellar populations17,21-24.
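The two mass estimators in the caption can be sketched directly. The galaxy parameters below (Va = 100 km/s, scale length h = 3 kpc, L = 5e9 L in solar units) are invented for illustration and ignore the gas correction; a0 = 1.2e-10 m s-2 is the canonical value.

```python
import math

G = 6.674e-11          # m^3 kg^-1 s^-2
A0 = 1.2e-10           # MOND acceleration scale a0, m s^-2
MSUN = 1.989e30        # kg
KPC = 3.086e19         # m

def mass_newton(V, R_star):
    """Newtonian mass within the optical radius: M = V^2 R* / G."""
    return V**2 * R_star / G

def mass_mond(V):
    """MOND mass from the asymptotic velocity: M = V^4 / (G a0)."""
    return V**4 / (G * A0)

# Hypothetical low surface brightness disc (assumed values):
V = 100e3              # asymptotic velocity, m/s
R_star = 4 * 3.0 * KPC # R* = 4 scale lengths, as in the caption
L = 5e9                # luminosity in solar units

Y_newton = mass_newton(V, R_star) / MSUN / L
Y_mond = mass_mond(V) / MSUN / L
print(f"Y (Newton) = {Y_newton:.1f}, Y (MOND) = {Y_mond:.1f}  [solar units]")
```

For these inputs the Newtonian estimator gives a mass to light ratio several times the MOND value: the difference is the dark matter inferred within the optical disc.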

The Y-s relation is not predicted by any dark matter theory25,26. It can not be purely an effect of the stellar mass to light ratio, since no other stellar population indicator such as color21-24 or metallicity27,28 is so tightly correlated with surface brightness. In principle it could be an effect of the stellar mass fraction, as the gas mass to light ratio follows a relation very similar to that of total mass to light ratio20. We correct for this in Fig. 2 by subtracting the known atomic gas mass so that Y refers only to the stars and any dark matter. We do not correct for molecular gas, as this has never been detected in low surface brightness galaxies to rather sensitive limits30 so the total mass of such gas is unimportant if current estimates31 of the variation of the CO to H2 conversion factor with metallicity are correct. These corrections have no discernible effect at all in Fig. 2 because the dark mass is totally dominant. It is thus very hard to see how any evolutionary effect in the luminous matter can be relevant.

In the case of MOND, the mass to light ratio directly reflects that of the stellar population once the correction for gas mass fraction is made. There is no trend of Y* with surface brightness (Fig. 2b), a more natural result and one which is consistent with our studies of the stellar populations of low surface brightness galaxies21-23. These suggest that Y* should be roughly constant or slightly declining as surface brightness decreases, with much scatter. The mean value Y* = 2 is also expected from stellar evolutionary theory17, which always gives a number 0 < Y* < 10 and usually gives 0.5 < Y* < 3 for disk galaxies. This is particularly striking since Y* is the only free parameter allowed to MOND, and the observed mean is very close to that directly observed29 in the Milky Way (1.7 ± 0.5 M/L).

The essence of the problem is illustrated by Fig. 3, which shows the rotation curves of two galaxies of essentially the same luminosity but vastly different surface brightnesses. Though the asymptotic velocities are the same (as required by the Tully-Fisher relation), the rotation curve of the low surface brightness galaxy rises less quickly than that of the high surface brightness galaxy as expected if the mass is distributed like the light. Indeed, the ratio of surface brightnesses is correct to explain the ratio of velocities at small radii if both galaxies have similar mass to light ratios. However, if this continues to be the case as R increases, the low surface brightness galaxy should reach a lower asymptotic velocity simply because R* must be larger for the same L. That this does not occur is the problem, and poses very significant systematic constraints on the dark matter distribution.
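The geometric core of this argument can be made quantitative under simple assumptions: exponential discs with all mass enclosed within R* = 4h, so that at fixed L (and fixed Y, hence fixed M) the scale length goes as s0 to the -1/2 power. The specific mass, scale lengths, and factor of 13 in surface brightness (cf. NGC 2403 vs UGC 128) are illustrative.

```python
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
A0 = 1.2e-10         # MOND acceleration scale a0, m s^-2
MSUN = 1.989e30      # kg
KPC = 3.086e19       # m

# Two hypothetical discs of identical mass M but surface densities
# differing by a factor of 13. For an exponential disc L = 2 pi s0 h^2,
# so at fixed L the scale length scales as h ~ s0^(-1/2).
M = 1e10 * MSUN
h_hsb = 2.0 * KPC                    # high surface brightness (assumed)
h_lsb = h_hsb * math.sqrt(13.0)      # lower s0 -> larger disc

def v_newton(mass, R):
    """Newtonian velocity assuming all mass enclosed within R."""
    return math.sqrt(G * mass / R)

# Newtonian expectation at the optical radius R* = 4h:
v_hsb = v_newton(M, 4 * h_hsb) / 1e3
v_lsb = v_newton(M, 4 * h_lsb) / 1e3
# MOND asymptotic velocity, identical for both since it depends only on M:
va = (G * M * A0) ** 0.25 / 1e3
print(f"Newtonian V(R*): HSB {v_hsb:.0f} km/s vs LSB {v_lsb:.0f} km/s")
print(f"MOND Va for either: {va:.0f} km/s")
```

In this toy calculation the Newtonian velocity at R* drops by 13 to the 1/4 power (~1.9x) for the diffuse disc, while MOND keeps the asymptotic velocity identical, which is what the Tully-Fisher relation demands.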

The rotation curves of two galaxies, one of high surface brightness11 (NGC 2403; open circles) and one of low surface brightness19 (UGC 128; filled circles). The two galaxies have very nearly the same asymptotic velocity, and hence luminosity, as required by the Tully-Fisher relation. However, they have central surface brightnesses which differ by a factor of 13. The lines give the contributions to the rotation curves of the various components. Green: luminous disk. Blue: dark matter halo. Red: luminous disk (stars and gas) with MOND. Solid lines refer to NGC 2403 and dotted lines to UGC 128. The fits for NGC 2403 are taken from ref. 11, for which the stars have Y* = 1.5 M/L. For UGC 128, no specific fit is made: the blue and green dotted lines are simply the NGC 2403 fits scaled by the ratio of disk scale lengths h. This provides a remarkably good description of the UGC 128 rotation curve and illustrates one possible manifestation of the fine tuning problem: if disks have similar Y, the halo parameters p0 and R0 must scale with the disk parameters s0 and h while conspiring to keep the product p0 R02 fixed at any given luminosity. Note also that the halo of NGC 2403 gives an adequate fit to the rotation curve of UGC 128. This is another possible manifestation of the fine tuning problem: all galaxies of the same luminosity have the same halo, with Y systematically varying with s0 so that Y* goes to zero as s0 goes to zero. Neither of these is exactly correct because the contribution of the gas can not be set to zero as is mathematically possible with the stars. This causes the resulting fine tuning problems to be even more complex, involving more parameters. Alternatively, the red dotted line is the rotation curve expected by MOND for a galaxy with the observed luminous mass distribution of UGC 128.

Satisfying the Tully-Fisher relation has led to some expectation that haloes all have the same density structure. This simplest possibility is immediately ruled out. In order to obtain L ~ Va4 ~ MS, one might suppose that the mass surface density S is constant from galaxy to galaxy, irrespective of the luminous surface density s. This achieves the correct asymptotic velocity Va, but requires that the mass distribution, and hence the complete rotation curve, be essentially identical for all galaxies of the same luminosity. This is obviously not the case (Fig. 3), as the rotation curves of lower surface brightness galaxies rise much more gradually than those of higher surface brightness galaxies (also a prediction4 of MOND). It might be possible to have approximately constant density haloes if the highest surface brightness disks are maximal and the lowest minimal in their contribution to the inner parts of the rotation curves, but this then requires fine tuning of Y* with this systematically decreasing with surface brightness.

The expected form of the halo mass distribution depends on the dominant form of dark matter. This could exist in three general categories: baryonic (e.g., MACHOs), hot (e.g., neutrinos), and cold exotic particles (e.g., WIMPs). The first two make no specific predictions. Baryonic dark matter candidates are most subject to direct detection, and most plausible candidates have been ruled out10 with remaining suggestions of necessity sounding increasingly contrived32. Hot dark matter is not relevant to the present problem. Even if neutrinos have a small mass, their velocities considerably exceed the escape velocities of the haloes of low mass galaxies where the problem is most severe. Cosmological simulations involving exotic cold dark matter33,34 have advanced to the point where predictions are being made about the density structure of haloes. These take the form33,34 p(R) = pH/[R(R+RH)b] where pH characterises the halo density and RH its radius, with b ~ 2 to 3. The characteristic density depends on the mean density of the universe at the collapse epoch, and is generally expected to be greater for lower mass galaxies since these collapse first in such scenarios. This goes in the opposite sense of the observations, which show that low mass and low surface brightness galaxies are less, not more, dense. The observed behaviour is actually expected in scenarios which do not smooth on a particular mass scale and hence allow galaxies of the same mass to collapse at a variety of epochs25, but in this case the Tully-Fisher relation should not be universal. Worse, note that at small R < RH, p(R) ~ R-1. It has already been noted32,35 that such a steep interior density distribution is completely inconsistent with the few (4) analysed observations of dwarf galaxies. Our data19,20 confirm and considerably extend this conclusion for 24 low surface brightness galaxies over a wide range in luminosity.
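The inner behaviour of the simulated profile follows directly from its functional form, as a short numerical check of the logarithmic slope shows. Units are arbitrary here, with RH set to 1 and b = 2 (the lower end of the quoted range).

```python
import math

def rho(R, rho_h=1.0, R_h=1.0, b=2.0):
    """Simulated-halo profile in the paper's form: p(R) = pH / [R (R + RH)^b]."""
    return rho_h / (R * (R + R_h)**b)

def log_slope(R, eps=1e-5):
    """Numerical logarithmic slope d ln(rho) / d ln(R) at radius R."""
    lo, hi = R * (1 - eps), R * (1 + eps)
    return (math.log(rho(hi)) - math.log(rho(lo))) / (math.log(hi) - math.log(lo))

print(log_slope(1e-3))   # inner slope -> -1: rho ~ 1/R for R << RH
print(log_slope(1e3))    # outer slope -> -(1 + b) = -3 for R >> RH
```

The inner slope of -1 is exactly the steep interior distribution that is inconsistent with the observed rotation curves of dwarf and low surface brightness galaxies.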

The failure of the predicted exotic cold dark matter density distribution either rules out this form of dark matter, indicates some failing in the simulations (in spite of widespread consensus), or requires some mechanism to redistribute the mass. Feedback from star formation is usually invoked for the last of these, but this can not work for two reasons. First, an objection in principle: a small mass of stars and gas must have a dramatic impact on the distribution of the dominant dark mass, with which they can only interact gravitationally. More mass redistribution is required in less luminous galaxies since they start out denser but end up more diffuse; of course progressively less baryonic material is available to bring this about as luminosity declines. Second, an empirical objection: in this scenario, galaxies explode and gas is lost. However, progressively fainter and lower surface brightness galaxies, which need to suffer more severe explosions, are actually very gas rich.

Observationally, dark matter haloes are inferred to have density distributions1,2,11 with constant density cores, p(R) = p0/[1 + (R/R0)g]. Here, p0 is the core density and R0 is the core size, with g ~ 2 being required to produce flat rotation curves. For g = 2, the rotation curve resulting from this mass distribution is V(R) = Va [1 - (R0/R) tan-1(R/R0)]1/2 where the asymptotic velocity is Va = (4πG p0 R02)1/2. To satisfy the Tully-Fisher relation, Va, and hence the product p0 R02, must be the same for all galaxies of the same luminosity. To decrease the rate of rise of the rotation curves as surface brightness decreases, R0 must increase. Together, these two require a fine tuning conspiracy to keep the product p0 R02 constant while R0 varies with the surface brightness at a given luminosity. Luminosity and surface brightness themselves are only weakly correlated, so there exists a wide range in one parameter at any fixed value of the other. Thus the structural properties of the invisible dark matter halo dictate those of the luminous disk, or vice versa. So, s and L give the essential information about the mass distribution without recourse to kinematic information.
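The fine tuning can be illustrated numerically: two cored haloes tuned to the same product p0 R02 share the same asymptotic velocity yet rise at very different rates. The specific core densities and radii below are invented for illustration, standing in for an HSB and an LSB galaxy of the same luminosity.

```python
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
KPC = 3.086e19       # m

def v_halo(R, rho0, R0):
    """Rotation curve of the cored halo p(R) = p0/[1 + (R/R0)^2]:
    V(R) = Va [1 - (R0/R) arctan(R/R0)]^(1/2), Va = sqrt(4 pi G p0 R0^2)."""
    Va = math.sqrt(4.0 * math.pi * G * rho0 * R0**2)
    return Va * math.sqrt(1.0 - (R0 / R) * math.atan(R / R0))

# Same product rho0 * R0^2 (hence the same Va), different core sizes:
hsb = dict(rho0=4e-21, R0=2 * KPC)   # dense, compact core (assumed values)
lsb = dict(rho0=1e-21, R0=4 * KPC)   # diffuse, extended core (assumed values)

R_far, R_near = 100 * KPC, 2 * KPC
print(v_halo(R_far, **hsb) / 1e3, v_halo(R_far, **lsb) / 1e3)    # nearly equal Va
print(v_halo(R_near, **hsb) / 1e3, v_halo(R_near, **lsb) / 1e3)  # LSB rises more slowly
```

Nothing in the dark matter picture forces rho0 and R0 onto this one-parameter locus; it has to be imposed by hand to match the data, which is the conspiracy at issue.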

A strict s-p0-R0 relation is rigorously obeyed only if the haloes are spherical and dominate throughout. This is probably a good approximation for low surface brightness galaxies but may not be for those of the highest surface brightness. However, a significant non-halo contribution can at best replace one fine tuning problem with another (e.g., surface brightness being strongly correlated with the stellar population mass to light ratio instead of halo core density) and generally causes additional conspiracies.

There are two perspectives for interpreting these relations, with the preferred perspective depending strongly on the philosophical attitude one has towards empirical and theoretical knowledge. One view is that these are real relations which galaxies and their haloes obey. As such, they provide a positive link between models of galaxy formation and evolution and reality.

The other view is that this list of fine tuning requirements makes it rather unattractive to maintain the dark matter hypothesis. MOND provides an empirically more natural explanation for these observations. In addition to the Tully-Fisher relation, MOND correctly predicts the systematics of the shapes of the rotation curves of low surface brightness galaxies19,20 and fits the specific case of UGC 128 (Fig. 3). Low surface brightness galaxies were stipulated4 to be a stringent test of the theory because they should be well into the regime a < a0. This is now observed to be true, and to the limit of observational accuracy the predictions of MOND are confirmed. The critical acceleration scale a0 is apparently universal, so there is a single force law acting in galactic disks for which MOND provides the correct description. The cause of this could be either a particular dark matter distribution36 or a real modification of gravity. The former is difficult to arrange, and a single force law strongly supports the latter hypothesis since in principle the dark matter could have any number of distributions which would give rise to a variety of effective force laws. Even if MOND is not correct, it is essential to understand why it so closely describes the observations. Though the data can not exclude Newtonian dynamics, with a working empirical alternative (really an extension) at hand, we would not hesitate to reject as incomplete any less venerable hypothesis.

Nevertheless, MOND itself remains incomplete as a theory, being more of a Kepler's Law for galaxies. It provides only an empirical description of kinematic data. While successful for disk galaxies, it was thought to fail in clusters of galaxies37. Recently it has been recognized that there exist two missing mass problems in galaxy clusters, one of which is now solved38: most of the luminous matter is in X-ray gas, not galaxies. This vastly improves the consistency of MOND with cluster dynamics39. The problem with the theory remains a reconciliation with Relativity and thereby standard cosmology (which is itself in considerable difficulty38,40), and a lack of any prediction about gravitational lensing41. These are theoretical problems which need to be more widely addressed in light of MOND's empirical success.

ACKNOWLEDGEMENTS. We thank R. Sanders and M. Milgrom for clarifying aspects of a theory with which we were previously unfamiliar. SSM is grateful to the Kapteyn Astronomical Institute for enormous hospitality during visits when much of this work was done. [Note added in 2020: this work was supported by a cooperative grant funded by the EU and would no longer be possible thanks to Brexit.]

REFERENCES

  1. Rubin, V. C. Science 220, 1339-1344 (1983).
  2. Sancisi, R. & van Albada, T. S. in Dark Matter in the Universe, IAU Symp. No. 117, (eds. Knapp, G. & Kormendy, J.) 67-80 (Reidel, Dordrecht, 1987).
  3. Milgrom, M. Astrophys. J. 270, 365-370 (1983).
  4. Milgrom, M. Astrophys. J. 270, 371-383 (1983).
  5. Bekenstein, J. D., & Milgrom, M. Astrophys. J. 286, 7-14 (1984).
  6. Mannheim, P. D., & Kazanas, D. Astrophys. J. 342, 635-651 (1989).
  7. Sanders, R. H. Astron. Astrophys. Rev. 2, 1-28 (1990).
  8. Zwaan, M.A., van der Hulst, J. M., de Blok, W. J. G. & McGaugh, S. S. Mon. Not. R. astr. Soc., 273, L35-L38, (1995).
  9. Zaritsky, D. & White, S. D. M. Astrophys. J. 435, 599-610 (1994).
  10. Carr, B. Ann. Rev. Astr. Astrophys., 32, 531-590 (1994).
  11. Begeman, K. G., Broeils, A. H. & Sanders, R. H. Mon. Not. R. astr. Soc. 249, 523-537 (1991).
  12. Kent, S. M. Astr. J. 93, 816-832 (1987).
  13. Milgrom, M. Astrophys. J. 333, 689-693 (1988).
  14. Milgrom, M. & Braun, E. Astrophys. J. 334, 130-134 (1988).
  15. Tully, R. B., & Fisher, J. R. Astr. Astrophys., 54, 661-673 (1977).
  16. Aaronson, M., Huchra, J., & Mould, J. Astrophys. J. 229, 1-17 (1979).
  17. Larson, R. B. & Tinsley, B. M. Astrophys. J. 219, 48-58 (1978).
  18. Sprayberry, D., Bernstein, G. M., Impey, C. D. & Bothun, G. D. Astrophys. J. 438, 72-82 (1995).
  19. van der Hulst, J. M., Skillman, E. D., Smith, T. R., Bothun, G. D., McGaugh, S. S. & de Blok, W. J. G. Astr. J. 106, 548-559 (1993).
  20. de Blok, W. J. G., McGaugh, S. S., & van der Hulst, J. M. Mon. Not. R. astr. Soc. (submitted).
  21. McGaugh, S. S., & Bothun, G. D. Astr. J. 107, 530-542 (1994).
  22. de Blok, W. J. G., van der Hulst, J. M., & Bothun, G. D. Mon. Not. R. astr. Soc. 274, 235-259 (1995).
  23. Ronnback, J., & Bergvall, N. Astr. Astrophys., 292, 360-378 (1994).
  24. de Jong, R. S. Ph.D. thesis, University of Groningen (1995).
  25. Mo, H. J., McGaugh, S. S. & Bothun, G. D. Mon. Not. R. astr. Soc. 267, 129-140 (1994).
  26. Dalcanton, J. J., Spergel, D. N., Summers, F. J. Astrophys. J., (in press).
  27. McGaugh, S. S. Astrophys. J. 426, 135-149 (1994).
  28. Ronnback, J., & Bergvall, N. Astr. Astrophys., 302, 353-359 (1995).
  29. Kuijken, K. & Gilmore, G. Mon. Not. R. astr. Soc., 239, 605-649 (1989).
  30. Schombert, J. M., Bothun, G. D., Impey, C. D., & Mundy, L. G. Astron. J., 100, 1523-1529 (1990).
  31. Wilson, C. D. Astrophys. J. 448, L97-L100 (1995).
  32. Moore, B. Nature 370, 629-631 (1994).
  33. Navarro, J. F., Frenk, C. S., & White, S. D. M. Mon. Not. R. astr. Soc., 275, 720-728 (1995).
  34. Cole, S. & Lacey, C. Mon. Not. R. astr. Soc., in press.
  35. Flores, R. A. & Primack, J. R. Astrophys. J. 427, 1-4 (1994).
  36. Sanders, R. H., & Begeman, K. G. Mon. Not. R. astr. Soc. 266, 360-366 (1994).
  37. The, L. S., & White, S. D. M. Astron. J., 95, 1642-1651 (1988).
  38. White, S. D. M., Navarro, J. F., Evrard, A. E. & Frenk, C. S. Nature 366, 429-433 (1993).
  39. Sanders, R. H. Astron. Astrophys. 284, L31-L34 (1994).
  40. Bolte, M., & Hogan, C. J. Nature 376, 399-402 (1995).
  41. Bekenstein, J. D. & Sanders, R. H. Astrophys. J. 429, 480-490 (1994).
  42. Broeils, A. H., Ph.D. thesis, Univ. of Groningen (1992).