The curious case of AGC 114905: an isolated galaxy devoid of dark matter?

It’s early in the new year, so what better time to violate my own resolutions? I prefer to be forward-looking and not argue over petty details, or chase wayward butterflies. But sometimes the devil is in the details, and the occasional butterfly can be entertaining if distracting. Today’s butterfly is the galaxy AGC 114905, which has recently been in the news.

There are a couple of bandwagons here: one to rebrand very low surface brightness galaxies as ultradiffuse, and another to get overly excited when these types of galaxies appear to lack dark matter. The nomenclature is terrible, but that’s normal for astronomy so I would overlook it, except that in this case it gives the impression that there is some new population of galaxies behaving in an unexpected fashion, when instead it looks to me like the opposite is the case. The extent to which there are galaxies lacking dark matter is fundamental to our interpretation of the acceleration discrepancy (aka the missing mass problem), so bears closer scrutiny. The evidence for galaxies devoid of dark matter is considerably weaker than the current bandwagon portrays.

If it were just one butterfly (e.g., NGC 1052-DF2), I wouldn’t bother. Indeed, it was that specific case that made me resolve to ignore such distractions as a waste of time. I’ve seen this movie literally hundreds of times, I know how it goes:

  • Observations of this one galaxy falsify MOND!
  • Hmm, doing the calculation right, that’s what MOND predicts.
  • OK, but better data shrink the error bars, and now MOND is falsified.
  • Are you sure about…?
  • Yes. We like this answer, let’s stop thinking about it now.
  • As the data continue to improve, it approaches what MOND predicts.
  • <crickets>

Over and over again. DF44 is another example that has followed this trajectory, and there are many others. This common story is not widely known – people lose interest once they get the answer they want. Irrespective of whether we can explain this weird case or that, there is a deeper story here about data analysis and interpretation that seems not to be widely appreciated.

My own experience inevitably colors my attitude about this, as it does for us all, so let’s start thirty years ago when I was writing a dissertation on low surface brightness (LSB) galaxies. I did many things in my thesis, most of them well. One of the things I tried to do then was derive rotation curves for some LSB galaxies. This was not the main point of the thesis, and arose almost as an afterthought. It was also not successful, and I did not publish the results because I didn’t believe them. It wasn’t until a few years later, with improved data, analysis software, and the concerted efforts of Erwin de Blok, that we started to get a handle on things.

The thing that really bugged me at the time was not the Doppler measurements, but the inclinations. One has to correct the observed velocities by the inclination of the disk, 1/sin(i). The inclination can be constrained by the shape of the image and by the variation of velocities across the face of the disk. LSB galaxies presented raggedy images and messy velocity fields. I found it nigh on impossible to constrain their inclinations at the time, and it remains a frequent struggle to this day.

Here is an example of the LSB galaxy F577-V1 that I find lurking around on disk from all those years ago:

The LSB galaxy F577-V1 (B-band image, left) and the run of the eccentricity of ellipses fit to the atomic gas data (right).

A uniform disk projected on the sky at some inclination will have a fixed corresponding eccentricity, with zero being the limit of a circular disk seen perfectly face-on (i = 0). Do you see a constant value of the eccentricity in the graph above? If you say yes, go get your eyes checked.
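For a thin, intrinsically circular disk, the projected shape encodes the inclination: the axis ratio gives cos i = b/a, so the eccentricity of the isophotes is just sin i. A minimal sketch of that geometry (the axis ratios are illustrative, not the F577-V1 measurements):

```python
import math

def inclination_from_axis_ratio(b_over_a):
    """Inclination (degrees) of a thin, intrinsically circular disk
    inferred from the projected minor-to-major axis ratio b/a."""
    return math.degrees(math.acos(b_over_a))

def eccentricity(b_over_a):
    """Eccentricity of the projected ellipse: e = sqrt(1 - (b/a)^2).
    For a circular disk this equals sin(i)."""
    return math.sqrt(1.0 - b_over_a**2)

# A face-on circular disk (b/a = 1) has e = 0; any intrinsic
# elongation (e.g., a bar) mimics a nonzero inclination.
for b_over_a in (1.0, 0.95, 0.8, 0.5):
    i = inclination_from_axis_ratio(b_over_a)
    print(f"b/a = {b_over_a:4.2f} -> e = {eccentricity(b_over_a):.2f}, i = {i:5.1f} deg")
```

Note that an intrinsic elongation as mild as b/a = 0.5 reads as i = 60 degrees, which is exactly the trap discussed below for perturbed face-on disks.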

What we see in this case is a big transition from a fairly eccentric disk to one that is more nearly face on. The disk doesn’t have a sudden warp; the problem is that the assumption of a uniform disk is invalid. This galaxy has a bar – a quasi-linear feature, common in many spiral galaxies, that is supported by non-circular orbits. Even face-on, the bar will look elongated simply because it is. Indeed, the sudden change in eccentricity is one way to define the end of the bar, which the human eye-brain can do easily by looking at the image. So in a case like this, one might adopt the inclination from the outer points, and that might even be correct. But note that there are spiral arms along the outer edge visible to the eye, so it isn’t clear that even these isophotes are representative of the shape of the underlying disk. Worse, we don’t know what happens beyond the edge of the data; the shape might settle down at some other level that we can’t see.

This was so frustrating, I swore never to have anything to do with galaxy kinematics ever again. Over 50 papers on the subject later, all I can say is D’oh! Repeatedly.

Bars are rare in LSB galaxies, but it struck me as odd that we saw any at all. We discovered unexpectedly that they were dark matter dominated – the inferred dark halo outweighs the disk, even within the edge defined by the stars – but that meant that the disks should be stable against the formation of bars. My colleague Chris Mihos agreed, and decided to look into it. The answer was yes, LSB galaxies should be stable against bar formation, at least internally generated bars. Sometimes bars are driven by external perturbations, so we decided to simulate the close passage of a galaxy of similar mass – basically, whack it real hard and see what happens:

Simulation of an LSB galaxy during a strong tidal encounter with another galaxy. Closest approach is at t=24 in simulation units (between the first and second box). A linear bar does not form, but the model galaxy does suffer a strong and persistent oval distortion: all these images are shown face-on (i=0). From Mihos et al (1997).

This was a conventional simulation, with a dark matter halo constructed to be consistent with the observed properties of the LSB galaxy UGC 128. The results are not specific to this case; it merely provides numerical corroboration of the more general case that we showed analytically.

Consider the image above in the context of determining galaxy inclinations from isophotal shapes. We know this object is face-on because we can control our viewing angle in the simulation. However, we would not infer i=0 from this image. If we didn’t know it had been perturbed, we would happily infer a substantial inclination – in this case, easily as much as 60 degrees! This is an intentionally extreme case, but it illustrates how a small departure from a purely circular shape can be misinterpreted as an inclination. This is a systematic error, and one that usually makes the inclination larger than it is: it is possible to appear oval when face-on, but it is not possible to appear more face-on than perfectly circular.

Around the same time, Erwin and I were making fits to the LSB galaxy data – with both dark matter halos and MOND. By this point in my career, I had deeply internalized that the data for LSB galaxies were never perfect. So we sweated every detail, and worked through every “what if?” This was a particularly onerous task for the dark matter fits, which could do many different things if this or that were assumed – we discussed all the plausible possibilities at the time. (Subsequently, a rich literature sprang up discussing many unreasonable possibilities.) By comparison, the MOND fits were easy. They had fewer knobs, and in 2/3 of the cases they simply worked, no muss, no fuss.

For the other 1/3 of the cases, we noticed that the shape of the MOND-predicted rotation curves was usually right, but the amplitude was off. How could it work so often, and yet miss in this weird way? That sounded like a systematic error, and the inclination was the most obvious culprit, with 1/sin(i) making a big difference for small inclinations. So we decided to allow this as a fit parameter, to see whether a fit could be obtained, and judge how [un]reasonable this was. Here is an example for two galaxies:

UGC 1230 (left) and UGC 5005 (right). Ovals show the nominally measured inclinations (i = 22° for UGC 1230 and 41° for UGC 5005, respectively) and the MOND best-fit values (i = 17° and 30°). From de Blok & McGaugh (1998).

The case of UGC 1230 is memorable to me because it had a good rotation curve, despite being more face-on than widely considered acceptable for analysis. And for good reason: the difference between 22 and 17 degrees makes a huge difference to the fit, changing it from way off to picture perfect.
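To see why five degrees matters so much here, note that the deprojected rotation velocity scales as 1/sin i, which is steep at small i. A quick check with the two inclinations quoted for UGC 1230:

```python
import math

def deprojection_factor(i_deg):
    """Observed line-of-sight velocities are corrected by 1/sin(i)."""
    return 1.0 / math.sin(math.radians(i_deg))

# Inclinations for UGC 1230 from de Blok & McGaugh (1998)
i_nominal, i_fit = 22.0, 17.0
v_ratio = deprojection_factor(i_fit) / deprojection_factor(i_nominal)
print(f"Deprojected velocities change by {v_ratio:.2f}x, "
      f"so the implied dynamical mass (~V^2) changes by {v_ratio**2:.2f}x.")
```

A five degree nudge near face-on shifts the whole rotation curve by nearly 30% in amplitude, which is why the fit swings from way off to picture perfect.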

Rotation curve fits for UGC 1230 (top) and UGC 5005 (bottom) with the inclination fixed (left) and fit (right). From de Blok & McGaugh (1998).

What I took away from this exercise is how hard it is to tell the difference between inclination values for relatively face-on galaxies. UGC 1230 is obvious: the ovals for the two inclinations are practically on top of each other. The difference in the case of UGC 5005 is more pronounced, but look at the galaxy. The shape of the outer isophote where we’re trying to measure this is raggedy as all get out; this is par for the course for LSB galaxies. Worse, look further in – this galaxy has a bar! The central bar is almost orthogonal to the kinematic major axis. If we hadn’t observed as deeply as we had, we’d think the minor axis was the major axis, and the inclination was something even higher.

I remember Erwin quipping that he should write a paper on how to use MOND to determine inclinations. This was a joke between us, but only half so: using the procedure in this way would be analogous to using Tully-Fisher to measure distances. We would simply be applying an empirically established procedure to constrain a property of a galaxy – luminosity from line-width in the case of Tully-Fisher; inclination from rotation curve shape here. That we don’t understand why this works has never stopped astronomers before.

Systematic errors in inclination happen all the time. Big surveys don’t have time to image deeply – they have too much sky area to cover – and if there is follow-up about the gas content, it inevitably comes in the form of a single dish HI measurement. This is fine; it is what we can do en masse. But an unresolved single dish measurement provides no information about the inclination, only a line-width uncorrected for inclination (which itself is a crude proxy for the flat rotation speed). The inclination we have to take from the optical image, which would key on the easily detected, high surface brightness central region of the image. That’s the part that is most likely to show a bar-like distortion, so one can expect lots of systematic errors in the inclinations determined in this way. I provided a long yet still incomplete discussion of these issues in McGaugh (2012). This is both technical and intensely boring, so not even the pros read it.

This brings us to the case of AGC 114905, which is part of a sample of ultradiffuse galaxies discussed previously by some of the same authors. On that occasion, I kept to the code, and refrained from discussion. But for context, here are those data on a recent Baryonic Tully-Fisher plot. Spoiler alert: that post was about a different sample of galaxies that seemed to be off the relation but weren’t.

Baryonic Tully-Fisher relation showing the ultradiffuse galaxies discussed by Mancera Piña et al. (2019) as gray circles. These are all outliers from the relation; AGC 114905 is highlighted in orange. Placing much meaning in the outliers is a classic case of missing the forest for the trees. The outliers are trees. The Tully-Fisher relation is the forest.

On the face of it, these ultradiffuse galaxies (UDGs) are all very serious outliers. This is weird – they’re not some scatter off to one side, they’re just way off on their own island, with no apparent connection to the rest of established reality. By calling them a new name, UDG, it makes it sound plausible that these are some entirely novel population of galaxies that behave in a new way. But they’re not. They are exactly the same kinds of galaxies I’ve been talking about. They’re all blue, gas rich, low surface brightness, fairly isolated galaxies – all words that I’ve frequently used to describe my thesis sample. These UDGs are all a few billion solar masses in baryonic mass, very similar to F577-V1 above. You could give F577-V1 a different name, slip it into the sample, and nobody would notice that it wasn’t like one of the others.

The one slight difference is implied by the name: UDGs are a little lower in surface brightness. Indeed, once filter transformations are taken into account, the definition of ultradiffuse is equal to what I arbitrarily called very low surface brightness in 1996. Most of my old LSB sample galaxies have central stellar surface brightnesses at or a bit above 10 solar masses per square parsec while the UDGs here are a bit under this threshold. For comparison, in typical high surface brightness galaxies this quantity is many hundreds, often around a thousand. Nothing magic happens at the threshold of 10 solar masses per square parsec, so this line of definition between LSB and UDG is an observational distinction without a physical difference. So what are the odds of a different result for the same kind of galaxies?

Indeed, what really matters is the baryonic surface density, not just the stellar surface brightness. A galaxy made purely of gas but no stars would have zero optical surface brightness. I don’t know of any examples of that extreme, but we came close to it with the gas rich sample of Trachternach et al. (2009) when we tried this exact same exercise a decade ago. Despite selecting that sample to maximize the chance of deviations from the Baryonic Tully-Fisher relation, we found none – at least none that were credible: there were deviant cases, but their data were terrible. There were no deviants among the better data. This sample is comparable to or even more extreme than the UDGs in terms of baryonic surface density, so the UDGs can’t be exceptions simply because they’re a genuinely new population, whatever name we call them by.

The key thing is the credibility of the data, so let’s consider the data for AGC 114905. The kinematics are pretty well ordered; the velocity field is well observed for this kind of beast. It ought to be; they invested over 40 hours of JVLA time into this one galaxy. That’s more than went into my entire LSB thesis sample. The authors are all capable, competent people. I don’t think they’ve done anything wrong, per se. But they do seem to have climbed aboard the bandwagon of dark matter-free UDGs, and have talked themselves into believing smaller error bars on the inclination than I am persuaded is warranted.

Here is the picture of AGC 114905 from Mancera Piña et al. (2021):

AGC 114905 in stars (left) and gas (right). The contours of the gas distribution are shown on top of the stars in white. Figure 1 from Mancera Piña et al. (2021).

This messy morphology is typical of very low surface brightness galaxies – hence their frequent classification as Irregular galaxies. Though messier, it shares some morphological traits with the LSB galaxies shown above. The central light distribution is elongated with a major axis that is not aligned with that of the gas. The gas is raggedy as all get out. The contours are somewhat boxy; this is a hint that something hinky is going on beyond circular motion in a tilted axisymmetric disk.

The authors do the right thing and worry about the inclination, checking to see what it would take to be consistent with either LCDM or MOND, which is about i = 11° instead of the 30° indicated by the shape of the outer isophote. They even build a model to check the plausibility of the smaller inclination:

Contours of models of disks with different inclinations (lines, as labeled) compared to the outer contour of the gas distribution of AGC 114905. Figure 7 from Mancera Piña et al. (2021).

Clearly the black line (i = 30°) is a better fit to the shape of the gas distribution than the blue dashed line (i = 11°). Consequently, they “find it unlikely that we are severely overestimating the inclination of our UDG, although this remains the largest source of uncertainty in our analysis.” I certainly agree with the latter phrase, but not the former. I think it is quite likely that they are overestimating the inclination. I wouldn’t even call it a severe overestimation; more like par for the course with this kind of object.

As I have emphasized above and elsewhere, there are many things that can go wrong in this sort of analysis. But if I were to try to put my finger on the most important thing, here it would be the inclination. The modeling exercise is good, but it assumes “razor-thin axisymmetric discs.” That’s a reasonable thing to do when building such a model, but we have to bear in mind that real disks are neither. The thickness of the disk probably doesn’t matter too much for a nearly face-on case like this, but the assumption of axisymmetry is extraordinarily dubious for an Irregular galaxy. That’s how they got the name.

It is hard to build models that are not axisymmetric. Once you drop this simplifying assumption, where do you even start? So I don’t fault them for stopping at this juncture, but I can also imagine doing as de Blok suggested, using MOND to set the inclination. Then one could build models with asymmetric features by trial and error until a match is obtained. Would we know that such a model would be a better representation of reality? No. Could we exclude such a model? Also no. So the bottom line is that I am not convinced that the uncertainty in the inclination is anywhere near as small as the adopted ±3°.
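To put numbers on why the inclination is the whole ballgame here (a back-of-the-envelope sketch; the inclinations are from the paper, the rest is just trigonometry):

```python
import math

def sin_d(deg):
    return math.sin(math.radians(deg))

# Inclinations for AGC 114905 from Mancera Piña et al. (2021)
i_isophotal, i_mond = 30.0, 11.0
v_factor = sin_d(i_isophotal) / sin_d(i_mond)   # deprojected V scales as 1/sin(i)
print(f"Deprojected velocity changes by {v_factor:.2f}x between the two inclinations,")
print(f"so the implied dynamical mass (~V^2) changes by ~{v_factor**2:.1f}x.")

# Propagating the adopted +/-3 degree uncertainty: dV/V = cot(i) * di
for i in (30.0, 11.0):
    dv_over_v = math.radians(3.0) / math.tan(math.radians(i))
    print(f"At i = {i:.0f} deg, +/-3 deg gives a ~{100*dv_over_v:.0f}% velocity uncertainty.")
```

The same ±3° that is a modest velocity uncertainty at 30° becomes a very large one near face-on, which is exactly where this galaxy may actually live.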

That’s very deep in the devilish details. If one is worried about a particular result, one can back off and ask if it makes sense in the context of what we already know. I’ve illustrated this process previously. First, check the empirical facts. Every other galaxy in the universe with credible data falls on the Baryonic Tully-Fisher relation, including very similar galaxies that go by a slightly different name. Hmm, strike one. Second, check what we expect from theory. I’m not a fan of theory-informed data interpretation, but we know that LCDM, unlike SCDM before it, at least gets the amplitude of the rotation speed in the right ballpark (Vflat ~ V200). Except here. Strike two. As much as we might favor LCDM as the standard cosmology, it has now been extraordinarily well established that MOND has considerable success in not just explaining but predicting these kinds of data, with literally hundreds of examples. One hundred was the threshold Vera Rubin obtained to refute excuses made to explain away the first few flat rotation curves. We’ve crossed that threshold: MOND phenomenology is as well established now as flat rotation curves were at the inception of the dark matter paradigm. So while I’m open to alternative explanations for the MOND phenomenology, seeing that a few trees stand out from the forest is never going to be as important as the forest itself.

The Baryonic Tully-Fisher relation exists empirically; we have to explain it in any theory. Either we explain it, or we don’t. We can’t have it both ways, just conveniently throwing away our explanation to accommodate any discrepant observation that comes along. That’s what we’d have to do here: if we can explain the relation, we can’t very well explain the outliers. If we explain the outliers, it trashes our explanation for the relation. If some galaxies are genuine exceptions, then there are probably exceptional reasons for them to be exceptions, like a departure from equilibrium. That can happen in any theory, rendering such a test moot: a basic tenet of objectivity is that we don’t get to blame a missed prediction of LCDM on departures from equilibrium without considering the same possibility for MOND.

This brings us to a physical effect that people should be aware of. We touched on the bar stability above, and how a galaxy might look oval even when seen face on. This happens fairly naturally in MOND simulations of isolated disk galaxies. They form bars and spirals and their outer parts wobble about. See, for example, this simulation by Nils Wittenburg. This particular example is a relatively massive galaxy; the lopsidedness reminds me of M101 (Watkins et al. 2017). Lower mass galaxies deeper in the MOND regime are likely even more wobbly. This happens because disks are only marginally stable in MOND, not the over-stabilized entities that have to be hammered to show a response as in our early simulation of UGC 128 above. The point is that there is good reason to expect even isolated face-on dwarf Irregulars to look, well, irregular, leading to exactly the issues with inclination determinations discussed above. Rather than being a contradiction to MOND, AGC 114905 may illustrate one of its inevitable consequences.

I don’t like to bicker at this level of detail, but it makes a profound difference to the interpretation. I do think we should be skeptical of results that contradict well established observational reality – especially when over-hyped. God knows I was skeptical of our own results, which initially surprised the bejeepers out of me, but have been repeatedly corroborated by subsequent observations.

I guess I’m old now, so I wonder how I come across to younger practitioners; perhaps as some scary undead monster. But mates, these claims about UDGs deviating from established scaling relations are off the edge of the map.

What JWST will see


Big galaxies at high redshift!

That’s my prediction, anyway. A little context first.

New Year, New Telescope

First, JWST finally launched. This has been a long-delayed NASA mission; the launch had been put off so many times it felt like a living example of Zeno’s paradox: ever closer but never quite there. A successful launch is always a relief – rockets do sometimes blow up on lift off – but there is still sweating to be done: it has one of the most complex deployments of any space mission. This is still a work in progress, but to start the new year, I thought it would be nice to look forward to what we hope to see.

JWST is a major space telescope optimized for observing in the near and mid-infrared. This enables observation of redshifted light from the earliest galaxies. This should enable us to see them as they would appear to our eyes had we been around at the time. And that time is long, long ago, in galaxies very far away: in principle, we should be able to see the first galaxies in their infancy, 13+ billion years ago. So what should we expect to see?

Early galaxies in LCDM

A theory is only as good as its prior. In LCDM, structure forms hierarchically: small objects emerge first, then merge into larger ones. It takes time to build up large galaxies like the Milky Way; the common estimate early on was that it would take at least a billion years to assemble an L* galaxy, and it could easily take longer. Ach, terminology: L* is the characteristic luminosity of the Schechter function we commonly use to describe the number density of galaxies of various brightnesses. L* galaxies like the Milky Way are common, but the number of brighter galaxies falls precipitously. Bigger galaxies exist, but they are rare above this characteristic brightness, so L* is shorthand for a galaxy of typical brightness.
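For concreteness, here is how precipitously the Schechter function falls above L* (a quick sketch; the faint-end slope α = −1.25 and unit normalization are illustrative values, not a fit to any particular survey):

```python
import math

def schechter(L_over_Lstar, alpha=-1.25, phi_star=1.0):
    """Schechter luminosity function, phi(x) = phi* x^alpha exp(-x)
    with x = L/L*. The alpha and phi* values here are illustrative."""
    x = L_over_Lstar
    return phi_star * x**alpha * math.exp(-x)

# Number density relative to L* galaxies:
for x in (0.5, 1.0, 2.0, 4.0):
    print(f"L = {x:3.1f} L*: number density = {schechter(x)/schechter(1.0):.4f} x that at L*")
```

The exponential cutoff is what makes galaxies much brighter than L* rare, which is why L* serves as shorthand for a typical bright galaxy.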

We expect galaxies to start small and slowly build up in size. This is a very basic prediction of LCDM. The hierarchical growth of dark matter halos is fundamental, and relatively easy to calculate. How this translates to the visible parts of galaxies is more fraught, depending on the details of baryonic infall, star formation, and the many kinds of feedback. [While I am a frequent critic of model feedback schemes implemented in hydrodynamic simulations on galactic scales, there is no doubt that feedback happens on the much smaller scales of individual stars and their nurseries. These are two very different things for which we confusingly use the same word since the former is the aspirational result of the latter.] That said, one only expects to assemble mass so fast, so the natural expectation is to see small galaxies first, with larger galaxies emerging slowly as their host dark matter halos merge together.

Here is an example of a model formation history that results in the brightest galaxy in a cluster (from De Lucia & Blaizot 2007). Little things merge to form bigger things (hence “hierarchical”). This happens a lot, and it isn’t really clear when you would say the main galaxy had formed. The final product (at lookback time zero, at redshift z=0) is a big galaxy composed of old stars – fairly typical for a giant elliptical. But the most massive progenitor is still rather small 8 billion years ago, over 4 billion years after the Big Bang. The final product doesn’t really emerge until the last major merger around 4 billion years ago. This is just one example in one model, and there are many different models, so your mileage will vary. But you get the idea: it takes a long time and a lot of mergers to assemble a big galaxy.

Brightest cluster galaxy merger tree. Time progresses upwards from early in the universe at bottom to the present day at top. Every line is a small galaxy that merges to ultimately form the larger galaxy. Symbols are color-coded by B−V color (red meaning old stars, blue young) and their area scales with the stellar mass (bigger circles being bigger galaxies). From De Lucia & Blaizot (2007).

It is important to note that in a hierarchical model, the age of a galaxy is not the same as the age of the stars that make up the galaxy. According to De Lucia & Blaizot, the stars of the brightest cluster galaxies

“are formed very early (50 per cent at z~5, 80 per cent at z~3)”

but do so

“in many small galaxies”

– i.e., the little progenitor circles in the plot above. The brightest cluster galaxies in their model build up rather slowly, such that

“half their final mass is typically locked-up in a single galaxy after z~0.5.”

De Lucia & Blaizot (2007)

So all the star formation happens early in the little things, but the final big thing emerges later – a lot later, only reaching half its current size when the universe is about 8 Gyr old. (That’s roughly when the solar system formed: we are late-comers to this party.) Given this prediction, one can imagine that JWST should see lots of small galaxies at high redshift, their early star formation popping off like firecrackers, but it shouldn’t see any big galaxies early on – not really at z > 3 and certainly not at z > 5.

Big galaxies in the data at early times?

While JWST is eagerly awaited, people have not been idle about looking into this. There have been many deep surveys made with the Hubble Space Telescope, augmented by the infrared capable (and now sadly defunct) Spitzer Space Telescope. These have already spied a number of big galaxies at surprisingly high redshift. So surprising that Steinhardt et al. (2016) dubbed it “The Impossibly Early Galaxy Problem.” This is their key plot:

The observed (points) and predicted (lines) luminosity functions of galaxies at various redshifts (colors). If all were well, the points would follow the lines of the same color. Instead, galaxies appear to be brighter than expected, already big at the highest redshifts probed. From Steinhardt et al. (2016).

There are lots of caveats to this kind of work. Constructing the galaxy luminosity function is a challenging task at any redshift; getting it right at high redshift especially so. While what counts as “high” varies, I’d say everything on the above plot counts. Steinhardt et al. (2016) worry about these details at considerable length but don’t find any plausible way out.

Around the same time, one of our graduate students, Jay Franck, was looking into similar issues. One of the things he found was that not only were there big galaxies in place early on, but they were also in clusters (or at least protoclusters) early and often. That is to say, not only are the galaxies too big too soon, so are the clusters in which they reside.

Dr. Franck made his own comparison of data to models, using the Millennium simulation to devise an apples-to-apples comparison:

The apparent magnitude m* at 4.5 microns of L* galaxies in clusters as a function of redshift. Circles are data; squares represent the Millennium simulation. These diverge at z > 2: galaxies are brighter (smaller m*) than predicted (Fig. 5.5 from Franck 2017).

The result is that the data look more like big galaxies formed early already as big galaxies. The solid lines are “passive evolution” models in which all the stars form in a short period starting at z=10. This starting point is an arbitrary choice, but there is little cosmic time between z = 10 and 20 – just a few hundred million years, barely one spin around the Milky Way. This is a short time in stellar evolution, so is practically the same as starting right at the beginning of time. As Jay put it,

“High redshift cluster galaxies appear to be consistent with an old stellar population… they do not appear to be rapidly assembling stellar mass at these epochs.”

Franck 2017

We see old stars, but we don’t see the predicted assembly of galaxies via mergers, at least not at the expected time. Rather, it looks like some galaxies were already big very early on.
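The observation of old stars at high redshift hinges on how little cosmic time separates z = 10 from z = 20. A quick estimate (the flat-ΛCDM parameters are assumed illustrative values; the matter-dominated approximation is fine at these redshifts):

```python
import math

# Illustrative flat-LCDM parameters (assumed, not from the text):
H0 = 70.0                  # km/s/Mpc
Omega_m = 0.3
H0_per_Gyr = H0 / 978.0    # 1 km/s/Mpc ~ 1/978 Gyr^-1

def age_at_z(z):
    """Age of the universe at redshift z in Gyr, using the
    matter-dominated approximation t ~ (2/3) H0^-1 Omega_m^-0.5 (1+z)^-1.5,
    good to a few percent at z >> 1."""
    return (2.0 / 3.0) / (H0_per_Gyr * math.sqrt(Omega_m)) * (1.0 + z) ** -1.5

dt = age_at_z(10) - age_at_z(20)
print(f"t(z=10) = {age_at_z(10):.2f} Gyr, t(z=20) = {age_at_z(20):.2f} Gyr, "
      f"difference = {dt * 1000:.0f} Myr")
```

Only a few hundred million years separate the two redshifts, so where exactly the passive evolution models start matters little: it is practically the beginning of time either way.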

As someone who has worked mostly on well resolved, relatively nearby galaxies, all this makes me queasy. Jay, and many others, have worked desperately hard to squeeze knowledge from the faint smudges detected by first generation space telescopes. JWST should bring these into much better focus.

Early galaxies in MOND

To go back to the first line of this post, big galaxies at high redshift did not come as a surprise to me. It is what we expect in MOND.

Structure formation is generally considered a great success of LCDM. It is straightforward and robust to calculate on large scales in linear perturbation theory. Individual galaxies, on the other hand, are highly non-linear objects, making them hard beasts to tame in a model. In MOND, it is the other way around – predicting the behavior of individual galaxies is straightforward – only the observed distribution of mass matters, not all the details of how it came to be that way – but what happens as structure forms in the early universe is highly non-linear.

The non-linearity of MOND makes it hard to work with computationally. It is also crucial to how structure forms. I provide here an outline of how I expect structure formation to proceed in MOND. This page is now old, even ancient in internet time, as the golden age for this work was 15 – 20 years ago, when all the essential predictions were made and I was naive enough to think cosmologists were amenable to reason. Since the horizon of scientific memory is shorter than that, I felt it necessary to review in 2015. That is now itself over the horizon, so with the launch of JWST, it seems appropriate to remind the community yet again that these predictions exist.

This 1998 paper by Bob Sanders is a foundational paper in this field (see also Sanders 2001 and the other references given on the structure formation page). He says, right in the abstract,

“Objects of galaxy mass are the first virialized objects to form (by z = 10), and larger structure develops rapidly.”

Sanders (1998)

This was a remarkable prediction to make in 1998. Galaxies, much less larger structures, were supposed to take much longer to form. It takes time to go from the small initial perturbations that we see in the CMB at z=1000 to large objects like galaxies. Indeed, it takes at least a few hundred million years simply in free fall time to assemble a galaxy’s worth of mass – a hard limit. Here Sanders was saying that an L* galaxy might assemble as early as half a billion years after the Big Bang.
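The free-fall limit is easy to check with a back-of-the-envelope estimate (the mass and radius below are illustrative galaxy-scale numbers, not taken from Sanders’s paper):

```python
import math

G = 6.674e-11       # m^3 kg^-1 s^-2
M_sun = 1.989e30    # kg
kpc = 3.086e19      # m
Gyr = 3.156e16      # s

def free_fall_time(M_solar, R_kpc):
    """Free-fall time t_ff = sqrt(3*pi / (32 G rho)) for a uniform
    sphere of mass M and radius R, returned in Gyr."""
    M = M_solar * M_sun
    R = R_kpc * kpc
    rho = M / (4.0 / 3.0 * math.pi * R**3)
    return math.sqrt(3.0 * math.pi / (32.0 * G * rho)) / Gyr

# An L*-ish mass of baryons spread over a protogalactic volume
# (illustrative numbers):
t_ff = free_fall_time(1e11, 50.0)
print(f"t_ff ~ {t_ff:.1f} Gyr")
```

The answer comes out to roughly half a billion years, which is why assembling an L* galaxy by z ~ 10 pushes right up against the hard limit, and why Sanders’s prediction was so bold.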

So how can this happen? Without dark matter to lend a helping hand, structure formation in the very early universe is inhibited by the radiation field. This inhibition is removed around z ~ 200, with exactly when being very sensitive to the baryon density. At this point, the baryon perturbations suddenly find themselves deep in the MOND regime, and behave as if there is a huge amount of dark matter. Structure formation proceeds hierarchically, as it must, but on a highly compressed timescale. To distinguish it from LCDM hierarchical galaxy formation, let’s call it prompt structure formation. In prompt structure formation, we expect

  • Early reionization (z ~ 20)
  • Some L* galaxies by z ~ 10
  • Early emergence of the cosmic web
  • Massive clusters already at z > 2
  • Large, empty voids
  • Large peculiar velocities
  • A very large homogeneity scale, maybe fractal over 100s of Mpc
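The sense in which the baryon perturbations "behave as if there is a huge amount of dark matter" can be made quantitative. In the deep-MOND limit the true acceleration is g = sqrt(gN · a0), so a Newtonian analysis would infer a mass boosted over the baryonic mass by the factor sqrt(a0/gN). A minimal sketch (the clump mass and radius below are made-up illustrative numbers):

```python
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
A0 = 1.2e-10         # Milgrom's acceleration constant, m/s^2
M_SUN = 1.989e30     # kg
KPC = 3.086e19       # m

def mond_boost(mass_msun, radius_kpc):
    """Ratio of deep-MOND to Newtonian acceleration, sqrt(a0/gN).
    Newtonian reasoning attributes this factor to 'extra' (phantom)
    dark mass. Returns 1 when the system is Newtonian (gN > a0)."""
    gN = G * mass_msun * M_SUN / (radius_kpc * KPC)**2
    return math.sqrt(A0 / gN) if gN < A0 else 1.0

# A 1e9 Msun baryonic clump probed at 30 kpc (illustrative numbers):
print(f"apparent mass boost ~ {mond_boost(1e9, 30.0):.0f}x")
```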

There are already indications of all of these things, nearly all of which were predicted in advance of the relevant observations. I could elaborate, but that is beyond the scope of this post. People should read the references* if they’re keen.

*Reading the science papers is mandatory for the pros, who often seem fond of making straw man arguments about what they imagine MOND might do without bothering to check. I once referred some self-styled experts in structure formation to Sanders’s work. They promptly replied “That would mean structures of 10^18 M☉!” when what he said was

“The largest objects being virialized now would be clusters of galaxies with masses in excess of 10^14 M☉. Superclusters would only now be reaching maximum expansion.”

Sanders (1998)

The exact numbers are very sensitive to cosmological parameters, as Sanders discussed, but I have no idea where the “experts” got 10^18, other than just making stuff up. More importantly, Sanders’s statement clearly presaged the observation of very massive clusters at surprisingly high redshift and the discovery of the Laniakea Supercluster.

These are just the early predictions of prompt structure formation, made in the same spirit that enabled me to predict the second peak of the microwave background and the absorption signal observed by EDGES at cosmic dawn. Since that time, at least two additional schools of thought as to how MOND might impact cosmology have emerged. One of them is the sterile neutrino MOND cosmology suggested by Angus and being actively pursued by the Bonn-Prague research group. Very recently, there is of course the new relativistic theory of Skordis & Złośnik which fits the cosmologists’ holy grail of the power spectrum in both the CMB at z = 1090 and galaxies at z = 0. There should be an active exchange and debate between these approaches, with perhaps new ones emerging.

Instead, we lack critical mass. Most of the community remains entirely obsessed with pursuing the vain chimera of invisible mass. I fear that this will eventually prove to be one of the greatest wastes of brainpower (some of it my own) in the history of science. I can only hope I’m wrong, as many brilliant people seem likely to waste their careers running garbage-in, garbage-out computer simulations, or sitting at the bottom of a mine shaft failing to detect what isn’t there.

A beautiful mess

JWST can’t answer all of these questions, but it will help enormously with galaxy formation, which is bound to be messy. It’s not like L* galaxies are going to spring fully formed from the void like Athena from the forehead of Zeus. The early universe must be a chaotic place, with clumps of gas condensing to form the first stars that irradiate the surrounding intergalactic gas with UV photons before detonating as the first supernovae, and the clumps of stars merging to form giant elliptical galaxies while elsewhere gas manages to pool and settle into the large disks of spiral galaxies. When all this happens, how it happens, and how big galaxies get how fast are all to be determined – but now accessible to direct observation thanks to JWST.

It’s going to be a confusing, beautiful mess, in the best possible way – one that promises to test and challenge our predictions and preconceptions about structure formation in the early universe.

Super spirals on the Tully-Fisher relation

Super spirals on the Tully-Fisher relation

A surprising and ultimately career-altering result that I encountered during my first postdoc was that low surface brightness galaxies fall precisely on the Tully-Fisher relation. That result led me to test the limits of the relation in every conceivable way. Are there galaxies that fall off it? How far does it extend? Often, that has meant pushing the boundaries of known galaxies to ever lower surface brightness, higher gas fraction, and lower mass, where galaxies are hard to find because of unavoidable selection biases in galaxy surveys: dim galaxies are hard to see.

I made a summary plot in 2017 to illustrate what we had learned to that point. There is a clear break in the stellar mass Tully-Fisher relation (left panel) that results from neglecting the mass of interstellar gas that becomes increasingly important in lower mass galaxies. The break goes away when you add in the gas mass (right panel). The relation between baryonic mass and rotation speed is continuous down to Leo P, a tiny galaxy just outside the Local Group comparable in mass to a globular cluster and the current record holder for the slowest known rotating galaxy at a mere 15 km/s.

The stellar mass (left) and baryonic (right) Tully-Fisher relations constructed in 2017 from SPARC data and gas rich galaxies. Dark blue points are star dominated galaxies; light blue points are galaxies with more mass in gas than in stars. The data are restricted to galaxies with distance measurements accurate to 20% or better; see McGaugh et al. (2019) for a discussion of the effects of different quality criteria. The line has a slope of 4 and is identical in both panels for comparison.
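The continuity of the relation down to Leo P can be sanity-checked against the baryonic Tully-Fisher relation itself, Mb = A·Vf^4. A minimal sketch – the normalization A ≈ 50 M☉ km⁻⁴ s⁴ is an approximate value from published fits and should be treated as an assumption:

```python
def btfr_mass(vflat_kms, A=50.0):
    """Baryonic Tully-Fisher relation: Mb = A * Vf^4.
    A ~ 50 Msun km^-4 s^4 is an approximate published normalization."""
    return A * vflat_kms**4

# Leo P rotates at a mere ~15 km/s:
print(f"Mb(Leo P) ~ {btfr_mass(15.0):.1e} Msun")
# A bright spiral at ~200 km/s:
print(f"Mb(L*)    ~ {btfr_mass(200.0):.1e} Msun")
```

The 15 km/s case comes out at a few million solar masses – globular cluster territory, as the text says – while 200 km/s lands near 10^11 M☉, spanning the relation.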

At the high mass end, galaxies aren’t hard to see, but they do become progressively rare: there is an exponential cut off in the intrinsic numbers of galaxies at the high mass end. So it is interesting to see how far up in mass we can go. Ogle et al. set out to do that, looking over a huge volume to identify a number of very massive galaxies, including what they dubbed “super spirals.” These extend the Tully-Fisher relation to higher masses.

The Tully-Fisher relation extended to very massive “super” spirals (blue points) by Ogle et al. (2019).

Most of the super spirals lie on the top end of the Tully-Fisher relation. However, a half dozen of the most massive cases fall off to the right. Could this be a break in the relation? So it was claimed at the time, but looking at the data, I wasn’t convinced. It looked to me like they were not always getting out to the flat part of the rotation curve, instead measuring the maximum rotation speed.

Bright galaxies tend to have rapidly rising rotation curves that peak early then fall before flattening out. For very bright galaxies – and super spirals are by definition the brightest spirals – the amplitude of the decline can be substantial, several tens of km/s. So if one measures the maximum speed instead of the flat portion of the curve, points will fall to the right of the relation. I decided not to lose any sleep over it, and wait for better data.

Better data have now been provided by Di Teodoro et al. Here is an example from their paper. The morphology of the rotation curve is typical of what we see in massive spiral galaxies. The maximum rotation speed exceeds 300 km/s, but falls to 275 km/s where it flattens out.

A super spiral (left) and its rotation curve (right) from Di Teodoro et al.
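The effect of measuring the peak speed instead of the flat speed is easy to quantify: on a slope-4 relation, a velocity offset of Δlog V translates into a mass-equivalent offset of 4·Δlog V. A quick sketch using the 300 vs. 275 km/s numbers above:

```python
import math

def tf_offset_dex(v_max, v_flat, slope=4.0):
    """Horizontal (velocity) offset in dex from plotting Vmax instead of
    Vflat, and the equivalent mass offset on a Tully-Fisher relation of
    the given slope."""
    dlogv = math.log10(v_max / v_flat)
    return dlogv, slope * dlogv

dv, dm = tf_offset_dex(300.0, 275.0)
print(f"velocity offset: {dv:.3f} dex -> mass-equivalent: {dm:.2f} dex "
      f"(factor {10**dm:.2f})")
```

A ~0.04 dex shift in velocity looks small, but the steep slope amplifies it to ~0.15 dex – enough to make the most massive points appear to fall off the relation.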

Adding the updated data to the plot, we see that the super spirals now fall on the Tully-Fisher relation, with no hint of a break. There are a couple of outliers, but those are trees. The relation is the forest.

The super spiral (red points) stellar mass (left) and baryonic (right) Tully-Fisher relations as updated by Di Teodoro et al. (2021).

That’s a good plot, but it stops at 10^8 solar masses, so I couldn’t resist adding the super spirals to my plot from 2017. I’ve also included the dwarfs I discussed in the last post. Together, we see that the baryonic Tully-Fisher relation is continuous over six decades in mass – a factor of a million from the smallest to the largest galaxies.

The plot from above updated to include the super spirals (red points) at high mass and Local Group dwarfs (gray squares) at low mass. The SPARC data (blue points) have also been updated with new stellar population mass-to-light ratio estimates that make their bulge components a bit more massive, and with scaling relations for metallicity and molecular gas. The super spirals have been treated in the same way, and adjusted to a matching distance scale (H0 = 73 km/s/Mpc). There is some overlap between the super spirals and the most massive galaxies in SPARC; here the data are in excellent agreement. The super spirals extend to higher mass by a factor of two.

The strength of this correlation continues to amaze me. This never happens in extragalactic astronomy, where correlations are typically weak and have lots of intrinsic scatter. The opposite is true here. This must be telling us something.

The obvious thing that this is telling us is MOND. The initial report that super spirals fell off of the Tully-Fisher relation was widely hailed as a disproof of MOND. I’ve seen this movie many times, so I am not surprised that the answer changed in this fashion. It happens over and over again. Even less surprising is that there is no retraction, no self-examination of whether maybe we jumped to the wrong conclusion.

I get it. I couldn’t believe it myself, to start. I struggled for many years to explain the data conventionally in terms of dark matter. Worked my ass off trying to save the paradigm. Try as I might, nothing worked. Since then, many people have claimed to explain what I could not, but so far all I have seen are variations on models that I had already rejected as obviously unworkable. They either make unsubstantiated assumptions, building a tautology, or simply claim more than they demonstrate. As long as you say what people want to hear, you will be held to a very low standard. If you say what they don’t want to hear, what they are conditioned not to believe, then no standard of proof is high enough.

MOND was the only theory to predict the observed behavior a priori. There are no free parameters in the plots above. We measure the mass and the rotation speed. The data fall on the predicted line. Dark matter models did not predict this, and can at best hope to provide a convoluted, retroactive explanation. Why should I be impressed by that?

The RAR extended by weak lensing

The RAR extended by weak lensing

Last time, I expressed despondency about the lack of progress due to attitudes that in many ways remain firmly entrenched in the 1980s. Recently a nice result has appeared, so maybe there is some hope.

The radial acceleration relation (RAR) measured in rotationally supported galaxies extends down to an observed acceleration of about gobs = 10^-11 m/s/s, about one part in 10^12 of the acceleration we feel here on the surface of the Earth. In some extreme dwarfs, we get down below 10^-12 m/s/s. But accelerations this low are hard to find except in the depths of intergalactic space.
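For scale, the centripetal acceleration in the outer disk of a bright spiral is just g = V²/R. A quick sketch with illustrative numbers:

```python
KPC = 3.086e19   # one kiloparsec in meters

def centripetal(v_kms, r_kpc):
    """Centripetal acceleration g = V^2 / R, in m/s^2."""
    return (v_kms * 1e3)**2 / (r_kpc * KPC)

# Outer disk of a bright spiral (illustrative: 200 km/s at 30 kpc):
g_gal = centripetal(200.0, 30.0)
print(f"g_gal ~ {g_gal:.1e} m/s/s, vs ~9.8 m/s/s on Earth's surface")
```

This lands right around the 10^-11 m/s/s scale quoted above – which is why probing accelerations another decade or two lower requires going out to hundreds of kpc.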

Weak lensing data

Brouwer et al have obtained a new constraint down to 10^-12.5 m/s/s using weak gravitational lensing. This technique empowers one to probe the gravitational potential of massive galaxies out to nearly 1 Mpc. (The bulk of the luminous mass is typically confined within a few kpc.) To do this, one looks for the net statistical distortion of galaxies behind a lensing mass like a giant elliptical galaxy. I always found this approach a little scary, because you can’t see the signal directly with your eyes the way you can the velocities in a galaxy measured with a long slit spectrograph. Moreover, one has to bin and stack the data, so the result isn’t for an individual galaxy, but rather the average of galaxies within the bin, however defined. There are further technical issues that make this challenging, but it’s what one has to do to get farther out.

Doing all that, Brouwer et al obtained this RAR:

The radial acceleration relation from weak lensing measured by Brouwer et al (2021). The red squares and bluescale at the top right are the RAR from rotating galaxies (McGaugh et al 2016). The blue, black, and orange points are the new weak lensing results.

To parse a few of the details: there are two basic results here, one from the GAMA survey (the blue points) and one from KiDS. KiDS is larger, so it has smaller formal errors, but it relies on photometric redshifts (which use many colors to estimate the most likely redshift). That’s probably OK in a statistical sense, but photometric redshifts are not as accurate as the spectroscopic redshifts measured for GAMA. There is a lot of structure in redshift space that gets washed out by photometric redshift estimates. The fact that the two basically agree hopefully means that this doesn’t matter here.

There are two versions of the KiDS data, one using just the stellar mass to estimate gbar, and another that includes an estimate of the coronal gas mass. Many galaxies are surrounded by a hot corona of gas. This is negligible at small radii where the stars dominate, but becomes progressively more important as part of the baryonic mass budget as one moves out. How important? Hard to say. But it certainly matters on scales of a few hundred kpc (this is the CGM in the baryon pie chart, which suggests roughly equal mass in stars, all within a few tens of kpc, and hot coronal gas, mostly out beyond 100 kpc). The corona corresponds to the orange points; the black points are what happens if we neglect this component (which certainly isn’t zero). The truth is in there somewhere – this seems to be the dominant systematic uncertainty.

Getting past these pesky details, this result is cool on many levels. First, the RAR appears to persist as a relation. That needn’t have happened. Second, it extends the RAR by a couple of decades to much lower accelerations. Third, it applies to non-rotating as well as rotationally supported galaxies (more on that in a bit). Fourth, the data at very low accelerations follow a straight line with a slope of about 1/2 in this log-log plot. That means gobs ~ gbar^1/2. That provides a test of theory.

What does it mean?

Empirically, this is a confirmation that a known if widely unexpected relation extends further than previously known. That’s pretty neat in its own right, without any theoretical baggage. We used to be able to appreciate empirical relations better (e.g., the stellar main sequence!) before we understood what they meant. Now we seem to put the cart (theory) before the horse (data). That said, we do want to use data to test theories. Usually I discuss dark matter first, but that is complicated, so let’s start with MOND.

Test of MOND

MOND predicts what we see.

I am tempted to leave it at that, because it’s really that simple. But experience has taught me that no result is so obvious that someone won’t claim exactly the opposite, so let’s explore it a bit more.

There are three tests: whether the relation (i) exists, (ii) has the right slope, and (iii) has the right normalization. Tests (i) and (ii) are an immediate pass. It also looks like (iii) is very nearly correct, but it depends in detail on the baryonic mass-to-light ratio – that of the stars plus any coronal gas.

MOND is represented by the grey line that’s hard to see, but goes through the data at both high and low acceleration. At high accelerations, this particular line is a fitting function I chose for convenience. There’s nothing special about it, nor is it even specific to MOND. That was the point of our 2016 RAR paper: this relation exists in the data whether it is due to MOND or not. Conceivably, the RAR might be a relation that only applies to rotating galaxies for some reason that isn’t MOND. That’s hard to sustain, since the data look like MOND – so much so that the two are impossible to distinguish in this plane.

In terms of MOND, the RAR traces the interpolation function that quantifies the transition from the Newtonian regime where gobs = gbar to the deep MOND regime where gobs ~ gbar^1/2. MOND does not specify the precise form of the interpolation function, just the asymptotic limits. The data trace the transition, providing an empirical assessment of the shape of the interpolation function around the acceleration scale a0. That’s interesting and will hopefully inform further theory development, but it is not critical to testing MOND.

What MOND does very explicitly predict is the asymptotic behavior gobs ~ gbar^1/2 in the deep MOND regime of low accelerations (gobs << a0). That the lensing data are well into this regime makes them an excellent test of this strong prediction of MOND. It passes with flying colors: the data have precisely the slope anticipated by Milgrom nearly 40 years ago.
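For concreteness, the fitting function used in the 2016 RAR paper, gobs = gbar / (1 − exp(−sqrt(gbar/g†))) with g† ≈ 1.2 × 10^-10 m/s², has exactly the two required limits. A quick numerical check of both asymptotes:

```python
import math

GDAG = 1.2e-10   # acceleration scale g-dagger of the 2016 fit, m/s^2

def rar(gbar):
    """RAR fitting function of McGaugh et al. (2016):
    gobs = gbar / (1 - exp(-sqrt(gbar / g_dagger)))."""
    return gbar / (1.0 - math.exp(-math.sqrt(gbar / GDAG)))

# High-acceleration limit: gobs -> gbar (Newtonian)
print("Newtonian ratio:", rar(1e-8) / 1e-8)
# Low-acceleration limit: gobs -> sqrt(gbar * g_dagger) (deep MOND, slope 1/2)
g = 1e-13
print("deep-MOND ratio:", rar(g) / math.sqrt(g * GDAG))
```

Both ratios come out near unity, so the single smooth curve recovers gobs = gbar at high acceleration and the slope-1/2 line gobs = sqrt(gbar·a0) at low acceleration.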

This didn’t have to happen. All sorts of other things might have happened. Indeed, as we discussed in Lelli et al (2017), there were some hints that the relation flattened, saturating at a constant gobs around 10-11 m/s/s. I was never convinced that this was real, as it only appears in the least certain data, and there were already some weak lensing data to lower accelerations.

Milgrom (2013) analyzed weak lensing data that were available then, obtaining this figure:

Velocity dispersion-luminosity relation obtained from weak lensing data by Milgrom (2013). Lines are the expectation of MOND for mass-to-light ratios ranging from 1 to 6 in the r’-band, as labeled. The sample is split into red (early type, elliptical) and blue (late type, spiral) galaxies. The early types have a systematically higher M/L, as expected for their older stellar populations.

The new data corroborate this result. Here is a similar figure from Brouwer et al:

The RAR from weak lensing for galaxies split by Sérsic index (left) and color (right).

Just looking at these figures, one can see the same type-dependent effect found by Milgrom. However, there is an important difference: Milgrom’s plot leaves the unknown mass-to-light ratio as a free parameter, while the new plot has an estimate of this built-in. So if the adopted M/L is correct, then the red and blue galaxies form parallel RARs that are almost but not quite exactly the same. That would not be consistent with MOND, which should place everything on the same relation. However, this difference is well within the uncertainty of the baryonic mass estimate – not just the M/L of the stars, but also the coronal gas content (i.e., the black vs. orange points in the first plot). MOND predicted this behavior well in advance of the observation, so one would have to bend over backwards, rub one’s belly, and simultaneously punch oneself in the face to portray this as anything short of a fantastic success of MOND.

The data! Look at the data!

I say that because I’m sure people will line up to punch themselves in the face in exactly this fashion*. One of the things that persuades me to suspect that there might be something to MOND is the lengths to which people will go to deny even its most obvious successes. At the same time, they are more than willing to cut any amount of slack necessary to save LCDM. An example is provided by Ludlow et al., who claim to explain the RAR ‘naturally’ from simulations – provided they spot themselves a magic factor of two in the stellar mass-to-light ratio. If it were natural, they wouldn’t need that arbitrary factor. By the same token, if you recognize that you might have been that far off about M*/L, you have to extend that same grace to MOND as you do to LCDM. That’s a basic tenet of objectivity, which used to be a value in science. It doesn’t look like a correction as large as a factor of two is necessary here given the uncertainty in the coronal gas. So, preemptively: Get a grip, people.

MOND predicts what we see. No other theory beat it to the punch. The best one can hope to do is to match its success after the fact by coming up with some other theory that looks just like MOND.

Test of LCDM

In order to test LCDM, we have to agree what LCDM predicts. That agreement is lacking. There is no clear prediction. This complicates the discussion, as the best one can hope to do is give a thorough discussion of all the possibilities that people have so far considered, which differ in important ways. That exercise is necessarily incomplete – people can always come up with new and different ideas for how to explain what they didn’t predict. I’ve been down the road of being thorough many times, which gets so complicated that no one reads it. So I will not attempt to be thorough here, and only explore enough examples to give a picture of where we’re currently at.

The tests are the same as above: should the relation (i) exist? (ii) have the observed slope? and (iii) normalization?

The first problem for LCDM is that the relation exists (i). There is no reason to expect this relation to exist. There was (and in some corners, continues to be) a lot of denial that the RAR even exists, because it shouldn’t. It does, and it looks just like what MOND predicts. LCDM is not MOND, and did not anticipate this behavior because there is no reason to do so.

If we persist past this point – and it is not obvious that we should – then we may say, OK, here’s this unexpected relation; how do we explain it? For starters, we do have a prediction for the density profiles of dark matter halos; these fall off as r^-3. That translates to some slope in the RAR plane, but not a unique relation, as the normalization can and should be different for each halo. But it’s not even the right slope. The observed slope corresponds to a logarithmic potential in which the density profile falls off as r^-2. That’s what is required to give a flat rotation curve in Newtonian dynamics, which is why the pseudo-isothermal halo was the standard model before simulations gave us the NFW halo with its r^-3 fall off. The lensing data are like a flat rotation curve that extends indefinitely far out; they are not like an NFW halo.
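The difference between the two profiles shows up directly in the circular speed V² = G·M(<r)/r: an r^-2 (isothermal) profile has M ∝ r, so V is constant, while the NFW enclosed mass grows only as ln(1+x) − x/(1+x) (x = r/rs), so V declines at large radii. A dimensionless sketch:

```python
import math

def v_nfw(x):
    """Circular speed of an NFW halo vs x = r/rs, in units where
    4*pi*G*rho_s*rs^2 = 1: V^2 = (ln(1+x) - x/(1+x)) / x."""
    return math.sqrt((math.log(1.0 + x) - x / (1.0 + x)) / x)

def v_iso(x):
    """Singular isothermal sphere (rho ~ r^-2): V is flat; set V = 1."""
    return 1.0

for x in (1.0, 10.0, 100.0):
    print(f"r/rs = {x:5.0f}: V_NFW = {v_nfw(x):.2f}, V_iso = {v_iso(x):.2f}")
```

Over two decades in radius the NFW speed falls by more than a factor of two while the isothermal curve stays flat – and it is the flat behavior that the lensing data continue to trace.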

That’s just stating the obvious. To do more requires building a model. Here is an example from Oman et al. of a model that follows the logic I just outlined, adding some necessary and reasonable assumptions about the baryons:

The “slight offset” from the observed RAR mentioned in the caption is the factor of two in stellar mass they spotted themselves in Ludlow et al. (2017).

The model is the orange line. It deviates from the black line that is the prediction of MOND. The data look like MOND, not like the orange line.

One can of course build other models. Brouwer et al discuss some. I will not explore these in detail, and only note that the models are not consistent, so there is no clear prediction from LCDM. To explore just one a little further, this figure appears at the very end of their paper, in appendix C:

The orange line in this case is some extrapolation of the model of Navarro et al. (2017).** This also does not work, though it doesn’t fail by as much as the model of Oman et al. I don’t understand how they make the extrapolation here, as a major prediction of Navarro et al. was that gobs would saturate at 10^-11 m/s/s; the orange line should flatten out near the middle of this plot. Indeed, they argued that we would never observe any lower accelerations, and that

“extending observations to radii well beyond the inner halo regions should lead to systematic deviations from the MDAR.”

– Navarro et al (2017)

This is a reasonable prediction for LCDM, but it isn’t what happened – the RAR continues as predicted by MOND. (The MDAR is equivalent to the RAR).

The astute reader may notice that many of these theorists are frequently coauthors, so you might expect they’d come up with a self-consistent model and stick to it. Unfortunately, consistency is not a hobgoblin that afflicts galaxy formation theory, and there are as many predictions as there are theorists (more for the prolific ones). They’re all over the map – which is the problem. LCDM makes no prediction to which everyone agrees. This makes it impossible to test the theory. If one model is wrong, that is just because that particular model is wrong, not because the theory is under threat. The theory is never under threat as there always seems to be another modeler who will claim success where others fail, whether they genuinely succeed or not. That they claim success is all that is required. Cognitive dissonance then takes over, people believe what they want to hear, and all anomalies are forgiven and forgotten. There never seems to be a proper prior that everyone would agree falsifies the theory if it fails. Galaxy formation in LCDM has become epicycles on steroids.

Whither now?

I have no idea. Continue to improve the data, of course. But the more important thing that needs to happen is a change in attitude. The attitude is that LCDM as a cosmology must be right so the mass discrepancy must be caused by non-baryonic dark matter so any observation like this must have a conventional explanation, no matter how absurd and convoluted. We’ve been stuck in this rut since before we even put the L in CDM. We refuse to consider alternatives so long as the standard model has not been falsified, but I don’t see how it can be falsified to the satisfaction of all – there’s always a caveat, a rub, some out that we’re willing to accept uncritically, no matter how silly. So in the rut we remain.

A priori predictions are an important part of the scientific method because they can’t be fudged. On the rare occasions when they come true, it is supposed to make us take note – even change our minds. These lensing results are just another of many previous corroborations of a priori predictions by MOND. What people do with that knowledge – build on it, choose to ignore it, or rant in denial – is up to them.


*Bertolt Brecht mocked this attitude amongst the Aristotelian philosophers in his play about Galileo, noting how they were eager to criticize the new dynamics if the heavier rock beat the lighter rock to the ground by so much as a centimeter in the Leaning Tower of Pisa experiment while turning a blind eye to their own prediction being off by a hundred meters.

**I worked hard to salvage dark matter, which included a lot of model building. I recognize the model of Navarro et al as a slight variation on a model I built in 2000 but did not publish because it was obviously wrong. It takes a lot of time to write a scientific paper, so a lot of null results never get reported. In 2000 when I did this, the natural assumption to make was that galaxies all had about the same disk fraction (the ratio of stars to dark matter, e.g., assumption (i) of Mo et al 1998). This predicts far too much scatter in the RAR, which is why I abandoned the model. Since then, this obvious and natural assumption has been replaced by abundance matching, in which the stellar mass fraction is allowed to vary to account for the difference between the predicted halo mass function and the observed galaxy luminosity function. In effect, we replaced a universal constant with a rolling fudge factor***. This has the effect of compressing the range of halo masses for a given range of stellar masses. This in turn reduces the “predicted” scatter in the RAR, just by taking away some of the variance that was naturally there. One could do better still with even more compression, as the data are crudely consistent with all galaxies living in the same dark matter halo. This is of course a consequence of MOND, in which the conventionally inferred dark matter halo is just the “extra” force specified by the interpolation function.

***This is an example of what I’ll call prediction creep for want of a better term. Originally, we thought that galaxies corresponded to balls of gas that had had time to cool and condense. As data accumulated, we realized that the baryon fractions of galaxies were not equal to the cosmic value fb; they were rather less. That meant that only a fraction of the baryons available in a dark matter halo had actually cooled to form the visible disk. So we introduced a parameter md = Mdisk/Mtot (as Mo et al. called it) where the disk is the visible stars and gas and the total includes that and all the dark matter out to the notional edge of the dark matter halo. We could have any md < fb, but they were in the same ballpark for massive galaxies, so it seemed reasonable to think that the disk fraction was a respectable fraction of the baryons – and the same for all galaxies, perhaps with some scatter. This also does not work; low mass galaxies have much lower md than high mass galaxies. Indeed, md becomes ridiculously small for the smallest galaxies, less than 1% of the available fb (a problem I’ve been worried about since the previous century). At each step, there has been a creep in what we “predict.” All the baryons should condense. Well, most of them. OK, fewer in low mass galaxies. Why? Feedback! How does that work? Don’t ask! You don’t want to know. So for a while the baryon fraction of a galaxy was just a random number stochastically generated by chance and feedback. That is reasonable (feedback is chaotic) but it doesn’t work; the variation of the disk fraction is a clear function of mass that has to have little scatter (or it pumps up the scatter in the Tully-Fisher relation). So we gradually backed our way into a paradigm where the disk fraction is a function md(M*). This has been around long enough that we have gotten used to the idea. 
Instead of seeing it for what it is – a rolling fudge factor – we call it natural as if it had been there from the start, as if we expected it all along. This is prediction creep. We did not predict anything of the sort. This is just an expectation built through familiarity with requirements imposed by the data, not genuine predictions made by the theory. It has become common to assert that some unnatural results are natural; this stems in part from assuming part of the answer: any model built on abundance matching is unnatural to start, because abundance matching is unnatural. Necessary, but not remotely what we expected before all the prediction creep. It’s creepy how flexible our predictions can be.

Despondency

Despondency

I have become despondent for the progress of science.

Despite enormous progress both observational and computational, we have made little progress in solving the missing mass problem. The issue is not one of technical progress. It is psychological.

Words matter. We are hung up on missing mass as literal dark matter. As Bekenstein pointed out, a less misleading name would have been the acceleration discrepancy, because the problem only appears at low accelerations. But that sounds awkward. We humans like our simple catchphrases, and often cling to them no matter what. We called it dark matter, so it must be dark matter!

Vera Rubin succinctly stated the appropriately conservative attitude of most scientists in 1982 during the discussion at IAU 100:

To highlight the end of her quote:

I believe most of us would rather alter Newtonian gravitational theory only as a last resort.

Rubin, V.C. 1983, in the proceedings of IAU Symposium 100: Internal Kinematics and Dynamics of Galaxies, p. 10.

Exactly.

In 1982, this was exactly the right attitude. It had been clearly established that there was a discrepancy between what you see and what you get. But that was about it. So, we could add a little mass that’s hard to see, or we could change a fundamental law of nature. Easy call.

By this time, the evidence for a discrepancy was clear, but the hypothesized solutions were still in development. This was before the publication of the cold dark matter hypothesis, suggested by Peebles and separately by Steigman & Turner. This was before the publication of Milgrom’s first papers on MOND. (Note that these ideas took years to develop, so much of this work was simultaneous and not done in a vacuum.) All that was clear was that something extra was needed. It wasn’t even clear how much – a factor of two in mass sufficed for many of the early observations. At that time, it was easy to imagine that amount to be lurking in low mass stars. No need for new physics, either gravitational or particle.

The situation quickly snowballed. From a factor of two, we soon needed a factor of ten. Whatever was doing the gravitating, it exceeded the mass density allowed in normal matter by big bang nucleosynthesis. By the time I was a grad student in the late ’80s, it was obvious that there had to be some kind of dark mass, and it had to be non-baryonic. That meant new particle physics (e.g., a WIMP). The cold dark matter paradigm took root.

Like a fifty year mortgage, we are basically still stuck with this decision we made in the ’80s. It made sense then, given what was then known. Does it still? At what point have we reached the last resort? More importantly, apparently, how do we persuade ourselves that we have reached this point?

Peebles provides a nice recent summary of all the ways in which LCDM is a good approximation to cosmologically relevant observations. There are a lot, and I don’t disagree with him. The basic argument is that it is very unlikely that these things all agree unless LCDM is basically correct.

Trouble is, the exact same argument applies for MOND. I’m not going to justify this here – it should be obvious. If it isn’t, you haven’t been paying attention. It is unlikely to the point of absurdity that a wholly false theory should succeed in making so many predictions of such diversity and precision as MOND has.

These are both examples of what philosophers of science call a No Miracles Argument. The problem is that it cuts both ways. I will refrain from editorializing here on which would be the bigger miracle, and simply note that the obvious thing to do is try to combine the successes of both, especially given that they don’t overlap much. And yet, the Venn diagram of scientists working to satisfy both ends is vanishingly small. Not zero, but the vast majority of the community remains stuck in the ’80s: it has to be cold dark matter. I remember having this attitude, and how hard it was to realize that it might be wrong. The intellectual blinders imposed by this attitude are more opaque than a brick wall. This psychological hangup is the primary barrier to real scientific progress (as opposed to incremental progress in the sense used by Kuhn).

Unfortunately, both CDM and MOND rely on a tooth fairy. In CDM, it is the conceit that non-baryonic dark matter actually exists. This requires new physics beyond the Standard Model of particle physics. All the successes of LCDM follow if and only if dark matter actually exists. This we do not know (contrary to many assertions to this effect); all we really know is that there are discrepancies. Whether the discrepancies are due to literal dark matter or a change in the force law is maddeningly ambiguous. Of course, the conceit in MOND is not just that there is a modified force law, but that there must be a physical mechanism by which it occurs. The first part is the well-established discrepancy. The last part remains wanting.

When we think we know, we cease to learn.

Dr. Radhakrishnan

The best scientists are always in doubt. As well as enumerating its successes, Peebles also discusses some of the ways in which LCDM might fall short. Should massive galaxies appear as they do? (Not really.) Should the voids really be so empty? (MOND predicted that one.) I seldom hear these concerns from other cosmologists. That’s because they’re not in doubt. The attitude is that dark matter has to exist, and any contrary evidence is simply a square peg that can be made to fit the round hole if we pound hard enough.

And so, we’re stuck still pounding the ideas of the ’80s into the heads of innocent students, creating a closed ecosystem of stagnant ideas self-perpetuated by the echo chamber effect. I see no good way out of this; indeed, the quality of debate is palpably lower now than it was in the previous century.

So I have become despondent for the progress of science.

Bias all the way down


It often happens that data are ambiguous and open to multiple interpretations. The evidence for dark matter is an obvious example. I frequently hear permutations on the statement

We know dark matter exists; we just need to find it.

This is said in all earnestness by serious scientists who clearly believe what they say. They mean it. Unfortunately, meaning something in all seriousness, indeed, believing it with the intensity of religious fervor, does not guarantee that it is so.

The way the statement above is phrased is a dangerous half-truth. What the data show beyond any dispute is that there is a discrepancy between what we observe in extragalactic systems (including cosmology) and the predictions of Newton & Einstein as applied to the visible mass. If we assume that the equations Newton & Einstein taught us are correct, then we inevitably infer the need for invisible mass. That seems like a very reasonable assumption, but it is just that: an assumption. Moreover, it is an assumption that is only tested on the relevant scales by the data that show a discrepancy. One could instead infer that theory fails this test – it does not work to predict observed motions when applied to the observed mass. From this perspective, it could just as legitimately be said that

A more general theory of dynamics must exist; we just need to figure out what it is.

That puts an entirely different complexion on exactly the same problem. The data are the same; they are not to blame. The difference is how we interpret them.

Neither of these statements is correct: they are both half-truths, two sides of the same coin. As such, one risks being wildly misled. If one only hears one, the other gets discounted. That’s pretty much where the field is now, and it has been stuck there for a long time.

That’s certainly where I got my start. I was a firm believer in the standard dark matter interpretation. The evidence was obvious and overwhelming. Not only did there need to be invisible mass, it had to be some new kind of particle, like a WIMP. Almost certainly a WIMP. Any other interpretation (like MACHOs) was obviously stupid, as it violated some strong constraint, like Big Bang Nucleosynthesis (BBN). It had to be non-baryonic cold dark matter. HAD. TO. BE. I was sure of this. We were all sure of this.

What gets us in trouble is not what we don’t know. It’s what we know for sure that just ain’t so.

Josh Billings

I realized in the 1990s that the above reasoning was not airtight. Indeed, it has a gaping hole: we were not even considering modifications of dynamical laws (gravity and inertia). That this was a possibility, even a remote one, came as a profound and deep shock to me. It took me ages of struggle to admit it might be possible, during which I worked hard to save the standard picture. I could not. So it pains me to watch the entire community repeat the same struggle, repeat the same failures, and pretend like it is a success. That last step follows from the zeal of religious conviction: the outcome is predetermined. The answer still HAS TO BE dark matter.

So I asked myself – what if we’re wrong? How could we tell? Once one has accepted that the universe is filled with invisible mass that can’t be detected by any means available to us, how can we disabuse ourselves of this notion should it happen to be wrong?

One approach that occurred to me was a test in the power spectrum of the cosmic microwave background. Before any of the peaks had been measured, the only clear difference one expected was a bigger second peak with dark matter, and a smaller one without it for the same absolute density of baryons as set by BBN. I’ve written about the lead up to this prediction before, and won’t repeat it here. Rather, I’ll discuss some of the immediate fall out – some of which I’ve only recently pieced together myself.

The first experiment to provide a test of the prediction for the second peak was Boomerang. The second was Maxima-1. I of course checked the new data when they became available. Maxima-1 showed what I expected. So much so that it barely warranted comment. One is only supposed to write a scientific paper when one has something genuinely new to say. This didn’t rise to that level. It was more like checking a tick box. Besides, lots more data were coming; I couldn’t write a new paper every time someone tacked on an extra data point.

There was one difference. The Maxima-1 data had a somewhat higher normalization. The shape of the power spectrum was consistent with that of Boomerang, but the overall amplitude was a bit higher. The latter mattered not at all to my prediction, which was for the relative amplitude of the first to second peaks.

Systematic errors, especially in the amplitude, were likely in early experiments. That’s like rule one of observing the sky. After examining both data sets and the model expectations, I decided the Maxima-1 amplitude was more likely to be correct, so I asked what offset was necessary to reconcile the two. About 14% in temperature. This was, to me, no big deal – it was not relevant to my prediction, and it is exactly the sort of thing one expects to happen in the early days of a new kind of observation. It did seem worth remarking on, if not writing a full blown paper about, so I put it in a conference presentation (McGaugh 2000), which was published in a journal (IJMPA, 16, 1031) as part of the conference proceedings. This correctly anticipated the subsequent recalibration of Boomerang.

The figure from McGaugh (2000) is below. Basically, I said “gee, looks like the Boomerang calibration needs to be adjusted upwards a bit.” This has been done in the figure. The amplitude of the second peak remained consistent with the prediction for a universe devoid of dark matter. In fact, it got better (see Table 4 of McGaugh 2004).

Plot from McGaugh (2000): The predictions of LCDM (left) and no-CDM (right) compared to Maxima-1 data (open points) and Boomerang data (filled points, corrected in normalization). The LCDM model shown is the most favorable prediction that could be made prior to observation of the first two peaks; other then-viable choices of cosmic parameters predicted a higher second peak. The no-CDM got the relative amplitude right a priori, and remains consistent with subsequent data from WMAP and Planck.

This much was trivial. There was nothing new to see, at least as far as the test I had proposed was concerned. New data were pouring in, but there wasn’t really anything worth commenting on until WMAP data appeared several years later, which persisted in corroborating the peak ratio prediction. By this time, the cosmological community had decided that despite persistent corroborations, my prediction was wrong.

That’s right. I got it right, but then right turned into wrong according to the scuttlebutt of cosmic gossip. This was a falsehood, but it took root, and seems to have become one of the things that cosmologists know for sure that just ain’t so.

How did this come to pass? I don’t know. People never asked me. My first inkling came in 2003, in a chance conversation with Marv Leventhal (then chair of Maryland Astronomy), who opined “too bad the data changed on you.” This shocked me. Nothing relevant in the data had changed, yet here was someone asserting that it had as if it were common knowledge. Which I suppose it was by then, just not to me.

Over the years, I’ve had the occasional weird conversation on the subject. In retrospect, I think the weirdness stemmed from a divergence of assumed knowledge. They knew I was right then wrong. I knew the second peak prediction had come true and remained true in all subsequent data, but the third peak was a different matter. So there were many opportunities for confusion. In retrospect, I think many of these people were laboring under the mistaken impression that I had been wrong about the second peak.

I now suspect this started with the discrepancy between the calibration of Boomerang and Maxima-1. People seemed to be aware that my prediction was consistent with the Boomerang data. Then they seem to have confused the prediction with those data. So when the data changed – i.e., Maxima-1 was somewhat different in amplitude, then it must follow that the prediction now failed.

This is wrong on many levels. The prediction is independent of the data that test it. It is incredibly sloppy thinking to confuse the two. More importantly, the prediction, as phrased, was not sensitive to this aspect of the data. If one had bothered to measure the ratio in the Maxima-1 data, one would have found a number consistent with the no-CDM prediction. This should be obvious from casual inspection of the figure above. Apparently no one bothered to check. They didn’t even bother to understand the prediction.

Understanding a prediction before dismissing it is not a hard ask. Unless, of course, you already know the answer. Then laziness is not only justified, but the preferred course of action. This sloppy thinking compounds a number of well known cognitive biases (anchoring bias, belief bias, confirmation bias, to name a few).

I mistakenly assumed that other people were seeing the same thing in the data that I saw. It was pretty obvious, after all. (Again, see the figure above.) It did not occur to me back then that other scientists would fail to see the obvious. I fully expected them to complain and try and wriggle out of it, but I could not imagine such complete reality denial.

The reality denial was twofold: clearly, people were looking for any excuse to ignore anything associated with MOND, however indirectly. But they also had no clear prior for LCDM, which I did establish as a point of comparison. A theory is only as good as its prior, and all LCDM models made before these CMB data showed the same thing: a bigger second peak than was observed. This can be fudged: there are ample free parameters, so it can be made to fit; one just had to violate BBN (as it was then known) by three or four sigma.

In retrospect, I think the very first time I had this alternate-reality conversation was at a conference at the University of Chicago in 2001. Andrey Kravtsov had just joined the faculty there, and organized a conference to get things going. He had done some early work on the cusp-core problem, which was still very much a debated thing at the time. So he asked me to come address that topic. I remember being on the plane – a short ride from Cleveland – when I looked at the program. Nearly did a spit take when I saw that I was to give the first talk. There wasn’t a lot of time to organize my transparencies (we still used overhead projectors in those days) but I’d given the talk many times before, so it was enough.

I only talked about the rotation curves of low surface brightness galaxies in the context of the cusp-core problem. That was the mandate. I didn’t talk about MOND or the CMB. There’s only so much you can address in a half hour talk. [This is a recurring problem. No matter what I say, there always seems to be someone who asks “why didn’t you address X?” where X is usually that person’s pet topic. Usually I could do so, but not in the time allotted.]

About halfway through this talk on the cusp-core problem, I guess it became clear that I wasn’t going to talk about things that I hadn’t been asked to talk about, and I was interrupted by Mike Turner, who did want to talk about the CMB. Or rather, extract a confession from me that I had been wrong about it. I forget how he phrased it exactly, but it was the academic equivalent of “Have you stopped beating your wife lately?” Say yes, and you admit to having done so in the past. Say no, and you’re still doing it. What I do clearly remember was him prefacing it with “As a test of your intellectual honesty” as he interrupted to ask a dishonest and intentionally misleading question that was completely off-topic.

Of course, the pretext for his attack question was the Maxima-1 result. He phrased it in a way that I had to agree that those disproved my prediction, or be branded a liar. Now, at the time, there were rumors swirling that the experiment – some of the people who worked on it were there – had detected the third peak, so I thought that was what he was alluding to. Those data had not yet been published and I certainly had not seen them, so I could hardly answer that question. Instead, I answered the “intellectual honesty” affront by pointing to a case where I had said I was wrong. At one point, I thought low surface brightness galaxies might explain the faint blue galaxy problem. On closer examination, it became clear that they could not provide a complete explanation, so I said so. Intellectual honesty is really important to me, and should be to all scientists. I have no problem admitting when I’m wrong. But I do have a problem with demands to admit that I’m wrong when I’m not.

To me, it was obvious that the Maxima-1 data were consistent with the second peak. The plot above was already published by then. So it never occurred to me that he thought the Maxima-1 data were in conflict with what I had predicted – it was already known that it was not. Only to him, it was already known that it was. Or so I gather – I have no way to know what others were thinking. But it appears that this was the juncture in which the field suffered a psychotic break. We are not operating on the same set of basic facts. There has been a divergence in personal realities ever since.

Arthur Kosowsky gave the summary talk at the end of the conference. He told me that he wanted to address the elephant in the room: MOND. I did not think the assembled crowd of luminary cosmologists were mature enough for that, so advised against going there. He did, and was incredibly careful in what he said: empirical, factual, posing questions rather than making assertions. Why does MOND work as well as it does?

The room dissolved into chaotic shouting. Every participant was vying to say something wrong more loudly than the person next to him. (Yes, everyone shouting was male.) Joel Primack managed to say something loudly enough for it to stick with me, asserting that gravitational lensing contradicted MOND in a way that I had already shown it did not. It was just one of dozens of superficial falsehoods that people take for granted to be true if they align with one’s confirmation bias.

The uproar settled down, the conference was over, and we started to disperse. I wanted to offer Arthur my condolences, having been in that position many times. Anatoly Klypin was still giving it to him, keeping up a steady stream of invective as everyone else moved on. I couldn’t get a word in edgewise, and had a plane home to catch. So when I briefly caught Arthur’s eye, I just said “told you” and moved on. Anatoly paused briefly, apparently fathoming that his behavior, like that of the assembled crowd, was entirely predictable. Then the moment of awkward self-awareness passed, and he resumed haranguing Arthur.

Divergence


Reality check

Before we can agree on the interpretation of a set of facts, we have to agree on what those facts are. Even if we agree on the facts, we can differ about their interpretation. It is OK to disagree, and anyone who practices astrophysics is going to be wrong from time to time. It is the inevitable risk we take in trying to understand a universe that is vast beyond human comprehension. Heck, some people have made successful careers out of being wrong. This is OK, so long as we recognize and correct our mistakes. That’s a painful process, and there is an urge in human nature to deny such things, to pretend they never happened, or to assert that what was wrong was right all along.

This happens a lot, and it leads to a lot of weirdness. Beyond the many people in the field whom I already know personally, I tend to meet two kinds of scientists. There are those (usually other astronomers and astrophysicists) who might be familiar with my work on low surface brightness galaxies or galaxy evolution or stellar populations or the gas content of galaxies or the oxygen abundances of extragalactic HII regions or the Tully-Fisher relation or the cusp-core problem or faint blue galaxies or big bang nucleosynthesis or high redshift structure formation or joint constraints on cosmological parameters. These people behave like normal human beings. Then there are those (usually particle physicists) who have only heard of me in the context of MOND. These people often do not behave like normal human beings. They conflate me as a person with a theory that is Milgrom’s. They seem to believe that both are evil and must be destroyed. My presence, even the mere mention of my name, easily destabilizes their surprisingly fragile grasp on sanity.

One of the things that scientists-gone-crazy do is project their insecurities about the dark matter paradigm onto me. People who barely know me frequently attribute to me motivations that I neither have nor recognize. They presume that I have some anti-cosmology, anti-DM, pro-MOND agenda, and are remarkably comfortable asserting to me what it is that I believe. What they never explain, or apparently bother to consider, is why I would be so obtuse. What is my motivation? I certainly don’t enjoy having the same argument over and over again with their ilk, which is the only thing it seems to get me.

The only agenda I have is a pro-science agenda. I want to know how the universe works.

This agenda is not theory-specific. In addition to lots of other astrophysics, I have worked on both dark matter and MOND. I will continue to work on both until we have a better understanding of how the universe works. Right now we’re very far from attaining that goal. Anyone who tells you otherwise is fooling themselves – usually by dint of ignoring inconvenient aspects of the evidence. Everyone is susceptible to cognitive dissonance. Scientists are no exception – I struggle with it all the time. What disturbs me is the number of scientists who apparently do not. The field is being overrun with posers who lack the self-awareness to question their own assumptions and biases.

So, I feel like I’m repeating myself here, but let me state my bias. Oh wait. I already did. That’s why it felt like repetition. It is.

The following bit of this post is adapted from an old web page I wrote well over a decade ago. I’ve lost track of exactly when – the file has been through many changes in computer systems, and unix only records the last edit date. For the linked page, that’s 2016, when I added a few comments. The original is much older, and was written while I was at the University of Maryland. Judging from the html style, it was probably early to mid-’00s. Of course, the sentiment is much older, as it shouldn’t need to be said at all.

I will make a few updates as seem appropriate, so check the link if you want to see the changes. I will add new material at the end.


Long-standing remarks on intellectual honesty

The debate about MOND often degenerates into something that falls well short of the sober, objective discussion that is supposed to characterize scientific debates. One can tell when voices are raised and baseless ad hominem accusations are made. I have, with disturbing frequency, found myself accused of partisanship and intellectual dishonesty, usually by people who are as fair and balanced as Fox News.

Let me state with absolute clarity that intellectual honesty is a bedrock principle of mine. My attitude is summed up well by the quote

When a man lies, he murders some part of the world.

Paul Gerhardt

I first heard this spoken by the character Merlin in the movie Excalibur (1981 version). Others may have heard it in a song by Metallica. As best I can tell, it is originally attributable to the 17th century cleric Paul Gerhardt.

This is a great quote for science, as the intent is clear. We don’t get to pick and choose our facts. Outright lying about them is antithetical to science.

I would extend this to ignoring facts. One should not only be honest, but also as complete as possible. It does not suffice to be truthful while leaving unpleasant or unpopular facts unsaid. This is lying by omission.

I “grew up” believing in dark matter. Specifically, Cold Dark Matter, presumably a WIMP. I didn’t think MOND was wrong so much as I didn’t think about it at all. Barely heard of it; not worth the bother. So I was shocked – and angered – when its predictions came true in my data for low surface brightness galaxies. So I understand when my colleagues have the same reaction.

Nevertheless, Milgrom got the prediction right. I had a prediction, it was wrong. There were other conventional predictions, they were also wrong. Indeed, dark matter based theories generically have a very hard time explaining these data. In a Bayesian sense, given the prior that we live in a ΛCDM universe, the probability that MONDian phenomenology would be observed is practically zero. Yet it is. (This is very well established, and has been for some time.)

So – confronted with an unpopular theory that nevertheless had some important predictions come true, I reported that fact. I could have ignored it, pretended it didn’t happen, covered my eyes and shouted LA LA LA NOT LISTENING. With the benefit of hindsight, that certainly would have been the savvy career move. But it would also be ignoring a fact, and tantamount to a lie.

In short, though it was painful and protracted, I changed my mind. Isn’t that what the scientific method says we’re supposed to do when confronted with experimental evidence?

That was my experience. When confronted with evidence that contradicted my preexisting world view, I was deeply troubled. I tried to reject it. I did an enormous amount of fact-checking. The people who presume I must be wrong have not had this experience, and haven’t bothered to do any fact-checking. Why bother when you already are sure of the answer?


Willful Ignorance

I understand being skeptical about MOND. I understand being more comfortable with dark matter. That’s where I started from myself, so as I said above, I can empathize with people who come to the problem this way. This is a perfectly reasonable place to start.

For me, that was over a quarter century ago. I can understand there being some time lag. That is not what is going on. There has been ample time to process and assimilate this information. Instead, most physicists have chosen to remain ignorant. Worse, many persist in spreading what can only be described as misinformation. I don’t think they are liars; rather, it seems that they believe their own bullshit.

To give an example of disinformation, I still hear said things like “MOND fits rotation curves but nothing else.” This is not true. The first thing I did was check into exactly that. Years of fact-checking went into McGaugh & de Blok (1998), and I’ve done plenty more since. It came as a great surprise to me that MOND explained the vast majority of the data as well or better than dark matter. Not everything, to be sure, but lots more than “just” rotation curves. Yet this old falsehood still gets repeated as if it were not a misconception that was put to rest in the previous century. We’re stuck in the dark ages by choice.

It is not a defensible choice. There is no excuse to remain ignorant of MOND at this juncture in the progress of astrophysics. It is incredibly biased to point to its failings without contending with its many predictive successes. It is tragi-comically absurd to assume that dark matter provides a better explanation when it cannot make the same predictions in advance. MOND may not be correct in every particular, and makes no pretense to be a complete theory of everything. But it is demonstrably less wrong than dark matter when it comes to predicting the dynamics of systems in the low acceleration regime. Pretending like this means nothing is tantamount to ignoring essential facts.

Even a lie of omission murders a part of the world.

25 years a heretic


People seem to like to do retrospectives at year’s end. I take a longer view, but the end of 2020 seems like a fitting time to do that. Below is the text of a paper I wrote in 1995 with collaborators at the Kapteyn Institute of the University of Groningen. The last edit date is from December of that year, so this text (in plain TeX, not LaTeX!) is now a quarter century old. I am just going to cut & paste it as-was; I even managed to recover the original figures and translate them into something web-friendly (postscript to jpeg). This is exactly how it was.

This was my first attempt to express in the scientific literature my concerns for the viability of the dark matter paradigm, and my puzzlement that the only theory to get any genuine predictions right was MOND. It was the hardest admission in my career that this could be even a remote possibility. Nevertheless, intellectual honesty demanded that I report it. To fail to do so would be an act of reality denial antithetical to the foundational principles of science.

It was never published. There were three referees. Initially, one was positive, one was negative, and one insisted that rotation curves weren’t flat. There was one iteration; this is the resubmitted version in which the concerns of the second referee were addressed to his apparent satisfaction by making the third figure a lot more complicated. The third referee persisted that none of this was valid because rotation curves weren’t flat. Seems like he had a problem with something beyond the scope of this paper, but the net result was rejection.

One valid concern that ran through the refereeing process from all sides was “what about everything else?” This is a good question that couldn’t fit into a short letter like this. Thanks to the support of Vera Rubin and a Carnegie Fellowship, I spent the next couple of years looking into everything else. The results were published in 1998 in a series of three long papers: one on dark matter, one on MOND, and one making detailed fits.

This had started from a very different place intellectually with my efforts to write a paper on galaxy formation that would have been similar to contemporaneous papers like Dalcanton, Spergel, & Summers and Mo, Mao, & White. This would have followed from my thesis and from work with Houjun Mo, who was an office mate when we were postdocs at the IoA in Cambridge. (The ideas discussed in Mo, McGaugh, & Bothun have been reborn recently in the galaxy formation literature under the moniker of “assembly bias.”) But I had realized by then that my ideas – and those in the papers cited – were wrong. So I didn’t write a paper that I knew to be wrong. I wrote this one instead.

Nothing substantive has changed since. Reading it afresh, I’m amazed how many of the arguments of the past quarter century were anticipated here. As a scientific community, we are stuck in a rut, and seem to prefer spinning the wheels to dig ourselves in deeper rather than consider the plain if difficult path out.


Testing hypotheses of dark matter and alternative gravity with low surface density galaxies

The missing mass problem remains one of the most vexing in astrophysics. Observations clearly indicate either the presence of a tremendous amount of as yet unidentified dark matter[1,2], or the need to modify the law of gravity[3-7]. These hypotheses make vastly different predictions as a function of density. Observations of the rotation curves of galaxies of much lower surface brightness than previously studied therefore provide a powerful test for discriminating between them. The dark matter hypothesis requires a surprisingly strong relation between the surface brightness and mass to light ratio[8], placing stringent constraints on theories of galaxy formation and evolution. Alternatively, the observed behaviour is predicted[4] by one of the hypothesised alterations of gravity known as modified Newtonian dynamics[3,5] (MOND).

Spiral galaxies are observed to have asymptotically flat [i.e., V(R) ~ constant for large R] rotation curves that extend well beyond their optical edges. This trend continues for as far (many, sometimes > 10 galaxy scale lengths) as can be probed by gaseous tracers1,2 or by the orbits of satellite galaxies9. Outside a galaxy’s optical radius, the gravitational acceleration is aN = GM/R^2 = V^2/R, so one expects V(R) ~ R^-1/2. This Keplerian behaviour is not observed in galaxies.

One approach to this problem is to increase M in the outer parts of galaxies in order to provide the extra gravitational acceleration necessary to keep the rotation curves flat. Indeed, this is the only option within the framework of Newtonian gravity since both V and R are directly measured. The additional mass must be invisible, dominant, and extend well beyond the optical edge of the galaxies.

Postulating the existence of this large amount of dark matter which reveals itself only by its gravitational effects is a radical hypothesis. Yet the kinematic data force it upon us, so much so that the existence of dark matter is generally accepted. Enormous effort has gone into attempting to theoretically predict its nature and experimentally verify its existence, but to date there exists no convincing detection of any hypothesised dark matter candidate, and many plausible candidates have been ruled out10.

Another possible solution is to alter the fundamental equation aN = GM/R2. Our faith in this simple equation is very well founded on extensive experimental tests of Newtonian gravity. Since it is so fundamental, altering it is an even more radical hypothesis than invoking the existence of large amounts of dark matter of completely unknown constituent components. However, a radical solution is required either way, so both possibilities must be considered and tested.

A phenomenological theory specifically introduced to address the problem of the flat rotation curves is MOND3. It has no other motivation and so far there is no firm physical basis for the theory. It provides no satisfactory cosmology, having yet to be reconciled with General Relativity. However, with the introduction of one new fundamental constant (an acceleration a0), it is empirically quite successful in fitting galaxy rotation curves11-14. It hypothesises that for accelerations a < a0 = 1.2 x 10^-10 m s^-2, the effective acceleration is given by aeff = (aN a0)^1/2. This simple prescription works well with essentially only one free parameter per galaxy, the stellar mass to light ratio, which is subject to independent constraint by stellar evolution theory. More importantly, MOND makes predictions which are distinct and testable. One specific prediction4 is that the asymptotic (flat) value of the rotation velocity, Va, is Va = (G M a0)^1/4. Note that Va does not depend on R, but only on M in the regime of small accelerations (a < a0).
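The MOND prescription above reduces to a one-line calculation. Here is a minimal Python sketch contrasting it with the Newtonian circular speed; the constants are standard SI values and the example mass is an arbitrary round number chosen only for illustration:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
A0 = 1.2e-10         # MOND acceleration scale a0, m s^-2
M_SUN = 1.989e30     # solar mass, kg

def v_newton(M, R):
    """Newtonian circular speed: V = sqrt(G M / R), falling as R^-1/2."""
    return math.sqrt(G * M / R)

def v_mond_asymptotic(M):
    """MOND asymptotic (flat) speed: Va = (G M a0)^(1/4), independent of R."""
    return (G * M * A0) ** 0.25

# An arbitrary galaxy-scale mass of 6e10 solar masses:
M = 6e10 * M_SUN
print(v_mond_asymptotic(M) / 1e3)  # ~176 km/s, with no dependence on radius
```

The quarter-power dependence on mass is what makes the prediction so rigid: a factor of 16 in mass changes Va by only a factor of 2.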

In contrast, Newtonian gravity depends on both M and R. Replacing R with a mass surface density variable S = M(R)/R^2, the Newtonian prediction becomes M S ~ Va^4, which contrasts with the MOND prediction M ~ Va^4. These relations are the theoretical basis in each case for the observed luminosity-linewidth relation L ~ Va^4 (better known as the Tully-Fisher15 relation. Note that the observed value of the exponent is bandpass dependent, but does obtain the theoretical value of 4 in the near infrared16, which is considered the best indicator of the stellar mass. The systematic variation with bandpass is a very small effect compared to the difference between the two gravitational theories, and must be attributed to dust or stars under either theory.) To transform from theory to observation one requires the mass to light ratio Y: Y = M/L = S/s, where s is the surface brightness. Note that in the purely Newtonian case, M and L are very different functions of R, so Y is itself a strong function of R. We define Y to be the mass to light ratio within the optical radius R*, as this is the only radius which can be measured by observation. The global mass to light ratio would be very different (since M ~ R for R > R*, the total masses of dark haloes are not measurable), but the particular choice of definition does not affect the relevant functional dependences, which are all that matter here. The predictions become Y^2 s L ~ Va^4 for Newtonian gravity8,16 and Y L ~ Va^4 for MOND4.
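The size of the shift the Newtonian null hypothesis predicts follows directly from these scalings. A quick sketch (the only input is the standard magnitude-to-flux conversion; the 3 mag step is an illustrative choice):

```python
# At fixed luminosity L and constant mass to light ratio Y:
#   Newton: Y^2 * s * L ~ Va^4  =>  Va proportional to s^(1/4)
#   MOND:   Y * L ~ Va^4        =>  Va independent of s
def newtonian_velocity_ratio(delta_mu):
    """Predicted ratio of asymptotic velocities for a galaxy delta_mu
    mag/arcsec^2 fainter in central surface brightness, since the
    surface brightness s scales as 10^(-0.4 * delta_mu)."""
    s_ratio = 10 ** (-0.4 * delta_mu)
    return s_ratio ** 0.25

# A galaxy 3 mag/arcsec^2 fainter (a factor of ~16 in s) should rotate
# at about half the speed under the Newtonian null hypothesis:
print(newtonian_velocity_ratio(3.0))  # ~0.50
```

That factor-of-two shift at fixed luminosity is what the dotted lines in Fig. 1 represent, and it is easily large enough to detect.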

The only sensible17 null hypothesis that can be constructed is that the mass to light ratio be roughly constant from galaxy to galaxy. Clearly distinct predictions thus emerge if galaxies of different surface brightnesses s are examined. In the Newtonian case there should be a family of parallel Tully-Fisher relations for each surface brightness. In the case of MOND, all galaxies should follow the same Tully-Fisher relation irrespective of surface brightness.

Recently it has been shown that extreme objects such as low surface brightness galaxies8,18 (those with central surface brightnesses fainter than s0 = 23 B mag arcsec^-2, corresponding to 40 L_sun pc^-2) obey the same Tully-Fisher relation as do the high surface brightness galaxies (typically with s0 = 21.65 B mag arcsec^-2 or 140 L_sun pc^-2) which originally15 defined it. Fig. 1 shows the luminosity-linewidth plane for galaxies ranging over a factor of 40 in surface brightness. Regardless of surface brightness, galaxies fall on the same Tully-Fisher relation.

The luminosity-linewidth (Tully-Fisher) relation for spiral galaxies over a large range in surface brightness. The B-band relation is shown; the same result is obtained in all bands8,18. Absolute magnitudes are measured from apparent magnitudes assuming H0 = 75 km/s/Mpc. Rotation velocities Va are directly proportional to observed 21 cm linewidths (measured as the full width at 20% of maximum) W20, corrected for inclination by a factor 1/sin(i). Open symbols are an independent sample which defines42 the Tully-Fisher relation (solid line). The dotted lines show the expected shift of the Tully-Fisher relation for each step in surface brightness away from the canonical value s0 = 21.65 if the mass to light ratio remains constant. Low surface brightness galaxies are plotted as solid symbols, binned by surface brightness: red triangles: 22 < s0 < 23; green squares: 23 < s0 < 24; blue circles: s0 > 24. One galaxy with two independent measurements is connected by a line; this gives an indication of the typical uncertainty, which is sufficient to explain nearly all the scatter. Contrary to the clear expectation of a readily detectable shift as indicated by the dotted lines, galaxies fall on the same Tully-Fisher relation regardless of surface brightness, as predicted by MOND.

MOND predicts this behaviour in spite of the very different surface densities of low surface brightness galaxies. In order to understand this observational fact in the framework of standard Newtonian gravity requires a subtle relation8 between surface brightness and the mass to light ratio to keep the product s Y^2 constant. If we retain normal gravity and the dark matter hypothesis, this result is unavoidable, and the null hypothesis of similar mass to light ratios (which, together with an assumed constancy of surface brightness, is usually invoked to explain the Tully-Fisher relation) is strongly rejected. Instead, the current epoch surface brightness is tightly correlated with the properties of the dark matter halo, placing strict constraints on models of galaxy formation and evolution.

The mass to light ratios computed for both cases are shown as a function of surface brightness in Fig. 2. Fig. 2 is based solely on galaxies with full rotation curves19,20 and surface photometry, so Va and R* are directly measured. The correlation in the Newtonian case is very clear (Fig. 2a), confirming our inference8 from the Tully-Fisher relation. Such tight correlations are very rare in extragalactic astronomy, and the Y-s relation is probably the real cause of an inferred Y-L relation. The latter is much weaker because surface brightness and luminosity are only weakly correlated21-24.

The mass to light ratio Y (in M/L) determined with (a) Newtonian dynamics and (b) MOND, plotted as a function of central surface brightness. The mass determination for Newtonian dynamics is M = V^2 R*/G and for MOND is M = V^4/(G a0). We have adopted as a consistent definition of the optical radius R* four scale lengths of the exponential optical disc. This is where discs tend to have edges, and contains essentially all the light21,22. The definition of R* makes a tremendous difference to the absolute value of the mass to light ratio in the Newtonian case, but makes no difference at all to the functional relation, which will be present regardless of the precise definition. These mass measurements are more sensitive to the inclination corrections than is the Tully-Fisher relation since there is a sin^-2(i) term in the Newtonian case and one of sin^-4(i) for MOND. It is thus very important that the inclination be accurately measured, and we have retained only galaxies which have adequate inclination determinations — error bars are plotted for a nominal uncertainty of 6 degrees. The sensitivity to inclination manifests itself as an increase in the scatter from (a) to (b). The derived mass is also very sensitive to the measured value of the asymptotic velocity itself, so we have used only those galaxies for which this can be taken directly from a full rotation curve19,20,42. We do not employ profile widths; the velocity measurements here are independent of those in Fig. 1. In both cases, we have subtracted off the known atomic gas mass19,20,42, so what remains is essentially only the stars and any dark matter that may exist. A very strong correlation (regression coefficient = 0.85) is apparent in (a): this is the mass to light ratio — surface brightness conspiracy. The slope is consistent (within the errors) with the theoretical expectation s ~ Y^-2 derived from the Tully-Fisher relation8.
At the highest surface brightnesses, the mass to light ratio is similar to that expected for the stellar population. At the faintest surface brightnesses, it has increased by a factor of nearly ten, indicating increasing dark matter domination within the optical disc as surface brightness decreases or a very systematic change in the stellar population, or both. In (b), the mass to light ratio scatters about a constant value of 2. This mean value, and the lack of a trend, is what is expected for stellar populations17,21-24.

The Y-s relation is not predicted by any dark matter theory25,26. It can not be purely an effect of the stellar mass to light ratio, since no other stellar population indicator such as color21-24 or metallicity27,28 is so tightly correlated with surface brightness. In principle it could be an effect of the stellar mass fraction, as the gas mass to light ratio follows a relation very similar to that of total mass to light ratio20. We correct for this in Fig. 2 by subtracting the known atomic gas mass so that Y refers only to the stars and any dark matter. We do not correct for molecular gas, as this has never been detected in low surface brightness galaxies to rather sensitive limits30 so the total mass of such gas is unimportant if current estimates31 of the variation of the CO to H2 conversion factor with metallicity are correct. These corrections have no discernible effect at all in Fig. 2 because the dark mass is totally dominant. It is thus very hard to see how any evolutionary effect in the luminous matter can be relevant.

In the case of MOND, the mass to light ratio directly reflects that of the stellar population once the correction for gas mass fraction is made. There is no trend of Y* with surface brightness (Fig. 2b), a more natural result and one which is consistent with our studies of the stellar populations of low surface brightness galaxies21-23. These suggest that Y* should be roughly constant or slightly declining as surface brightness decreases, with much scatter. The mean value Y* = 2 is also expected from stellar evolutionary theory17, which always gives a number 0 < Y* < 10 and usually gives 0.5 < Y* < 3 for disk galaxies. This is particularly striking since Y* is the only free parameter allowed to MOND, and the observed mean is very close to that directly observed29 in the Milky Way (1.7 ± 0.5 M/L).

The essence of the problem is illustrated by Fig. 3, which shows the rotation curves of two galaxies of essentially the same luminosity but vastly different surface brightnesses. Though the asymptotic velocities are the same (as required by the Tully-Fisher relation), the rotation curve of the low surface brightness galaxy rises less quickly than that of the high surface brightness galaxy as expected if the mass is distributed like the light. Indeed, the ratio of surface brightnesses is correct to explain the ratio of velocities at small radii if both galaxies have similar mass to light ratios. However, if this continues to be the case as R increases, the low surface brightness galaxy should reach a lower asymptotic velocity simply because R* must be larger for the same L. That this does not occur is the problem, and poses very significant systematic constraints on the dark matter distribution.

The rotation curves of two galaxies, one of high surface brightness11 (NGC 2403; open circles) and one of low surface brightness19 (UGC 128; filled circles). The two galaxies have very nearly the same asymptotic velocity, and hence luminosity, as required by the Tully-Fisher relation. However, they have central surface brightnesses which differ by a factor of 13. The lines give the contributions to the rotation curves of the various components. Green: luminous disk. Blue: dark matter halo. Red: luminous disk (stars and gas) with MOND. Solid lines refer to NGC 2403 and dotted lines to UGC 128. The fits for NGC 2403 are taken from ref. 11, for which the stars have Y* = 1.5 M/L. For UGC 128, no specific fit is made: the blue and green dotted lines are simply the NGC 2403 fits scaled by the ratio of disk scale lengths h. This provides a remarkably good description of the UGC 128 rotation curve and illustrates one possible manifestation of the fine tuning problem: if disks have similar Y, the halo parameters ρ0 and R0 must scale with the disk parameters s0 and h while conspiring to keep the product ρ0 R0^2 fixed at any given luminosity. Note also that the halo of NGC 2403 gives an adequate fit to the rotation curve of UGC 128. This is another possible manifestation of the fine tuning problem: all galaxies of the same luminosity have the same halo, with Y systematically varying with s0 so that Y* goes to zero as s0 goes to zero. Neither of these is exactly correct because the contribution of the gas can not be set to zero as is mathematically possible with the stars. This causes the resulting fine tuning problems to be even more complex, involving more parameters. Alternatively, the green dotted line is the rotation curve expected by MOND for a galaxy with the observed luminous mass distribution of UGC 128.

Satisfying the Tully-Fisher relation has led to some expectation that haloes all have the same density structure. This simplest possibility is immediately ruled out. In order to obtain L ~ Va^4 ~ M S, one might suppose that the mass surface density S is constant from galaxy to galaxy, irrespective of the luminous surface density s. This achieves the correct asymptotic velocity Va, but requires that the mass distribution, and hence the complete rotation curve, be essentially identical for all galaxies of the same luminosity. This is obviously not the case (Fig. 3), as the rotation curves of lower surface brightness galaxies rise much more gradually than those of higher surface brightness galaxies (also a prediction4 of MOND). It might be possible to have approximately constant density haloes if the highest surface brightness disks are maximal and the lowest minimal in their contribution to the inner parts of the rotation curves, but this then requires fine tuning of Y*, with this systematically decreasing with surface brightness.

The expected form of the halo mass distribution depends on the dominant form of dark matter. This could exist in three general categories: baryonic (e.g., MACHOs), hot (e.g., neutrinos), and cold exotic particles (e.g., WIMPs). The first two make no specific predictions. Baryonic dark matter candidates are most subject to direct detection, and most plausible candidates have been ruled out10, with remaining suggestions of necessity sounding increasingly contrived32. Hot dark matter is not relevant to the present problem. Even if neutrinos have a small mass, their velocities considerably exceed the escape velocities of the haloes of low mass galaxies where the problem is most severe. Cosmological simulations involving exotic cold dark matter33,34 have advanced to the point where predictions are being made about the density structure of haloes. These take the form33,34 ρ(R) = ρH/[R(R + RH)^b], where ρH characterises the halo density and RH its radius, with b ~ 2 to 3. The characteristic density depends on the mean density of the universe at the collapse epoch, and is generally expected to be greater for lower mass galaxies since these collapse first in such scenarios. This goes in the opposite sense of the observations, which show that low mass and low surface brightness galaxies are less, not more, dense. The observed behaviour is actually expected in scenarios which do not smooth on a particular mass scale and hence allow galaxies of the same mass to collapse at a variety of epochs25, but in this case the Tully-Fisher relation should not be universal. Worse, note that at small R < RH, ρ(R) ~ R^-1. It has already been noted32,35 that such a steep interior density distribution is completely inconsistent with the few (4) analysed observations of dwarf galaxies. Our data19,20 confirm and considerably extend this conclusion for 24 low surface brightness galaxies over a wide range in luminosity.

The failure of the predicted exotic cold dark matter density distribution either rules out this form of dark matter, indicates some failing in the simulations (in spite of wide-spread consensus), or requires some mechanism to redistribute the mass. Feedback from star formation is usually invoked for the last of these, but this can not work for two reasons. First, an objection in principle: a small mass of stars and gas must have a dramatic impact on the distribution of the dominant dark mass, with which they can only interact gravitationally. More mass redistribution is required in less luminous galaxies since they start out denser but end up more diffuse; of course progressively less baryonic material is available to bring this about as luminosity declines. Second, an empirical objection: in this scenario, galaxies explode and gas is lost. However, progressively fainter and lower surface brightness galaxies, which need to suffer more severe explosions, are actually very gas rich.

Observationally, dark matter haloes are inferred to have density distributions1,2,11 with constant density cores, ρ(R) = ρ0/[1 + (R/R0)^γ]. Here, ρ0 is the core density and R0 is the core size, with γ ~ 2 being required to produce flat rotation curves. For γ = 2, the rotation curve resulting from this mass distribution is V(R) = Va [1 - (R0/R) tan^-1(R/R0)]^1/2, where the asymptotic velocity is Va = (4πG ρ0 R0^2)^1/2. To satisfy the Tully-Fisher relation, Va, and hence the product ρ0 R0^2, must be the same for all galaxies of the same luminosity. To decrease the rate of rise of the rotation curves as surface brightness decreases, R0 must increase. Together, these two require a fine tuning conspiracy to keep the product ρ0 R0^2 constant while R0 must vary with the surface brightness at a given luminosity. Luminosity and surface brightness themselves are only weakly correlated, so there exists a wide range in one parameter at any fixed value of the other. Thus the structural properties of the invisible dark matter halo dictate those of the luminous disk, or vice versa. So, s and L give the essential information about the mass distribution without recourse to kinematic information.
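For reference, the γ = 2 cored-halo rotation curve quoted above is easy to evaluate numerically. A minimal sketch (the Va and R0 values are illustrative round numbers, not fits to any galaxy):

```python
import math

def v_halo(R, Va, R0):
    """Rotation curve of a cored (gamma = 2) halo:
    V(R) = Va * [1 - (R0/R) * arctan(R/R0)]^(1/2).
    Rises from ~0 at small R and approaches Va at large R."""
    return Va * math.sqrt(1.0 - (R0 / R) * math.atan(R / R0))

# Illustrative parameters: Va in km/s, radii in units of kpc.
Va, R0 = 200.0, 2.0
for R in (0.5, 2.0, 10.0, 50.0):
    print(R, round(v_halo(R, Va, R0), 1))  # slow rise toward the flat Va
```

Because Va^2 is proportional to ρ0 R0^2, holding Va fixed while growing the core R0 (to slow the inner rise) forces ρ0 down in lockstep; this is the fine tuning described in the text.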

A strict s-ρ0-R0 relation is rigorously obeyed only if the haloes are spherical and dominate throughout. This is probably a good approximation for low surface brightness galaxies but may not be for those of the highest surface brightness. However, a significant non-halo contribution can at best replace one fine tuning problem with another (e.g., surface brightness being strongly correlated with the stellar population mass to light ratio instead of halo core density) and generally causes additional conspiracies.

There are two perspectives for interpreting these relations, with the preferred perspective depending strongly on the philosophical attitude one has towards empirical and theoretical knowledge. One view is that these are real relations which galaxies and their haloes obey. As such, they provide a positive link between models of galaxy formation and evolution and reality.

The other view is that this list of fine tuning requirements makes it rather unattractive to maintain the dark matter hypothesis. MOND provides an empirically more natural explanation for these observations. In addition to the Tully-Fisher relation, MOND correctly predicts the systematics of the shapes of the rotation curves of low surface brightness galaxies19,20 and fits the specific case of UGC 128 (Fig. 3). Low surface brightness galaxies were stipulated4 to be a stringent test of the theory because they should be well into the regime a < a0. This is now observed to be true, and to the limit of observational accuracy the predictions of MOND are confirmed. The critical acceleration scale a0 is apparently universal, so there is a single force law acting in galactic disks for which MOND provides the correct description. The cause of this could be either a particular dark matter distribution36 or a real modification of gravity. The former is difficult to arrange, and a single force law strongly supports the latter hypothesis since in principle the dark matter could have any number of distributions which would give rise to a variety of effective force laws. Even if MOND is not correct, it is essential to understand why it so closely describes the observations. Though the data can not exclude Newtonian dynamics, with a working empirical alternative (really an extension) at hand, we would not hesitate to reject as incomplete any less venerable hypothesis.

Nevertheless, MOND itself remains incomplete as a theory, being more of a Kepler’s Law for galaxies. It provides only an empirical description of kinematic data. While successful for disk galaxies, it was thought to fail in clusters of galaxies37. Recently it has been recognized that there exist two missing mass problems in galaxy clusters, one of which is now solved38: most of the luminous matter is in X-ray gas, not galaxies. This vastly improves the consistency of MOND with cluster dynamics39. The problem with the theory remains a reconciliation with Relativity and thereby standard cosmology (which is itself in considerable difficulty38,40), and a lack of any prediction about gravitational lensing41. These are theoretical problems which need to be more widely addressed in light of MOND’s empirical success.

ACKNOWLEDGEMENTS. We thank R. Sanders and M. Milgrom for clarifying aspects of a theory with which we were previously unfamiliar. SSM is grateful to the Kapteyn Astronomical Institute for enormous hospitality during visits when much of this work was done. [Note added in 2020: this work was supported by a cooperative grant funded by the EU and would no longer be possible thanks to Brexit.]

REFERENCES

  1. Rubin, V. C. Science 220, 1339-1344 (1983).
  2. Sancisi, R. & van Albada, T. S. in Dark Matter in the Universe, IAU Symp. No. 117, (eds. Knapp, G. & Kormendy, J.) 67-80 (Reidel, Dordrecht, 1987).
  3. Milgrom, M. Astrophys. J. 270, 365-370 (1983).
  4. Milgrom, M. Astrophys. J. 270, 371-383 (1983).
  5. Bekenstein, J. D., & Milgrom, M. Astrophys. J. 286, 7-14 (1984).
  6. Mannheim, P. D., & Kazanas, D. Astrophys. J. 342, 635-651 (1989).
  7. Sanders, R. H. Astron. Astrophys. Rev. 2, 1-28 (1990).
  8. Zwaan, M.A., van der Hulst, J. M., de Blok, W. J. G. & McGaugh, S. S. Mon. Not. R. astr. Soc., 273, L35-L38, (1995).
  9. Zaritsky, D. & White, S. D. M. Astrophys. J. 435, 599-610 (1994).
  10. Carr, B. Ann. Rev. Astr. Astrophys., 32, 531-590 (1994).
  11. Begeman, K. G., Broeils, A. H. & Sanders, R. H. Mon. Not. R. astr. Soc. 249, 523-537 (1991).
  12. Kent, S. M. Astr. J. 93, 816-832 (1987).
  13. Milgrom, M. Astrophys. J. 333, 689-693 (1988).
  14. Milgrom, M. & Braun, E. Astrophys. J. 334, 130-134 (1988).
  15. Tully, R. B., & Fisher, J. R. Astr. Astrophys., 54, 661-673 (1977).
  16. Aaronson, M., Huchra, J., & Mould, J. Astrophys. J. 229, 1-17 (1979).
  17. Larson, R. B. & Tinsley, B. M. Astrophys. J. 219, 48-58 (1978).
  18. Sprayberry, D., Bernstein, G. M., Impey, C. D. & Bothun, G. D. Astrophys. J. 438, 72-82 (1995).
  19. van der Hulst, J. M., Skillman, E. D., Smith, T. R., Bothun, G. D., McGaugh, S. S. & de Blok, W. J. G. Astr. J. 106, 548-559 (1993).
  20. de Blok, W. J. G., McGaugh, S. S., & van der Hulst, J. M. Mon. Not. R. astr. Soc. (submitted).
  21. McGaugh, S. S., & Bothun, G. D. Astr. J. 107, 530-542 (1994).
  22. de Blok, W. J. G., van der Hulst, J. M., & Bothun, G. D. Mon. Not. R. astr. Soc. 274, 235-259 (1995).
  23. Ronnback, J., & Bergvall, N. Astr. Astrophys., 292, 360-378 (1994).
  24. de Jong, R. S. Ph.D. thesis, University of Groningen (1995).
  25. Mo, H. J., McGaugh, S. S. & Bothun, G. D. Mon. Not. R. astr. Soc. 267, 129-140 (1994).
  26. Dalcanton, J. J., Spergel, D. N., Summers, F. J. Astrophys. J., (in press).
  27. McGaugh, S. S. Astrophys. J. 426, 135-149 (1994).
  28. Ronnback, J., & Bergvall, N. Astr. Astrophys., 302, 353-359 (1995).
  29. Kuijken, K. & Gilmore, G. Mon. Not. R. astr. Soc., 239, 605-649 (1989).
  30. Schombert, J. M., Bothun, G. D., Impey, C. D., & Mundy, L. G. Astron. J., 100, 1523-1529 (1990).
  31. Wilson, C. D. Astrophys. J. 448, L97-L100 (1995).
  32. Moore, B. Nature 370, 629-631 (1994).
  33. Navarro, J. F., Frenk, C. S., & White, S. D. M. Mon. Not. R. astr. Soc., 275, 720-728 (1995).
  34. Cole, S. & Lacey, C. Mon. Not. R. astr. Soc., in press.
  35. Flores, R. A. & Primack, J. R. Astrophys. J. 427, 1-4 (1994).
  36. Sanders, R. H., & Begeman, K. G. Mon. Not. R. astr. Soc. 266, 360-366 (1994).
  37. The, L. S., & White, S. D. M. Astron. J., 95, 1642-1651 (1988).
  38. White, S. D. M., Navarro, J. F., Evrard, A. E. & Frenk, C. S. Nature 366, 429-433 (1993).
  39. Sanders, R. H. Astron. Astrophys. 284, L31-L34 (1994).
  40. Bolte, M., & Hogan, C. J. Nature 376, 399-402 (1995).
  41. Bekenstein, J. D. & Sanders, R. H. Astrophys. J. 429, 480-490 (1994).
  42. Broeils, A. H., Ph.D. thesis, Univ. of Groningen (1992).

Statistical detection of the external field effect from large scale structure


A unique prediction of MOND

One curious aspect of MOND as a theory is the External Field Effect (EFE). The modified force law depends on an absolute acceleration scale, with motion being amplified over the Newtonian expectation when the force per unit mass falls below the critical acceleration scale a0 = 1.2 x 10^-10 m/s/s. Usually we consider a galaxy to be an island universe: it is a system so isolated that we need consider only its own gravity. This is an excellent approximation in most circumstances, but in principle all sources of gravity from all over the universe matter.

The EFE in dwarf satellite galaxies

An example of the EFE is provided by dwarf satellite galaxies – small galaxies orbiting a larger host. It can happen that the stars in such a dwarf feel a stronger acceleration towards the host than towards each other – the external field exceeds the internal self-gravity of the dwarf. In this limit, they’re more a collection of stars in a common orbit around the larger host than they are a self-gravitating island universe.

A weird consequence of the EFE in MOND is that a dwarf galaxy orbiting a large host will behave differently than it would if it were isolated in the depths of intergalactic space. MOND obeys the Weak Equivalence Principle but does not obey local position invariance. That means it violates the Strong Equivalence Principle while remaining consistent with the Einstein Equivalence Principle, a subtle but important distinction about how gravity self-gravitates.

Nothing like this happens conventionally, with or without dark matter. Gravity is local; it doesn’t matter what the rest of the universe is doing. Larger systems don’t impact smaller ones except in the extreme of tidal disruption, where the null geodesics diverge within the lesser object because it is no longer small compared to the gradient in the gravitational field. An amusing, if extreme, example is spaghettification. The EFE in MOND is a much subtler effect: when near a host, there is an extra source of acceleration, so a dwarf satellite is not as deep in the MOND regime as the equivalent isolated dwarf. Consequently, there is less of a boost from MOND: stars move a little slower, and conventionally one would infer a bit less dark matter.

The importance of the EFE in dwarf satellite galaxies is well documented. It was essential to the a priori prediction of the velocity dispersion in Crater 2 (where MOND correctly anticipated a velocity dispersion of just 2 km/s, while the conventional expectation with dark matter was more like 17 km/s) and to the correct prediction of that for NGC 1052-DF2 (13 rather than 20 km/s). Indeed, one can see the difference between isolated and EFE cases in matched pairs of dwarf satellites of Andromeda. Andromeda has enough satellites that one can pick out otherwise indistinguishable dwarfs where one happens to be subject to the EFE while its twin is practically isolated. The speeds of stars in the dwarfs affected by the EFE are consistently lower, as predicted. For example, the relatively isolated dwarf satellite of Andromeda known as And XXVIII has a velocity dispersion of 5 km/s, while its near twin And XVII (which has very nearly the same luminosity and size) is affected by the EFE and consequently has a velocity dispersion of only 3 km/s.
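To see the scale of the numbers involved, here is a rough Python sketch using Milgrom's relation for an isolated pressure-supported system deep in the MOND regime (sigma^4 = 4 G M a0 / 81). The Crater 2-like mass below is an illustrative round number, not the value used in the published prediction; the point is only that the isolated estimate lands at a few km/s, and the EFE from the Milky Way pushes it lower still:

```python
G = 6.674e-11       # m^3 kg^-1 s^-2
A0 = 1.2e-10        # MOND acceleration scale, m s^-2
M_SUN = 1.989e30    # kg

def sigma_isolated(M):
    """Velocity dispersion of an isolated system deep in the MOND regime,
    from Milgrom's relation sigma^4 = 4 * G * M * a0 / 81 (SI units)."""
    return (4.0 * G * M * A0 / 81.0) ** 0.25

# Illustrative stellar mass of 4e5 solar masses (a hypothetical round number):
M = 4e5 * M_SUN
print(sigma_isolated(M) / 1e3)  # ~4 km/s if isolated; the host's external
                                # field lowers the predicted dispersion
```

The quarter-power scaling again makes the prediction insensitive to the exact mass, which is why such a priori predictions are possible at all.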

The case of dwarf satellites is the most obvious place where the EFE occurs. In principle, it applies everywhere all the time. It is most obvious in dwarf satellites because the external field can be comparable to or even greater than the internal field. In principle, the EFE also matters even when smaller than the internal field, albeit only a little bit: the extra acceleration causes an object to be not quite as deep in the MOND regime.

The EFE from large scale structure

Even in the depths of intergalactic space, there is some non-zero acceleration due to everything else in the universe. This is very reminiscent of Mach’s Principle, which Einstein reputedly struggled hard to incorporate into General Relativity. I’m not going to solve that in a blog post, but note that MOND is much more in the spirit of Mach and Lorentz and Einstein than its detractors generally seem to presume.

Here I describe the apparent detection of the subtle effect of a small but non-zero background acceleration field. This is very different from the case of dwarf satellites where the EFE can exceed the internal field. It is just a small tweak to the dominant internal fields of very nearly isolated island universes. It’s like the lapping of waves on their shores: hardly relevant to the existence of the island, but a pleasant feature as you walk along the beach.

The universe has structure; there are places with lots of galaxies (groups, clusters, walls, sheets) and other places with very few (voids). This large scale structure should impose a low-level but non-zero acceleration field that should vary in amplitude from place to place and affect all galaxies in their outskirts. For this reason, we do not expect rotation curves to remain flat forever; even in MOND, there comes an over-under point where the gravity of everything else takes over from any individual object. A test particle at the see-saw balance point between the Milky Way and Andromeda may not know which galaxy to call daddy, but it sure knows they’re both there. The background acceleration field matters to such diverse subjects as groups of galaxies and Lyman alpha absorbers at high redshift.

As an historical aside, Lyman alpha absorbers at high redshift were initially found to deviate from MOND by many orders of magnitude. That was without the EFE. With the EFE, the discrepancy is much smaller, but persists. The amplitude of the EFE at high redshift is very uncertain. I expect it is higher in MOND than estimated because structure forms fast in MOND; this might suffice to solve the problem. Whether or not this is the case, it makes a good example of how a simple calculation can make MOND seem way off when it isn’t. If I had a dollar for every time I’ve seen that happen, I could fly first class.

I made an early estimate of the average intergalactic acceleration field, finding the typical environmental acceleration eenv to be about 2% of a0 (eenv ~ 2.6 × 10^-12 m/s/s, see just before eq. 31). This is highly uncertain and should be location dependent, differing a lot from voids to richer environments. It is hard to find systems that probe much below 10% of a0, and the effect it would cause on the average (non-satellite) galaxy is rather subtle, so I have mostly neglected this background acceleration as, well, pretty negligible.
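For the record, the 2% figure is just the ratio of the quoted background acceleration to the usual MOND scale a0 ≈ 1.2 × 10^-10 m/s/s:

```python
a0 = 1.2e-10      # MOND acceleration scale, m/s/s
e_env = 2.6e-12   # early estimate of the typical environmental acceleration, m/s/s

# Express the background field as a percentage of a0:
percent_of_a0 = 100 * e_env / a0
print(round(percent_of_a0, 1))  # -> 2.2, i.e., "about 2% of a0"
```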

This changed recently thanks to Kyu-Hyun Chae and Harry Desmond. We met at a conference in Bonn a year ago September. (Remember travel? I used to complain about how much travel work involved. Now I miss it – especially as experience demonstrates that some things really do require in-person interaction.) Kyu thought we should be able to tease out the EFE from SPARC data in a statistical way, and Harry offered to make a map of the environmental acceleration based on the locations of known galaxies. This is a distinct improvement over the crude average of my ancient first estimate as it specifies the EFE that ought to occur at the location of each individual galaxy. The results of this collaboration were recently published open-access in the Astrophysical Journal.

This did not come easily. I think I mentioned that the predicted effect is subtle. We’re no longer talking about the effect of a big host on a tiny dwarf up close to it. We’re talking about the background of everything on giant galaxies. Space is incomprehensibly vast, so every galaxy is far, far away, and the expected effect is small. So my first reaction was “Sure. Great idea. No way can we do this with current data.” I am pleased to report that I was wrong: with lots of hard work, perseverance, and the power of Bayesian statistics, we have obtained a positive detection of the EFE.

One reason for my initial skepticism was the importance of data quality. The rotation curves in SPARC are a heterogeneous lot, being the accumulated work of an entire community of radio astronomers over the course of several decades. Some galaxies are bright and their data stupendous, others… not so much. Having myself started out working on low surface brightness galaxies – the least stupendous of them all – and having spent much of the past quarter century working long and hard to improve the data, I tend to be rather skeptical of what can be accomplished.

An example of a galaxy with good data is NGC 5055 (aka M63, aka the Sunflower galaxy, pictured atop as viewed by the Hubble Space Telescope). NGC 5055 happens to reside in a relatively high acceleration environment for a spiral, with eenv ~ 9% of a0. For comparison, the acceleration at the last measured point of its rotation curve is about 15% of a0. So they’re within a factor of two, which is pretty much the strongest effect in the whole sample. This additional bit of acceleration means NGC 5055 is not quite as deep in the MOND regime as it would be all by its lonesome, with the net effect that the rotation curve is predicted to decline a little bit faster than it would in the isolated case, as you can see in the figure below. See that? Or is it too subtle? I think I mentioned the effect was pretty subtle.

The rotation curve of NGC 5055 (velocity in km/s vs. radius in kpc). The blue and green bands are the rotation expected from the observed stars and gas. The red band is the MOND fit with (left) and without (right) the external field effect (EFE) from Chae et al. ΔBIC is a statistical measure that indicates that the fit with the EFE is a meaningful improvement over that without (in technical terms, “way better”).
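For readers unfamiliar with it, the BIC scores a fit by its likelihood minus a penalty for the number of free parameters, so a positive ΔBIC means the EFE fit wins even after paying for its extra parameter. A minimal sketch with made-up numbers (illustrative only, not the values from Chae et al.):

```python
import math

def bic(log_likelihood, k, n):
    """Bayesian Information Criterion: k ln(n) - 2 ln(L-hat).
    k = number of fitted parameters, n = number of data points.
    Lower is better; the k ln(n) term penalizes model complexity."""
    return k * math.log(n) - 2.0 * log_likelihood

# Hypothetical rotation-curve fit: adding one parameter (the EFE strength e)
# raises the log-likelihood enough to more than pay its complexity penalty.
n = 30                           # number of rotation-curve points (invented)
bic_no_efe = bic(-55.0, k=3, n=n)
bic_efe    = bic(-40.0, k=4, n=n)

delta_bic = bic_no_efe - bic_efe
# A large positive delta_bic favors the EFE model; by the usual rules of
# thumb, a difference above ~10 is considered very strong evidence.
print(delta_bic > 10)
```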

That this case works well is encouraging. I like to start with a good case: if you can’t see what you’re looking for in the best of the data, stop. But I still didn’t hold out much hope for the rest of the sample. Then Kyu showed that the most isolated galaxies – those subject to the lowest environmental accelerations – showed no effect. That sounds boring, but null results are important. It could happen that the environmental acceleration was a promiscuous free parameter that appears to improve a fit without really adding any value. That it declined to do that in cases where it shouldn’t was intriguing. The galaxies in the most extreme environments show an effect when they should, but don’t when they shouldn’t.

Statistical detection of the EFE

Statistics become useful for interpreting the entirety of the large sample of galaxies. Because of the variability in data quality, we knew some cases would go astray. But we only need to know if the fit for any galaxy is improved relative to the case where the EFE is neglected, so each case sets its own standard. This relative measure is more robust than analyses that require an assessment of the absolute fit quality. All we’re really asking the data is whether the presence of an EFE helps. To my initial and ongoing amazement, it does.

The environmental acceleration predicted by the distribution of known galaxies, eenv, against the amplitude e of an external field that provides the best-fit to each rotation curve (Fig. 5 of Chae et al).

The figure above shows the amplitude of the EFE that best fits each rotation curve along the x-axis. The median is 5% of a0. This is non-zero at 4.7σ, and our detection of the EFE is comparable in quality to that of the Baryon Acoustic Oscillation or the accelerated expansion of the universe when these were first accepted. Of course, these were widely anticipated effects, while the EFE is expected only in MOND. Personally, I think it is a mistake to obsess over the number of σ, which is not as robust as people like to think. I am more impressed that the peak of the color map (the darkest color in the data density map above) is positive definite and clearly non-zero.

Taken together, the data prefer a small but clearly non-zero EFE. That’s a statistical statement for the whole sample. Of course, the amplitude (e) of the EFE inferred for individual galaxies is uncertain, and is occasionally negative. This is unphysical: it shouldn’t happen. Nevertheless, it is statistically expected given the amount of uncertainty in the data: for error bars this size, some of the data should spill over to e < 0.
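A quick simulation illustrates the point. Draw mock best-fit e values around a positive true value with plausible scatter (all numbers invented for illustration, not the actual measurements): some inevitably come out negative even though the underlying distribution is centered well above zero, while the sample median stays positive.

```python
import random
import statistics

random.seed(42)

true_e = 0.05   # underlying EFE strength, in units of a0 (illustrative)
sigma  = 0.04   # per-galaxy measurement uncertainty (illustrative)

# Mock "best-fit" e values for a SPARC-sized sample of galaxies:
fits = [random.gauss(true_e, sigma) for _ in range(150)]

negatives = sum(e < 0 for e in fits)

# Some individual fits spill below zero purely from measurement scatter...
print(negatives > 0)
# ...yet the sample median remains positive, reflecting the true e > 0.
print(statistics.median(fits) > 0)
```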

I didn’t initially think we could detect the EFE in this way because I expected that the error bars would wash out the effect. That is, I expected the colored blob above would be smeared out enough that the peak would encompass zero. That’s not what happened, me of little faith. I am also encouraged that the distribution skews positive: the error bars scatter points in both directions, and wind up positive more often than negative. That’s an indication that they started from an underlying distribution centered on e > 0, not e = 0.

The y-axis in the figure above is the estimate of the environmental acceleration based on the 2M++ galaxy catalog. This is entirely independent of the best fit e from rotation curves. It is the expected EFE from the distribution of mass that we know about. The median environmental EFE found in this way is 3% of a0. This is pretty close to the 2% I estimated over 20 years ago. Given the uncertainties, it is quite compatible with the median of 5% found from the rotation curve fits.

In an ideal world where all quantities are perfectly known, there would be a correlation between the external field inferred from the best fit to the rotation curves and that of the environment predicted by large scale structure. We are nowhere near to that ideal. I can conceive of improving both measurements, but I find it hard to imagine getting to the point where we can see a correlation between e and eenv. The data quality required on both fronts would be stunning.

Then again, I never thought we could get this far, so I am game to give it a go.