25 years a heretic

People seem to like to do retrospectives at year’s end. I take a longer view, but the end of 2020 seems like a fitting time to do that. Below is the text of a paper I wrote in 1995 with collaborators at the Kapteyn Institute of the University of Groningen. The last edit date is from December of that year, so this text (in plain TeX, not LaTeX!) is now a quarter century old. I am just going to cut & paste it as-was; I even managed to recover the original figures and translate them into something web-friendly (postscript to jpeg). This is exactly how it was.

This was my first attempt to express in the scientific literature my concerns for the viability of the dark matter paradigm, and my puzzlement that the only theory to get any genuine predictions right was MOND. It was the hardest admission in my career that this could be even a remote possibility. Nevertheless, intellectual honesty demanded that I report it. To fail to do so would be an act of reality denial antithetical to the foundational principles of science.

It was never published. There were three referees. Initially, one was positive, one was negative, and one insisted that rotation curves weren’t flat. There was one iteration; this is the resubmitted version in which the concerns of the second referee were addressed to his apparent satisfaction by making the third figure a lot more complicated. The third referee persisted that none of this was valid because rotation curves weren’t flat. Seems like he had a problem with something beyond the scope of this paper, but the net result was rejection.

One valid concern that ran through the refereeing process from all sides was “what about everything else?” This is a good question that couldn’t fit into a short letter like this. Thanks to the support of Vera Rubin and a Carnegie Fellowship, I spent the next couple of years looking into everything else. The results were published in 1998 in a series of three long papers: one on dark matter, one on MOND, and one making detailed fits.

This had started from a very different place intellectually with my efforts to write a paper on galaxy formation that would have been similar to contemporaneous papers like Dalcanton, Spergel, & Summers and Mo, Mao, & White. This would have followed from my thesis and from work with Houjun Mo, who was an office mate when we were postdocs at the IoA in Cambridge. (The ideas discussed in Mo, McGaugh, & Bothun have been reborn recently in the galaxy formation literature under the moniker of “assembly bias.”) But I had realized by then that my ideas – and those in the papers cited – were wrong. So I didn’t write a paper that I knew to be wrong. I wrote this one instead.

Nothing substantive has changed since. Reading it afresh, I’m amazed how many of the arguments of the past quarter century were anticipated here. As a scientific community, we are stuck in a rut, and we seem to prefer to spin the wheels, digging ourselves in deeper, rather than consider the plain if difficult path out.


Testing hypotheses of dark matter and alternative gravity with low surface density galaxies

The missing mass problem remains one of the most vexing in astrophysics. Observations clearly indicate either the presence of a tremendous amount of as yet unidentified dark matter1,2, or the need to modify the law of gravity3-7. These hypotheses make vastly different predictions as a function of density. Observations of the rotation curves of galaxies of much lower surface brightness than previously studied therefore provide a powerful test for discriminating between them. The dark matter hypothesis requires a surprisingly strong relation between the surface brightness and mass to light ratio8, placing stringent constraints on theories of galaxy formation and evolution. Alternatively, the observed behaviour is predicted4 by one of the hypothesised alterations of gravity known as modified Newtonian dynamics3,5 (MOND).

Spiral galaxies are observed to have asymptotically flat [i.e., V(R) ~ constant for large R] rotation curves that extend well beyond their optical edges. This trend continues for as far (many, sometimes > 10 galaxy scale lengths) as can be probed by gaseous tracers1,2 or by the orbits of satellite galaxies9. Outside a galaxy’s optical radius, the gravitational acceleration is aN = GM/R² = V²/R, so one expects V(R) ~ R^(-1/2). This Keplerian behaviour is not observed in galaxies.
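To make the expected Keplerian decline concrete, here is a small numerical sketch; the 10¹¹ solar mass point source is an assumed illustrative value, not a measurement from the paper:

```python
import math

G = 6.674e-11     # gravitational constant [m^3 kg^-1 s^-2]
M_SUN = 1.989e30  # solar mass [kg]
KPC = 3.086e19    # kiloparsec [m]

def v_keplerian(M, R):
    """Circular velocity from aN = G M / R^2 = V^2 / R, i.e. V = sqrt(G M / R)."""
    return math.sqrt(G * M / R)

M = 1e11 * M_SUN  # assumed point mass, for scale only
v5 = v_keplerian(M, 5 * KPC)
v20 = v_keplerian(M, 20 * KPC)

print(v5 / 1e3, v20 / 1e3)  # velocities in km/s
print(v5 / v20)             # sqrt(20/5) = 2: the R^(-1/2) decline
```

Observed rotation curves stay flat instead of dropping by this factor of two, which is the whole puzzle.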

One approach to this problem is to increase M in the outer parts of galaxies in order to provide the extra gravitational acceleration necessary to keep the rotation curves flat. Indeed, this is the only option within the framework of Newtonian gravity since both V and R are directly measured. The additional mass must be invisible, dominant, and extend well beyond the optical edge of the galaxies.

Postulating the existence of this large amount of dark matter which reveals itself only by its gravitational effects is a radical hypothesis. Yet the kinematic data force it upon us, so much so that the existence of dark matter is generally accepted. Enormous effort has gone into attempting to theoretically predict its nature and experimentally verify its existence, but to date there exists no convincing detection of any hypothesised dark matter candidate, and many plausible candidates have been ruled out10.

Another possible solution is to alter the fundamental equation aN = GM/R2. Our faith in this simple equation is very well founded on extensive experimental tests of Newtonian gravity. Since it is so fundamental, altering it is an even more radical hypothesis than invoking the existence of large amounts of dark matter of completely unknown constituent components. However, a radical solution is required either way, so both possibilities must be considered and tested.

A phenomenological theory specifically introduced to address the problem of the flat rotation curves is MOND3. It has no other motivation and so far there is no firm physical basis for the theory. It provides no satisfactory cosmology, having yet to be reconciled with General Relativity. However, with the introduction of one new fundamental constant (an acceleration a0), it is empirically quite successful in fitting galaxy rotation curves11-14. It hypothesises that for accelerations a < a0 = 1.2 × 10⁻¹⁰ m s⁻², the effective acceleration is given by aeff = (aN a0)^(1/2). This simple prescription works well with essentially only one free parameter per galaxy, the stellar mass to light ratio, which is subject to independent constraint by stellar evolution theory. More importantly, MOND makes predictions which are distinct and testable. One specific prediction4 is that the asymptotic (flat) value of the rotation velocity, Va, is Va = (G M a0)^(1/4). Note that Va does not depend on R, but only on M in the regime of small accelerations (a < a0).
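The deep-MOND prescription is simple enough to check numerically. A minimal sketch, assuming an illustrative stellar mass of 10¹¹ solar masses (not a value from the paper):

```python
import math

G = 6.674e-11      # gravitational constant [m^3 kg^-1 s^-2]
A0 = 1.2e-10       # MOND acceleration constant a0 [m s^-2]
M_SUN = 1.989e30   # solar mass [kg]
KPC = 3.086e19     # kiloparsec [m]

def a_newton(M, R):
    """Newtonian acceleration aN = G M / R^2."""
    return G * M / R**2

def a_mond(M, R):
    """Deep-MOND effective acceleration aeff = (aN a0)^(1/2), valid for aN << a0."""
    return math.sqrt(a_newton(M, R) * A0)

def v_asymptotic(M):
    """Predicted flat rotation velocity Va = (G M a0)^(1/4)."""
    return (G * M * A0) ** 0.25

M = 1e11 * M_SUN              # assumed illustrative mass
print(v_asymptotic(M) / 1e3)  # ~200 km/s

# The circular speed V = sqrt(aeff * R) is the same at every radius,
# because the R dependence cancels: this is the flat rotation curve.
for R in (30 * KPC, 60 * KPC, 120 * KPC):
    print(round(math.sqrt(a_mond(M, R) * R) / 1e3))  # prints 200 each time
```

The independence of V from R in the loop is exactly the statement Va = (G M a0)^(1/4) made in the text.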

In contrast, Newtonian gravity depends on both M and R. Replacing R with a mass surface density variable S = M(R)/R², the Newtonian prediction becomes M S ~ Va⁴, which contrasts with the MOND prediction M ~ Va⁴. These relations are the theoretical basis in each case for the observed luminosity-linewidth relation L ~ Va⁴ (better known as the Tully-Fisher15 relation. Note that the observed value of the exponent is bandpass dependent, but does obtain the theoretical value of 4 in the near infrared16 which is considered the best indicator of the stellar mass. The systematic variation with bandpass is a very small effect compared to the difference between the two gravitational theories, and must be attributed to dust or stars under either theory.) To transform from theory to observation one requires the mass to light ratio Y: Y = M/L = S/s, where s is the surface brightness. Note that in the purely Newtonian case, M and L are very different functions of R, so Y is itself a strong function of R. We define Y to be the mass to light ratio within the optical radius R*, as this is the only radius which can be measured by observation. The global mass to light ratio would be very different (since M ~ R for R > R*, the total masses of dark haloes are not measurable), but the particular choice of definition does not matter: only the functional dependences are relevant. The predictions become Y²sL ~ Va⁴ for Newtonian gravity8,16 and YL ~ Va⁴ for MOND4.
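The two closing scalings follow in a few lines from the definitions in this paragraph; a sketch of the algebra, using the same symbols as the text:

```latex
% Newtonian: V_a^2 = GM/R_* at the optical radius, with S \equiv M/R_*^2
V_a^4 = \frac{G^2 M^2}{R_*^2} = G^2 M \left(\frac{M}{R_*^2}\right) = G^2 M S
% substituting M = Y L and S = Y s gives the Newtonian prediction
V_a^4 = G^2\, Y^2 s L
% MOND: V_a^4 = G M a_0 directly, so
V_a^4 = G a_0\, Y L
```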

The only sensible17 null hypothesis that can be constructed is that the mass to light ratio be roughly constant from galaxy to galaxy. Clearly distinct predictions thus emerge if galaxies of different surface brightnesses s are examined. In the Newtonian case there should be a family of parallel Tully-Fisher relations for each surface brightness. In the case of MOND, all galaxies should follow the same Tully-Fisher relation irrespective of surface brightness.

Recently it has been shown that extreme objects such as low surface brightness galaxies8,18 (those with central surface brightnesses fainter than s0 = 23 B mag arcsec⁻², corresponding to 40 L☉ pc⁻²) obey the same Tully-Fisher relation as do the high surface brightness galaxies (typically with s0 = 21.65 B mag arcsec⁻² or 140 L☉ pc⁻²) which originally15 defined it. Fig. 1 shows the luminosity-linewidth plane for galaxies ranging over a factor of 40 in surface brightness. Regardless of surface brightness, galaxies fall on the same Tully-Fisher relation.

The luminosity-linewidth (Tully-Fisher) relation for spiral galaxies over a large range in surface brightness. The B-band relation is shown; the same result is obtained in all bands8,18. Absolute magnitudes are measured from apparent magnitudes assuming H0 = 75 km/s/Mpc. Rotation velocities Va are directly proportional to observed 21 cm linewidths (measured as the full width at 20% of maximum) W20 corrected for inclination [divided by sin(i)]. Open symbols are an independent sample which defines42 the Tully-Fisher relation (solid line). The dotted lines show the expected shift of the Tully-Fisher relation for each step in surface brightness away from the canonical value s0 = 21.65 if the mass to light ratio remains constant. Low surface brightness galaxies are plotted as solid symbols, binned by surface brightness: red triangles: 22 < s0 < 23; green squares: 23 < s0 < 24; blue circles: s0 > 24. One galaxy with two independent measurements is connected by a line. This gives an indication of the typical uncertainty, which is sufficient to explain nearly all the scatter. Contrary to the clear expectation of a readily detectable shift as indicated by the dotted lines, galaxies fall on the same Tully-Fisher relation regardless of surface brightness, as predicted by MOND.

MOND predicts this behaviour in spite of the very different surface densities of low surface brightness galaxies. Understanding this observational fact in the framework of standard Newtonian gravity requires a subtle relation8 between surface brightness and the mass to light ratio to keep the product sY² constant. If we retain normal gravity and the dark matter hypothesis, this result is unavoidable, and the null hypothesis of similar mass to light ratios (which, together with an assumed constancy of surface brightness, is usually invoked to explain the Tully-Fisher relation) is strongly rejected. Instead, the current epoch surface brightness is tightly correlated with the properties of the dark matter halo, placing strict constraints on models of galaxy formation and evolution.

The mass to light ratios computed for both cases are shown as a function of surface brightness in Fig. 2. Fig. 2 is based solely on galaxies with full rotation curves19,20 and surface photometry, so Va and R* are directly measured. The correlation in the Newtonian case is very clear (Fig. 2a), confirming our inference8 from the Tully-Fisher relation. Such tight correlations are very rare in extragalactic astronomy, and the Y-s relation is probably the real cause of an inferred Y-L relation. The latter is much weaker because surface brightness and luminosity are only weakly correlated21-24.

The mass to light ratio Y (in M☉/L☉) determined with (a) Newtonian dynamics and (b) MOND, plotted as a function of central surface brightness. The mass determination for Newtonian dynamics is M = V² R*/G and for MOND is M = V⁴/(G a0). We have adopted as a consistent definition of the optical radius R* four scale lengths of the exponential optical disc. This is where discs tend to have edges, and contains essentially all the light21,22. The definition of R* makes a tremendous difference to the absolute value of the mass to light ratio in the Newtonian case, but makes no difference at all to the functional relation, which will be present regardless of the precise definition. These mass measurements are more sensitive to the inclination corrections than is the Tully-Fisher relation since there is a sin⁻²(i) term in the Newtonian case and one of sin⁻⁴(i) for MOND. It is thus very important that the inclination be accurately measured, and we have retained only galaxies which have adequate inclination determinations — error bars are plotted for a nominal uncertainty of 6 degrees. The sensitivity to inclination manifests itself as an increase in the scatter from (a) to (b). The derived mass is also very sensitive to the measured value of the asymptotic velocity itself, so we have used only those galaxies for which this can be taken directly from a full rotation curve19,20,42. We do not employ profile widths; the velocity measurements here are independent of those in Fig. 1. In both cases, we have subtracted off the known atomic gas mass19,20,42, so what remains is essentially only the stars and any dark matter that may exist. A very strong correlation (regression coefficient = 0.85) is apparent in (a): this is the mass to light ratio — surface brightness conspiracy. The slope is consistent (within the errors) with the theoretical expectation s ~ Y⁻² derived from the Tully-Fisher relation8.
At the highest surface brightnesses, the mass to light ratio is similar to that expected for the stellar population. At the faintest surface brightnesses, it has increased by a factor of nearly ten, indicating increasing dark matter domination within the optical disc as surface brightness decreases or a very systematic change in the stellar population, or both. In (b), the mass to light ratio scatters about a constant value of 2. This mean value, and the lack of a trend, is what is expected for stellar populations17,21-24.

The Y-s relation is not predicted by any dark matter theory25,26. It can not be purely an effect of the stellar mass to light ratio, since no other stellar population indicator such as color21-24 or metallicity27,28 is so tightly correlated with surface brightness. In principle it could be an effect of the stellar mass fraction, as the gas mass to light ratio follows a relation very similar to that of total mass to light ratio20. We correct for this in Fig. 2 by subtracting the known atomic gas mass so that Y refers only to the stars and any dark matter. We do not correct for molecular gas, as this has never been detected in low surface brightness galaxies to rather sensitive limits30 so the total mass of such gas is unimportant if current estimates31 of the variation of the CO to H2 conversion factor with metallicity are correct. These corrections have no discernible effect at all in Fig. 2 because the dark mass is totally dominant. It is thus very hard to see how any evolutionary effect in the luminous matter can be relevant.

In the case of MOND, the mass to light ratio directly reflects that of the stellar population once the correction for gas mass fraction is made. There is no trend of Y* with surface brightness (Fig. 2b), a more natural result and one which is consistent with our studies of the stellar populations of low surface brightness galaxies21-23. These suggest that Y* should be roughly constant or slightly declining as surface brightness decreases, with much scatter. The mean value Y* = 2 is also expected from stellar evolutionary theory17, which always gives a number 0 < Y* < 10 and usually gives 0.5 < Y* < 3 for disk galaxies. This is particularly striking since Y* is the only free parameter allowed to MOND, and the observed mean is very close to that directly observed29 in the Milky Way (1.7 ± 0.5 M☉/L☉).

The essence of the problem is illustrated by Fig. 3, which shows the rotation curves of two galaxies of essentially the same luminosity but vastly different surface brightnesses. Though the asymptotic velocities are the same (as required by the Tully-Fisher relation), the rotation curve of the low surface brightness galaxy rises less quickly than that of the high surface brightness galaxy as expected if the mass is distributed like the light. Indeed, the ratio of surface brightnesses is correct to explain the ratio of velocities at small radii if both galaxies have similar mass to light ratios. However, if this continues to be the case as R increases, the low surface brightness galaxy should reach a lower asymptotic velocity simply because R* must be larger for the same L. That this does not occur is the problem, and poses very significant systematic constraints on the dark matter distribution.

The rotation curves of two galaxies, one of high surface brightness11 (NGC 2403; open circles) and one of low surface brightness19 (UGC 128; filled circles). The two galaxies have very nearly the same asymptotic velocity, and hence luminosity, as required by the Tully-Fisher relation. However, they have central surface brightnesses which differ by a factor of 13. The lines give the contributions to the rotation curves of the various components. Green: luminous disk. Blue: dark matter halo. Red: luminous disk (stars and gas) with MOND. Solid lines refer to NGC 2403 and dotted lines to UGC 128. The fits for NGC 2403 are taken from ref. 11, for which the stars have Y* = 1.5 M☉/L☉. For UGC 128, no specific fit is made: the blue and green dotted lines are simply the NGC 2403 fits scaled by the ratio of disk scale lengths h. This provides a remarkably good description of the UGC 128 rotation curve and illustrates one possible manifestation of the fine tuning problem: if disks have similar Y, the halo parameters ρ0 and R0 must scale with the disk parameters s0 and h while conspiring to keep the product ρ0 R0² fixed at any given luminosity. Note also that the halo of NGC 2403 gives an adequate fit to the rotation curve of UGC 128. This is another possible manifestation of the fine tuning problem: all galaxies of the same luminosity have the same halo, with Y systematically varying with s0 so that Y* goes to zero as s0 goes to zero. Neither of these is exactly correct because the contribution of the gas can not be set to zero as is mathematically possible with the stars. This causes the resulting fine tuning problems to be even more complex, involving more parameters. Alternatively, the green dotted line is the rotation curve expected by MOND for a galaxy with the observed luminous mass distribution of UGC 128.

Satisfying the Tully-Fisher relation has led to some expectation that haloes all have the same density structure. This, the simplest possibility, is immediately ruled out. In order to obtain L ~ Va⁴ ~ M S, one might suppose that the mass surface density S is constant from galaxy to galaxy, irrespective of the luminous surface density s. This achieves the correct asymptotic velocity Va, but requires that the mass distribution, and hence the complete rotation curve, be essentially identical for all galaxies of the same luminosity. This is obviously not the case (Fig. 3), as the rotation curves of lower surface brightness galaxies rise much more gradually than those of higher surface brightness galaxies (also a prediction4 of MOND). It might be possible to have approximately constant density haloes if the highest surface brightness disks are maximal and the lowest minimal in their contribution to the inner parts of the rotation curves, but this then requires fine tuning of Y*, with this systematically decreasing with surface brightness.

The expected form of the halo mass distribution depends on the dominant form of dark matter. This could exist in three general categories: baryonic (e.g., MACHOs), hot (e.g., neutrinos), and cold exotic particles (e.g., WIMPs). The first two make no specific predictions. Baryonic dark matter candidates are most subject to direct detection, and most plausible candidates have been ruled out10, with remaining suggestions of necessity sounding increasingly contrived32. Hot dark matter is not relevant to the present problem. Even if neutrinos have a small mass, their velocities considerably exceed the escape velocities of the haloes of low mass galaxies where the problem is most severe. Cosmological simulations involving exotic cold dark matter33,34 have advanced to the point where predictions are being made about the density structure of haloes. These take the form33,34 ρ(R) = ρH/[R(R+RH)^b], where ρH characterises the halo density and RH its radius, with b ~ 2 to 3. The characteristic density depends on the mean density of the universe at the collapse epoch, and is generally expected to be greater for lower mass galaxies since these collapse first in such scenarios. This goes in the opposite sense of the observations, which show that low mass and low surface brightness galaxies are less, not more, dense. The observed behaviour is actually expected in scenarios which do not smooth on a particular mass scale and hence allow galaxies of the same mass to collapse at a variety of epochs25, but in this case the Tully-Fisher relation should not be universal. Worse, note that at small R (R < RH), ρ(R) ~ R⁻¹. It has already been noted32,35 that such a steep interior density distribution is completely inconsistent with the few (4) analysed observations of dwarf galaxies. Our data19,20 confirm and considerably extend this conclusion for 24 low surface brightness galaxies over a wide range in luminosity.
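The incompatibility of the simulated cusp with the data can be made concrete with a finite-difference check of the inner logarithmic slope; the parameter values below are placeholders chosen only for illustration:

```python
import math

def rho_cdm(R, rho_H=1.0, R_H=1.0, b=2.0):
    """Simulated-halo form rho(R) = rho_H / [R (R + R_H)^b]."""
    return rho_H / (R * (R + R_H) ** b)

def rho_core(R, rho0=1.0, R0=1.0):
    """Cored form rho(R) = rho0 / [1 + (R/R0)^2], as inferred observationally."""
    return rho0 / (1.0 + (R / R0) ** 2)

def log_slope(rho, R, eps=1e-4):
    """d ln(rho) / d ln(R) via a centered finite difference."""
    return (math.log(rho(R * (1 + eps))) - math.log(rho(R * (1 - eps)))) / (2 * eps)

# Deep inside the halo, the simulated profile diverges as R^-1 (slope -> -1),
# while the cored profile flattens out (slope -> 0).
print(log_slope(rho_cdm, 0.001))
print(log_slope(rho_core, 0.001))
```

The slope of −1 at small R is the cusp that the dwarf and low surface brightness data reject.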

The failure of the predicted exotic cold dark matter density distribution either rules out this form of dark matter, indicates some failing in the simulations (in spite of wide-spread consensus), or requires some mechanism to redistribute the mass. Feedback from star formation is usually invoked for the last of these, but this can not work for two reasons. First, an objection in principle: a small mass of stars and gas must have a dramatic impact on the distribution of the dominant dark mass, with which they can only interact gravitationally. More mass redistribution is required in less luminous galaxies since they start out denser but end up more diffuse; of course progressively less baryonic material is available to bring this about as luminosity declines. Second, an empirical objection: in this scenario, galaxies explode and gas is lost. However, progressively fainter and lower surface brightness galaxies, which need to suffer more severe explosions, are actually very gas rich.

Observationally, dark matter haloes are inferred to have density distributions1,2,11 with constant density cores, ρ(R) = ρ0/[1 + (R/R0)^g]. Here, ρ0 is the core density and R0 is the core size, with g ~ 2 being required to produce flat rotation curves. For g = 2, the rotation curve resulting from this mass distribution is V(R) = Va [1 − (R0/R) tan⁻¹(R/R0)]^(1/2), where the asymptotic velocity is Va = (4πG ρ0 R0²)^(1/2). To satisfy the Tully-Fisher relation, Va, and hence the product ρ0 R0², must be the same for all galaxies of the same luminosity. To decrease the rate of rise of the rotation curves as surface brightness decreases, R0 must increase. Together, these two require a fine tuning conspiracy to keep the product ρ0 R0² constant while R0 varies with the surface brightness at a given luminosity. Luminosity and surface brightness themselves are only weakly correlated, so there exists a wide range in one parameter at any fixed value of the other. Thus the structural properties of the invisible dark matter halo dictate those of the luminous disk, or vice versa. So, s and L give the essential information about the mass distribution without recourse to kinematic information.
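The conspiracy can be illustrated with two toy haloes sharing the same product ρ0 R0²; units and parameter values here are arbitrary assumptions for the sketch, not fits to any galaxy:

```python
import math

G = 1.0  # illustrative units; only the shapes and ratios matter here

def v_rot(R, rho0, R0):
    """Rotation curve of the cored halo rho(R) = rho0 / [1 + (R/R0)^2]:
    V(R) = Va * sqrt(1 - (R0/R) * arctan(R/R0)), Va = sqrt(4*pi*G*rho0*R0^2)."""
    Va = math.sqrt(4 * math.pi * G * rho0 * R0**2)
    return Va * math.sqrt(1 - (R0 / R) * math.atan(R / R0))

# Same rho0 * R0^2 (hence same Va, as Tully-Fisher requires at fixed luminosity),
# but different core sizes:
compact = dict(rho0=4.0, R0=1.0)  # high surface brightness analogue
diffuse = dict(rho0=1.0, R0=2.0)  # low surface brightness analogue

for halo in (compact, diffuse):
    print([round(v_rot(R, **halo), 3) for R in (0.5, 2.0, 50.0)])
# The inner velocities differ by nearly a factor of two, yet both curves
# approach the same asymptotic Va: this is the fine tuning the text describes.
```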

A strict s-ρ0-R0 relation is rigorously obeyed only if the haloes are spherical and dominate throughout. This is probably a good approximation for low surface brightness galaxies but may not be for those of the highest surface brightness. However, a significant non-halo contribution can at best replace one fine tuning problem with another (e.g., surface brightness being strongly correlated with the stellar population mass to light ratio instead of halo core density) and generally causes additional conspiracies.

There are two perspectives for interpreting these relations, with the preferred perspective depending strongly on the philosophical attitude one has towards empirical and theoretical knowledge. One view is that these are real relations which galaxies and their haloes obey. As such, they provide a positive link between models of galaxy formation and evolution and reality.

The other view is that this list of fine tuning requirements makes it rather unattractive to maintain the dark matter hypothesis. MOND provides an empirically more natural explanation for these observations. In addition to the Tully-Fisher relation, MOND correctly predicts the systematics of the shapes of the rotation curves of low surface brightness galaxies19,20 and fits the specific case of UGC 128 (Fig. 3). Low surface brightness galaxies were stipulated4 to be a stringent test of the theory because they should be well into the regime a < a0. This is now observed to be true, and to the limit of observational accuracy the predictions of MOND are confirmed. The critical acceleration scale a0 is apparently universal, so there is a single force law acting in galactic disks for which MOND provides the correct description. The cause of this could be either a particular dark matter distribution36 or a real modification of gravity. The former is difficult to arrange, and a single force law strongly supports the latter hypothesis since in principle the dark matter could have any number of distributions which would give rise to a variety of effective force laws. Even if MOND is not correct, it is essential to understand why it so closely describes the observations. Though the data can not exclude Newtonian dynamics, with a working empirical alternative (really an extension) at hand, we would not hesitate to reject as incomplete any less venerable hypothesis.

Nevertheless, MOND itself remains incomplete as a theory, being more of a Kepler’s Law for galaxies. It provides only an empirical description of kinematic data. While successful for disk galaxies, it was thought to fail in clusters of galaxies37. Recently it has been recognized that there exist two missing mass problems in galaxy clusters, one of which is now solved38: most of the luminous matter is in X-ray gas, not galaxies. This vastly improves the consistency of MOND with cluster dynamics39. The problem with the theory remains a reconciliation with Relativity and thereby standard cosmology (which is itself in considerable difficulty38,40), and a lack of any prediction about gravitational lensing41. These are theoretical problems which need to be more widely addressed in light of MOND’s empirical success.

ACKNOWLEDGEMENTS. We thank R. Sanders and M. Milgrom for clarifying aspects of a theory with which we were previously unfamiliar. SSM is grateful to the Kapteyn Astronomical Institute for enormous hospitality during visits when much of this work was done. [Note added in 2020: this work was supported by a cooperative grant funded by the EU and would no longer be possible thanks to Brexit.]

REFERENCES

  1. Rubin, V. C. Science 220, 1339-1344 (1983).
  2. Sancisi, R. & van Albada, T. S. in Dark Matter in the Universe, IAU Symp. No. 117, (eds. Knapp, G. & Kormendy, J.) 67-80 (Reidel, Dordrecht, 1987).
  3. Milgrom, M. Astrophys. J. 270, 365-370 (1983).
  4. Milgrom, M. Astrophys. J. 270, 371-383 (1983).
  5. Bekenstein, J. D. & Milgrom, M. Astrophys. J. 286, 7-14 (1984).
  6. Mannheim, P. D., & Kazanas, D. Astrophys. J. 342, 635-651 (1989).
  7. Sanders, R. H. Astron. Astrophys. Rev. 2, 1-28 (1990).
  8. Zwaan, M.A., van der Hulst, J. M., de Blok, W. J. G. & McGaugh, S. S. Mon. Not. R. astr. Soc., 273, L35-L38, (1995).
  9. Zaritsky, D. & White, S. D. M. Astrophys. J. 435, 599-610 (1994).
  10. Carr, B. Ann. Rev. Astr. Astrophys., 32, 531-590 (1994).
  11. Begeman, K. G., Broeils, A. H. & Sanders, R. H. Mon. Not. R. astr. Soc. 249, 523-537 (1991).
  12. Kent, S. M. Astr. J. 93, 816-832 (1987).
  13. Milgrom, M. Astrophys. J. 333, 689-693 (1988).
  14. Milgrom, M. & Braun, E. Astrophys. J. 334, 130-134 (1988).
  15. Tully, R. B., & Fisher, J. R. Astr. Astrophys., 54, 661-673 (1977).
  16. Aaronson, M., Huchra, J., & Mould, J. Astrophys. J. 229, 1-17 (1979).
  17. Larson, R. B. & Tinsley, B. M. Astrophys. J. 219, 48-58 (1978).
  18. Sprayberry, D., Bernstein, G. M., Impey, C. D. & Bothun, G. D. Astrophys. J. 438, 72-82 (1995).
  19. van der Hulst, J. M., Skillman, E. D., Smith, T. R., Bothun, G. D., McGaugh, S. S. & de Blok, W. J. G. Astr. J. 106, 548-559 (1993).
  20. de Blok, W. J. G., McGaugh, S. S., & van der Hulst, J. M. Mon. Not. R. astr. Soc. (submitted).
  21. McGaugh, S. S., & Bothun, G. D. Astr. J. 107, 530-542 (1994).
  22. de Blok, W. J. G., van der Hulst, J. M., & Bothun, G. D. Mon. Not. R. astr. Soc. 274, 235-259 (1995).
  23. Ronnback, J., & Bergvall, N. Astr. Astrophys., 292, 360-378 (1994).
  24. de Jong, R. S. Ph.D. thesis, University of Groningen (1995).
  25. Mo, H. J., McGaugh, S. S. & Bothun, G. D. Mon. Not. R. astr. Soc. 267, 129-140 (1994).
  26. Dalcanton, J. J., Spergel, D. N., Summers, F. J. Astrophys. J., (in press).
  27. McGaugh, S. S. Astrophys. J. 426, 135-149 (1994).
  28. Ronnback, J., & Bergvall, N. Astr. Astrophys., 302, 353-359 (1995).
  29. Kuijken, K. & Gilmore, G. Mon. Not. R. astr. Soc., 239, 605-649 (1989).
  30. Schombert, J. M., Bothun, G. D., Impey, C. D., & Mundy, L. G. Astron. J., 100, 1523-1529 (1990).
  31. Wilson, C. D. Astrophys. J. 448, L97-L100 (1995).
  32. Moore, B. Nature 370, 629-631 (1994).
  33. Navarro, J. F., Frenk, C. S., & White, S. D. M. Mon. Not. R. astr. Soc., 275, 720-728 (1995).
  34. Cole, S. & Lacey, C. Mon. Not. R. astr. Soc., in press.
  35. Flores, R. A. & Primack, J. R. Astrophys. J. 427, 1-4 (1994).
  36. Sanders, R. H., & Begeman, K. G. Mon. Not. R. astr. Soc. 266, 360-366 (1994).
  37. The, L. S., & White, S. D. M. Astron. J., 95, 1642-1651 (1988).
  38. White, S. D. M., Navarro, J. F., Evrard, A. E. & Frenk, C. S. Nature 366, 429-433 (1993).
  39. Sanders, R. H. Astron. Astrophys. 284, L31-L34 (1994).
  40. Bolte, M., & Hogan, C. J. Nature 376, 399-402 (1995).
  41. Bekenstein, J. D. & Sanders, R. H. Astrophys. J. 429, 480-490 (1994).
  42. Broeils, A. H., Ph.D. thesis, Univ. of Groningen (1992).

Oh… you don’t want to look in there

This post is a recent conversation with David Garofalo for his blog.


Today we talk to Dr. Stacy McGaugh, Chair of the Astronomy Department at Case Western Reserve University.

David: Hi Stacy. You had set out to disprove MOND and instead found evidence to support it. That sounds like the poster child for how science works. Was praise forthcoming?

Stacy: In the late 1980s and into the 1990s, I set out to try to understand low surface brightness galaxies. These are diffuse systems of stars and gas that rotate like the familiar bright spirals, but whose stars are much more spread out. Why? How did these things come to be? Why were they different from brighter galaxies? How could we explain their properties? These were the problems I started out working on that inadvertently set me on a collision course with MOND.

I did not set out to prove or disprove either MOND or dark matter. I was not really even aware of MOND at that time. I had heard of it only on a couple of occasions, but I hadn’t paid any attention, and didn’t really know anything about it. Why would I bother? It was already well established that there had to be dark matter.

I worked to develop our understanding of low surface brightness galaxies in the context of dark matter. Their blue colors, low metallicities, high gas fractions, and overall diffuse nature could be explained if they had formed in dark matter halos that are themselves lower than average density: they occupy the low concentration side of the distribution of dark matter halos at a given mass. I found this interpretation quite satisfactory, so it gave me no cause to doubt dark matter to that point.

This picture made two genuine predictions that had yet to be tested. First, low surface brightness galaxies should be less strongly clustered than brighter galaxies. Second, having their mass spread over a larger area, they should shift off of the Tully-Fisher relation defined by denser galaxies. The first prediction came true, and for a period I was jubilant that we had made an important new contribution to our understanding of both galaxies and dark matter. The second prediction failed badly: low surface brightness galaxies adhere to the same Tully-Fisher relation that other galaxies follow.

I tried desperately to understand the failure of the second prediction in terms of dark matter. I tried what seemed like a thousand ways to explain this, but ultimately they were all tautological: I could only explain it if I assumed the answer from the start. The adherence of low surface brightness galaxies to the Tully-Fisher relation poses a serious fine-tuning problem: the distribution of dark matter must be adjusted to exactly counterbalance that of the visible matter so as not to leave any residuals. This makes no sense, and anyone who claims it does is not thinking clearly.

It was in this crisis of comprehension in which I became aware that MOND predicted exactly what I was seeing. No fine-tuning was required. Low surface brightness galaxies followed the same Tully-Fisher relation as other galaxies because the modified force law stipulates that they must. It was only at this point (in the mid-’90s) at which I started to take MOND seriously. If it had got this prediction right, what else did it predict?

I was still convinced that the right answer had to be dark matter. There was, after all, so much evidence for it. So this one prediction must be a fluke; surely it would fail the next test. That was not what happened: MOND passed test after test after test, successfully predicting observations both basic and detailed that dark matter theory got wrong or did not even address. It was only after this experience that I realized that what I thought was evidence for dark matter was really just evidence that something was wrong: the data cannot be explained with ordinary gravity without invisible mass. The data – and here I mean ALL the data – were mostly ambiguous: they did not clearly distinguish whether the problem was with mass we couldn’t see or with the underlying equations from which we inferred the need for dark matter.

So to get back to your original question, yes – this is how science should work. I hadn’t set out to test MOND, but I had inadvertently performed exactly the right experiment for that purpose. MOND had its predictions come true where the predictions of other theories did not: both my own theory and those of others who were working in the context of dark matter. We got it wrong while MOND got it right. That led me to change my mind: I had been wrong to be sure the answer had to be dark matter, and to be so quick to dismiss MOND. Admitting this was the most difficult struggle I ever faced in my career.

David: From the perspective of dark matter, how does one understand MOND’s success?

Stacy: One does not.

That the predictions of MOND should come true in a universe dominated by dark matter makes no sense.

Before I became aware of MOND, I spent lots of time trying to come up with dark matter-based explanations for what I was seeing. It didn’t work. Since then, I have continued to search for a viable explanation with dark matter. I have not been successful. Others have claimed such success, but whenever I look at their work, it always seems that what they assert to be a great success is just a specific elaboration of a model I had already considered and rejected as obviously unworkable. The difference boils down to Occam’s razor. If you give dark matter theory enough free parameters, it can be adjusted to “predict” pretty much anything. But the best we can hope to do with dark matter theory is to retroactively explain what MOND successfully predicted in advance. Why should we be impressed by that?

David: Does MOND fail in clusters?

Stacy: Yes and no: there are multiple tests in clusters. MOND passes some and flunks others – as does dark matter.

The most famous test is the baryon fraction. This should be one in MOND – all the mass is normal baryonic matter. With dark matter, it should be the cosmic ratio of normal to dark matter (about 1:5).

MOND fails this test: it explains most of the discrepancy in clusters, but not all of it. The dark matter picture does somewhat better here, as the baryon fraction is close to the cosmic expectation — at least for the richest clusters of galaxies. In smaller clusters and groups of galaxies, the normal matter content falls short of the cosmic value. So both theories suffer a “missing baryon” problem: MOND in rich clusters; dark matter in everything smaller.

Another test is the mass-temperature relation. Both theories predict a relation between the mass of a cluster and the temperature of the gas it contains, but they predict different slopes for this relation. MOND gets the slope right but the amplitude wrong, leading to the missing baryon problem above. Dark matter gets the amplitude right for the most massive clusters, but gets the slope wrong – which leads to it having a missing baryon problem for systems smaller than the largest clusters.

There are other tests. Clusters continue to merge; the collision velocity of merging clusters is predicted to be higher in MOND than with dark matter. For example, the famous bullet cluster, which is often cited as a contradiction to MOND, has a collision speed that is practically impossible with dark matter: there just isn’t enough time for the two components of the bullet to accelerate up to the observed relative speed if they fall together under the influence of normal gravity and the required amount of dark mass. People have argued over the severity of this perplexing problem, but the high collision speed happens quite naturally in MOND as a consequence of its greater effective force of attraction. So, taken at face value, the bullet cluster both confirms and refutes both theories!

I could go on… one expects clusters to form earlier and become more massive in MOND than in dark matter. There are some indications that this is the case – the highest redshift clusters came as a surprise to conventional structure formation theory – but the relative numbers of clusters as a function of mass seem to agree well with current expectations with dark matter. So clusters are a mixed bag.

More generally, there is a widespread myth that MOND fits rotation curves, but gets nothing else right. This is what I expected to find when I started fact checking, but the opposite is true. MOND explains a huge variety of data well. The presumptive superiority of dark matter is just that – a presumption.

David: At a physics colloquium two decades ago, Vera Rubin described how theorists were willing and eager to explain her data to her. At an astronomy colloquium a few years later, you echoed that sentiment in relation to your data on velocity curves. One concludes that theorists are uniquely insightful and generous people. Is there anyone you would like to thank for putting you straight? 
 
Stacy:  So they perceive themselves to be.

MOND has made many successful a priori predictions. This is the golden standard of the scientific method. If there is another explanation for it, I’d like to know what it is.

As your question supposes, many theorists have offered such explanations. At most one of them can be correct. I have yet to hear a satisfactory explanation.


David: What are MOND people working on these days? 
 
Stacy: Any problem that is interesting in extragalactic astronomy is interesting in the context of MOND. Outstanding questions include planes of satellite dwarf galaxies, clusters of galaxies, the formation of large scale structure, and the microwave background. MOND-specific topics include the precise value of the MOND acceleration constant, predicting the velocity dispersions of dwarf galaxies, and the search for the predicted external field effect, which is a unique signature of MOND.

The phrasing of this question raises a sociological issue. I don’t know what a “MOND person” is. Before now, I have only heard it used as a pejorative.

I am a scientist who has worked on many topics. MOND is just one of them. Does that make me a “MOND person”? I have also worked on dark matter, so am I also a “dark matter person”? Are these mutually exclusive?

I have attended conferences where I have heard people say ‘“MOND people” do this’ or ‘“MOND people” fail to do that.’ Never does the speaker of these words specify who they’re talking about: “MOND people” are a nameless Other. In all cases, I am more familiar with the people and the research they pretend to describe, but in no way do I recognize what they’re talking about. It is just a way of saying “Those People” are Bad.

There are many experts on dark matter in the world. I am one of them. There are rather fewer experts on MOND. I am also one of them. Every one of these “MOND people” is also an expert on dark matter. This situation is not reciprocated: many experts on dark matter are shockingly ignorant about MOND. I was once guilty of that myself, but realized that ignorance is not a sound basis for scientific judgement.

David: Are you tired of getting these types of questions? 
 
Stacy: Yes and no.

No, in that these are interesting questions about fundamental science. That is always fun to talk about.

Yes, in that I find myself having the same arguments over and over again, usually with scientists who remain trapped in the misconceptions I suffered myself a quarter century ago, but whose minds are closed to ideas that threaten their sacred cows. If dark matter is a real, physical substance, then show me a piece already.

Big Trouble in a Deep Void


The following is a guest post by Indranil Banik, Moritz Haslbauer, and Pavel Kroupa (bios at end) based on their new paper

Modifying gravity to save cosmology

Cosmology is currently in a major crisis because of many severe tensions, the most serious and well-known being that local observations of how quickly the Universe is expanding (the so-called ‘Hubble constant’) exceed the prediction of the standard cosmological model, ΛCDM. This prediction is based on the cosmic microwave background (CMB), the most ancient light we can observe – which is generally thought to have been emitted about 400,000 years after the Big Bang. For ΛCDM to fit the pattern of fluctuations observed in the CMB by the Planck satellite and other experiments, the Hubble constant must have a particular value of 67.4 ± 0.5 km/s/Mpc. Local measurements are nearly all above this ‘Planck value’, but are consistent with each other. In our paper, we use a local value of 73.8 ± 1.1 km/s/Mpc using a combination of supernovae and gravitationally lensed quasars, two particularly precise yet independent techniques.
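For a sense of scale, the size of this tension is easy to check with a couple of lines of arithmetic (a back-of-the-envelope illustration assuming Gaussian, independent errors – not the formal analysis of the paper):

```python
from math import sqrt

# Planck (CMB) vs. local (supernovae + lensed quasars) Hubble constants, km/s/Mpc
h0_planck, err_planck = 67.4, 0.5
h0_local, err_local = 73.8, 1.1

# Difference in units of the combined uncertainty
sigma = (h0_local - h0_planck) / sqrt(err_planck**2 + err_local**2)
print(f"Hubble tension: {sigma:.1f} sigma")  # about 5.3 sigma
```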

This unexpectedly rapid local expansion of the Universe could be due to us residing in a huge underdense region, or void. However, a void wide and deep enough to explain the Hubble tension is not possible in ΛCDM, which is built on Einstein’s theory of gravity, General Relativity. Still, there is quite strong evidence that we are indeed living within a large void with a radius of about 300 Mpc, or one billion light years. This evidence comes from many surveys covering the whole electromagnetic spectrum, from radio to X-rays. The most compelling evidence comes from analysis of galaxy number counts in the near-infrared, giving the void its name of the Keenan-Barger-Cowie (KBC) void. Gravity from the denser matter outside the void would pull outwards more strongly than the matter inside pulls back, making the Universe appear to expand faster than it actually does for an observer inside the void. This ‘Hubble bubble’ scenario (depicted in Figure 1) could solve the Hubble tension, a possibility considered – and rejected – in several previous works (e.g. Kenworthy+ 2019). We will return to their objections against this idea.

Figure 1: Illustration of the Universe’s large scale structure. The darker regions are voids, and the bright dots represent galaxies. The arrows show how gravity from surrounding denser regions pulls outwards on galaxies in a void. If we were living in such a void (as indicated by the yellow star), the Universe would expand faster locally than it does on average. This could explain the Hubble tension. Credit: Technology Review

One of the main objections seemed to be that since such a large and deep void is incompatible with ΛCDM, it can’t exist. This is a common way of thinking, but the problem with it was clear to us from a very early stage. The first part of this logic is sound – assuming General Relativity, a hot Big Bang, and that the state of the Universe at early times is apparent in the CMB (i.e. it was flat and almost homogeneous then), we are led to the standard flat ΛCDM model. By studying the largest suitable simulation of this model (called MXXL), we found that it should be completely impossible to find ourselves inside a void with the observed size and depth (or fractional underdensity) of the KBC void – this possibility can be rejected with more confidence than the discovery of the Higgs boson when first announced. We therefore applied one of the leading alternative gravity theories called Milgromian Dynamics (MOND), a controversial idea developed in the early 1980s by Israeli physicist Mordehai Milgrom. We used MOND (explained in a simple way here) to evolve a small density fluctuation forwards from early times, studying if 13 billion years later it fits the density and velocity field of the local Universe. Before describing our results, we briefly introduce MOND and explain how to use it in a potentially viable cosmological framework. Astronomers often assume MOND cannot be extended to cosmological scales (typically >10 Mpc), which is probably true without some auxiliary assumptions. This is also the case for General Relativity, though in that case the scale where auxiliary assumptions become crucial is only a few kpc, namely in galaxies.

MOND was originally designed to explain why galaxies rotate faster in their outskirts than they should if one applies General Relativity to their luminous matter distribution. This discrepancy gave rise to the idea of dark matter halos around individual galaxies. For dark matter to cluster on such scales, it would have to be ‘cold’, or equivalently consist of rather heavy particles (above a few thousand eV/c2, or a millionth of a proton mass). Any lighter and the gravity from galaxies could not hold on to the dark matter. MOND assumes these speculative and unexplained cold dark matter haloes do not exist – the need for them is after all dependent on the validity of General Relativity. In MOND once the gravity from any object gets down to a certain very low threshold called a0, it declines more gradually with increasing distance, following an inverse distance law instead of the usual inverse square law. MOND has successfully predicted many galaxy rotation curves, highlighting some remarkable correlations with their visible mass. This is unexpected if they mostly consist of invisible dark matter with quite different properties to visible mass. The Local Group satellite galaxy planes also strongly favour MOND over ΛCDM, as explained using the logic of Figure 2 and in this YouTube video.
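The flat rotation curves follow directly from this force law. A minimal sketch (assuming, for illustration only, a point mass of visible matter and the commonly used ‘simple’ interpolating function – neither choice is unique to MOND):

```python
from math import sqrt

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
a0 = 1.2e-10    # MOND acceleration constant, m/s^2
M = 2.0e41      # ~1e11 solar masses of visible matter, kg
kpc = 3.086e19  # one kiloparsec in metres

def v_circ(r, mond=True):
    """Circular speed (m/s) at radius r (m) around a point mass M."""
    gN = G * M / r**2  # Newtonian gravity
    if mond:
        # 'simple' interpolating function mu(x) = x/(1+x):
        # solving g * mu(g/a0) = gN gives a quadratic in g
        g = 0.5 * (gN + sqrt(gN**2 + 4 * gN * a0))
    else:
        g = gN
    return sqrt(g * r)

for r_kpc in (15, 30, 60):
    v_N = v_circ(r_kpc * kpc, mond=False) / 1e3
    v_M = v_circ(r_kpc * kpc, mond=True) / 1e3
    print(f"r = {r_kpc:2d} kpc: Newton {v_N:5.0f} km/s, MOND {v_M:5.0f} km/s")
# The Newtonian speed falls as 1/sqrt(r); the MOND speed flattens
# toward v_flat = (G*M*a0)**0.25, about 200 km/s for these numbers.
```

Doubling the radius from 30 to 60 kpc changes the MOND speed by only a few percent, while the Newtonian speed drops by ~30% – the essence of the flat rotation curve prediction.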

Figure 2: the satellite galaxies of the Milky Way and Andromeda mostly lie within thin planes. These are difficult to form unless the galaxies in them are tidal dwarfs born from the interaction of two major galaxies. Since tidal dwarfs should be free of dark matter due to the way they form, the satellites in the satellite planes should have rather weak self-gravity in ΛCDM. This is not the case as measured from their high internal velocity dispersions. So the extra gravity needed to hold galaxies together should not come from dark matter that can in principle be separated from the visible.

To extend MOND to cosmology, we used what we call the νHDM framework (with ν pronounced “nu”), originally proposed by Angus (2009). In this model, the cold dark matter of ΛCDM is replaced by the same total mass in sterile neutrinos with a mass of only 11 eV/c2, almost a billion times lighter than a proton. Their low mass means they would not clump together in galaxies, consistent with the original idea of MOND to explain galaxies with only their visible mass. This makes the extra collisionless matter ‘hot’, hence the name of the model. But this collisionless matter would exist inside galaxy clusters, helping to explain unusual configurations like the Bullet Cluster and the unexpectedly strong gravity (even in MOND) in quieter clusters. Considering the universe as a whole, νHDM has the same overall matter content as ΛCDM. This makes the overall expansion history of the universe very similar in both models, so both can explain the amounts of deuterium and helium produced in the first few minutes after the Big Bang. They should also yield similar fluctuations in the CMB because both models contain the same amount of dark matter. These fluctuations would get somewhat blurred by sterile neutrinos of such a low mass due to their rather fast motion in the early Universe. However, it has been demonstrated that Planck data are consistent with dark matter particles more massive than 10 eV/c2. Crucially, we showed that the density fluctuations evident in the CMB typically yield a gravitational field strength of 21 a0 (correcting an earlier erroneous estimate of 570 a0 in the above paper), making the gravitational physics nearly identical to General Relativity. Clearly, the main lines of early Universe evidence used to argue in favour of ΛCDM are not sufficiently unique to distinguish it from νHDM (Angus 2009).

The models nonetheless behave very differently later on. We estimated that for redshifts below about 50 (when the Universe is older than about 50 million years), the gravity would typically fall below a0 thanks to the expansion of the Universe (the CMB comes from a redshift of 1100). After this ‘MOND moment’, both the ordinary matter and the sterile neutrinos would clump on large scales just like in ΛCDM, but there would also be the extra gravity from MOND. This would cause structures to grow much faster (Figure 3), allowing much wider and deeper voids.


Figure 3: Evolution of the density contrast within a 300 co-moving Mpc sphere in different Newtonian (red) and MOND (blue) models, shown as a function of the Universe’s size relative to its present size (this changes almost linearly with time). Notice the much faster structure growth in MOND. The solid blue line uses a time-independent external field on the void, while the dot-dashed blue line shows the effect of a stronger external field in the past. This requires a deeper initial void to match present-day observations.

We used this basic framework to set up a dynamical model of the void. By making various approximations and trying different initial density profiles, we were able to simultaneously fit the apparent local Hubble constant, the observed density profile of the KBC void, and many other observables like the acceleration parameter, which we come to below. We also confirmed previous results that the same observables rule out standard cosmology at 7.09σ significance. This is much more than the typical threshold of 5σ used to claim a discovery in cases like the Higgs boson, where the results agree with prior expectations.

One objection to our model was that a large local void would cause the apparent expansion of the Universe to accelerate at late times. Equivalently, observations that go beyond the void should see a standard Planck cosmology, leading to a step-like behaviour near the void edge. At stake is the so-called acceleration parameter q0 (which we defined oppositely to convention to correct a historical error). In ΛCDM, we expect q0 = 0.55, while in general much higher values are expected in a Hubble bubble scenario. The objection of Kenworthy+ (2019) was that since the observed q0 is close to 0.55, there is no room for a void. However, their data analysis fixed q0 to the ΛCDM expectation, thereby removing any hope of discovering a deviation that might be caused by a local void. Other analyses (e.g. Camarena & Marra 2020b) which do not make such a theory-motivated assumption find q0 = 1.08, which is quite consistent with our best-fitting model (Figure 4). We also discussed other objections to a large local void, for instance the Wu & Huterer (2017) paper which did not consider a sufficiently large void, forcing the authors to consider a much deeper void to try and solve the Hubble tension. This led to some serious observational inconsistencies, but a larger and shallower void like the observed KBC void seems to explain the data nicely. In fact, combining all the constraints we applied to our model, the overall tension is only 2.53σ, meaning the data have a 1.14% chance of arising if ours were the correct model. The actual observations are thus not the most likely consequence of our model, but could plausibly arise if it were correct. Given also the high likelihood that some if not all of the observational errors we took from publications are underestimates, this is actually a very good level of consistency.

Figure 4: The predicted local Hubble constant (x-axis) and acceleration parameter (y-axis) as measured with local supernovae (black dot, with red error ellipses). Our best-fitting models with different initial void density profiles (blue symbols) can easily explain the observations. However, there is significant tension with the prediction of ΛCDM based on parameters needed to fit Planck observations of the CMB (green dot). In particular, local observations favour a higher acceleration parameter, suggestive of a local void.

Unlike other attempts to solve the Hubble tension, ours is unique in using an already existing theory (MOND) developed for a different reason (galaxy rotation curves). The use of unseen collisionless matter made of hypothetical sterile neutrinos is still required to explain the properties of galaxy clusters, which otherwise do not sit well with MOND. In addition, these neutrinos provide an easy way to explain the CMB and background expansion history, though recently Skordis & Zlosnik (2020) showed that this is possible in MOND with only ordinary matter. In any case, MOND is a theory of gravity, while dark matter is a hypothesis that more matter exists than meets the eye. The ideas could both be right, and should be tested separately.

A dark matter-MOND hybrid thus appears to be a very promising way to resolve the current crisis in cosmology. Still, more work is required to construct a fully-fledged relativistic MOND theory capable of addressing cosmology. This could build on the theory proposed by Skordis & Zlosnik (2019) in which gravitational waves travel at the speed of light, which was considered to be a major difficulty for MOND. We argued that such a theory would enhance structure formation to the required extent under a wide range of plausible theoretical assumptions, but this needs to be shown explicitly starting from a relativistic MOND theory. Cosmological structure formation simulations are certainly required in this scenario – these are currently under way in Bonn. Further observations would also help greatly, especially of the matter density in the outskirts of the KBC void at distances of about 500 Mpc. This could hold vital clues to how quickly the void has grown, helping to pin down the behaviour of the sought-after MOND theory.

There is now a very real prospect of obtaining a single theory that works across all astronomical scales, from the tiniest dwarf galaxies up to the largest structures in the Universe & its overall expansion rate, and from a few seconds after the birth of the Universe until today. Rather than argue whether this theory looks more like MOND or standard cosmology, what we should really do is combine the best elements of both, paying careful attention to all observations.


Authors

Indranil Banik is a Humboldt postdoctoral fellow in the Helmholtz Institute for Radiation and Nuclear Physics (HISKP) at the University of Bonn, Germany. He did his undergraduate and master’s degrees at Trinity College, Cambridge, and his PhD at Saint Andrews under Hongsheng Zhao. His research focuses on testing whether gravity continues to follow the Newtonian inverse square law at the low accelerations typical of galactic outskirts, with MOND being the best-developed alternative.

Moritz Haslbauer is a PhD student at the Max Planck Institute for Radio Astronomy (MPIfR) in Bonn. He obtained his undergraduate degree from the University of Vienna and his master’s from the University of Bonn. He works on the formation and evolution of galaxies and their distribution in the local Universe in order to test different cosmological models and gravitational theories. Prof. Pavel Kroupa is his PhD supervisor.

Pavel Kroupa is a professor at the University of Bonn and professorem hospitem at Charles University in Prague. He went to school in Germany and South Africa, studied physics in Perth, Australia, and obtained his PhD at Trinity College, Cambridge, UK. He researches stellar populations and their dynamics as well as the dark matter problem, therewith testing gravitational theories and cosmological models.

Link to the published science paper.

YouTube video on the paper

Contact: ibanik@astro.uni-bonn.de.

Indranil Banik’s YouTube channel.

A lengthy personal experience with experimental searches for WIMPs


This post is adapted from a web page I wrote in 2008, before starting this blog. It covers some ground that I guess is now historic about things that were known about WIMPs from their beginnings in the 1980s, and the experimental searches for them. In part, I was just trying to keep track of experimental limits, with updates added as noted since the first writing. This is motivated now by some troll on Twitter trying to gaslight people into believing there were no predictions for WIMPs prior to the discovery of the Higgs boson. Contrary to this assertion, the field had already gone through many generations of predictions, with the theorists moving the goal posts every time a prediction was excluded. I have colleagues involved in WIMP searches who have left that field in disgust at having the goal posts moved on them: what good are the experimental searches if, every time they reach the promised land, they’re simply told the promised land is over the next horizon? You experimentalists just keep your noses to the grindstone, and don’t bother the Big Brains with any inconvenient questions!

We were already very far down this path in 2008 – so far down it, I called it the express elevator to hell, since the predicted interaction cross-section kept decreasing to evade experimental limits. Since that time, theorists have added sideways in mass to their evasion tactics, with some advocating for “light” dark matter (less in mass than the 2 GeV Lee-Weinberg limit for the minimum WIMP mass) while others advocate for undetectably high mass WIMPzillas (because there’s a lot of unexplored if unexpected parameter space at high mass to roam around in before hitting the unitarity bound. Theorists love to go free range.)

These evasion tactics had become ridiculous well before the Higgs was discovered in 2012. Many people don’t seem to have memories that long, so let’s review. Text in normal font was written in 2008; later additions are italicized.

Seeking WIMPs in all the wrong places

This article has been updated many times since it was first written in 2008, at which time we were already many years down the path it describes.

The Need for Dark Matter
Extragalactic systems like spiral galaxies and clusters of galaxies exhibit mass discrepancies. The application of Newton’s Law of Gravity to the observed stars and gas fails to explain the rapid observed motions. This leads to the inference that some form of invisible mass – dark matter – dominates the dynamics of the universe.

WIMPs
If asked what the dark matter is, most scientists working in the field will respond honestly that we have no idea. There are many possible candidates. Some, like MACHOs (Massive Compact Halo Objects, perhaps brown dwarfs) have essentially been ruled out. However, in our heart of hearts there is a huge odds-on favorite: the WIMP.

WIMP stands for Weakly Interacting Massive Particle. This is an entire class of new fundamental particles that emerge from supersymmetry. Supersymmetry (SUSY) is a theoretical notion by which known elementary particles have supersymmetric partner particles. This notion is not part of the highly successful Standard Model of particle physics, but might exist provided that the Higgs boson exists. In the so-called Minimal Supersymmetric Standard Model (MSSM), which was hypothesized to explain the hierarchy problem (i.e., why do the elementary particles have the various masses that they do), the lightest stable supersymmetric particle is the neutralino. This is the WIMP that presumably makes up the dark matter.

2020 update: the Higgs does indeed exist. Unfortunately, it is too normal. That is, it fits perfectly well with the Standard Model without any need for SUSY. Indeed, it is so normal that MSSM is pretty much excluded. One can persist with more complicated theories (as always) but to date SUSY has flunked every experimental test, including the “golden test” of the decay of the Bs meson. Never heard of the golden test? The theorists were all about it until SUSY flunked it; now they never seem to mention it.

Cosmology, meet particle physics
There is a confluence of history in the development of previously distinct fields. The need for cosmological dark matter became clear in the 1980s, the same time that MSSM was hypothesized to solve the hierarchy problem in particle physics. Moreover, it was quickly realized that the cosmological dark matter could not be normal (“baryonic”) matter. New fundamental particles therefore seemed a natural fit.

The cosmic craving for CDM
There are two cosmological reasons why we need non-baryonic cold dark matter (CDM):

  1. The measured density of gravitating mass appears to considerably exceed that in normal matter as constrained by Big Bang Nucleosynthesis (BBN): Ωm = 6 Ωb (so Ωnon-baryonic = 5 Ωbaryonic).
  2. Gravity is too weak to grow the presently observed structures (e.g., galaxies, clusters, filaments) from the smooth initial condition observed in the cosmic microwave background (CMB) unless something speeds up the process. Extra mass will do this, but it must not interact with the photons of the CMB the way ordinary matter does.

By themselves, either of these arguments is strong. Together, they were compelling enough to launch the CDM paradigm. (Like most scientists of my generation, I certainly bought into it.)

From the astronomical perspective, all that is required is that the dark matter be non-baryonic and dynamically cold. Non-baryonic so that it does not participate in Big Bang Nucleosynthesis or interact with photons (a good way to remain invisible!), and dynamically cold (i.e., slow moving, not relativistic) so that it can clump and form gravitationally bound structures. Many things might satisfy these astronomical requirements. For example, supermassive black holes fit the bill, though they would somehow have to form in the first second of creation in order not to impact BBN.

The WIMP Miracle
From a particle physics perspective, the early universe was a high energy place where energy and mass could switch from one form to the other freely as enshrined in Einstein’s E = mc². Pairs of particles and their antiparticles could come and go. However, as the universe expands, it cools. As it cools, it loses the energy necessary to create particle pairs. When this happens for a particular particle depends on the mass of the particle – the more mass, the more energy is required, and the earlier those particle-antiparticle pairs “freeze out.” After freeze-out, the remaining particle-antiparticle pairs can mutually annihilate, leaving only energy. To avoid this fate, there must either be some asymmetry (apparently there was about one extra proton for every billion proton-antiproton pairs – an asymmetry on which our existence depends even if we don’t yet understand it) or the “cross section” – the probability for interacting – must be so low that particles and their antiparticles go their separate ways without meeting often enough to annihilate completely. This process leaves some relic density that depends on the properties of the particles.

If one asks what relic density is necessary to make up the cosmic dark matter, the cross section that comes out is about that of the weak nuclear force. A particle that interacts through the weak force but not the electromagnetic force will have about the right relic density. Moreover, it won’t interfere with BBN or the CMB. The WIMPs hypothesized by supersymmetry fit the bill for cosmologists’ CDM. This coincidence of scales – the relic density and the weak force interaction scale – is sometimes referred to as the “WIMP miracle” and was part of the motivation to adopt the WIMP as the leading candidate for cosmological dark matter.
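The scaling at the heart of the WIMP miracle can be captured in one line. The normalization 3×10⁻²⁷ cm³ s⁻¹ below is the standard back-of-the-envelope value from cosmology textbooks, not a number from this post, so treat this as a sketch of the scaling rather than a real relic abundance calculation:

```python
# Toy illustration of the "WIMP miracle": the thermal relic abundance is,
# to a rough approximation, inversely proportional to the velocity-averaged
# annihilation cross section.  The 3e-27 normalization is the standard
# textbook estimate (an assumption here, not a derived quantity).

def relic_density(sigma_v):
    """Approximate Omega * h^2 for a thermal relic, given the
    velocity-averaged annihilation cross section sigma_v in cm^3/s."""
    return 3e-27 / sigma_v

# A roughly weak-scale cross section of ~3e-26 cm^3/s ...
omega_wimp = relic_density(3e-26)
print(f"Omega h^2 for a weak-scale relic: {omega_wimp:.2f}")  # 0.10
# ... lands remarkably close to the measured dark matter density,
# Omega_cdm h^2 ~ 0.12.  That coincidence is the "miracle."
```

The point is only that plugging in a weak-interaction-sized cross section gives approximately the observed cosmic density; nothing in this estimate picks out the weak scale in advance.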

WIMP detection experiments
WIMPs as CDM is a well-posed scientific hypothesis subject to experimental verification. From astronomical measurements, we know how much we need in the solar neighborhood – about 0.3 GeV c⁻² cm⁻³. (That means there are a few hundred WIMPs passing through your body at any given moment, depending on the exact mass of the particle.) From particle physics, we know the weak interaction cross section, so we can calculate the probability of a WIMP interacting with normal matter. In this respect, WIMPs are very much like neutrinos – they can pass right through solid matter because they do not experience the electromagnetic interactions that make ordinary matter solid. But once in a very rare while, they may come close enough to an atomic nucleus to interact with it via the weak force. This is the signature that can be sought experimentally.
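That “few hundred WIMPs” figure is easy to check. The local density is the value quoted above; the 100 GeV mass and ~70 liter body volume are illustrative assumptions:

```python
# Back-of-the-envelope check of the "few hundred WIMPs in your body" claim.
local_density = 0.3   # GeV/cm^3 -- local dark matter density (from the text)
wimp_mass = 100.0     # GeV -- a traditional heavy-WIMP mass (assumed)
body_volume = 7e4     # cm^3 -- roughly 70 liters, a human body (assumed)

number_density = local_density / wimp_mass     # WIMPs per cm^3
wimps_in_body = number_density * body_volume   # WIMPs inside you right now

print(f"{number_density:.1e} per cm^3 -> ~{wimps_in_body:.0f} in a body")
```

A lighter WIMP would mean proportionally more particles at the same mass density, which is why the count depends on the (unknown) particle mass.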

There is a Nobel Prize waiting for whoever discovers the dark matter, so there are now many experiments seeking to do so. Generically, these use very pure samples of some element (like Germanium or Argon or Xenon) to act as targets for the WIMPs making up the dark matter component of our Milky Way Galaxy. The sensitivity required is phenomenal, and many mundane background events (cosmic rays, natural radioactivity, clumsy colleagues dropping beer cans) that might mimic WIMPs must be screened out. For this reason, there is a strong desire to perform these experiments in deep mine shafts where the apparatus can be shielded from the cosmic rays that bombard our planet and other practical nuisances.

The technology development involved in the hunt for WIMPs is amazing. The experimentalists have accomplished phenomenal things in the hunt for dark matter. That they have so far failed to detect it should give pause to any thinking person acquainted with the history, techniques, and successes of particle physics. This failure is both a surprise and a disappointment to those who understand modern cosmology. It should not come as a surprise to anyone familiar with the dynamical evidence for – and against – dark matter.

Searches for WIMPs are proceeding apace. The sensitivity of these experiments is increasing at an accelerating rate. They already provide important constraints – see the figure:


Searching for WIMPs

This 2008 graph shows the status of searches for Weakly Interacting Massive Particles (WIMPs). The abscissa is the mass of the putative WIMP particle. For reference, the proton has a mass of about one in these units. The ordinate is a measure of the probability for WIMPs to interact with normal matter. Not much! The shaded regions represent theoretical expectations for WIMPs. The light red region is the original (Ellis et al.) forecast. The blue and green regions are more recent predictions (Trotta et al. 2008). The lines are representative experimental limits. The region above each line is excluded – if WIMPs had existed in that range of mass and interaction probability, they would have been detected already. The top line (from CDMS in 2004) excluded much of the original prediction. More recent work (colored lines, circa 2008) now approaches the currently expected region.

April 2011 update: XENON100 sees nada. Note how the “expected” region continues to retreat downward in cross section as experiments exclude the previous sweet spots in this parameter. This is the express elevator to hell (see below).

September 2011 update: CRESST-II claims a detection. Unfortunately, their positive result violates limits imposed by several other experiments, including XENON100. Somebody is doing their false event rejection wrong.

July 2012 update: XENON100 still seeing squat. Note that the “head” of the most probable (blue) region in the figure above is now excluded.
It is interesting to compare the time sequence of their results: first | run 8 | run 10.

November 2013 update: LUX sees nothing and excludes the various claims for detections of light dark matter (see inset). This exclusion of light dark matter appears to be highly significant, as the very recent prediction was for about a dozen detections per month, which should have added up to an easy detection rather than the observed absence of events in excess of the expected background. Note also that the new exclusion boundary cuts deeply into the region predicted for traditional heavy (~ 100 GeV) WIMPs by Buchmueller et al. as depicted by Xenon100. The Buchmueller et al. “prediction” is already a downscaling from the bulk of probability predicted by Trotta et al. (2008 – the blue region in the figure above). This perpetual adjustment of the expectation for the WIMP cross-section is precisely the dodgy moving of the goal posts that prompted me to first write this web page years ago.

May 2014: “Crunch time” for dark matter comes and goes.

July 2016 update: PandaX sees nada.

August 2016 update: LUX continues to see nada. The minimum of their exclusion line now reaches the bottom axis of the 2009 plot (above the line, with the now-excluded blue blob). The “predicted” WIMP (gray area in the plot within this section) appears to have migrated to higher mass in addition to the downward migration of the cross-section. I guess this is the sideways turbolift to evil-Kirk universe hell.


Indeed, the experiments have perhaps been too successful. The original region of cross section-mass parameter space in which WIMPs were expected to reside was excluded years ago. Not easily dissuaded, theorists waved their hands, invoked the Majorana see-saw mechanism, and reduced the interaction probability to safely non-detectable levels. This is the vertical separation of the reddish and blue-green regions in the figure.

To quote a particle physicist, “The most appealing possibility – a weak scale dark matter particle interacting with matter via Z-boson exchange – leads to the cross section of order 10⁻³⁹ cm² which was excluded back in the ’80s by the first round of dark matter experiments. There exists another natural possibility for WIMP dark matter: a particle interacting via Higgs boson exchange. This would lead to the cross section in the 10⁻⁴²–10⁻⁴⁶ cm² ballpark (depending on the Higgs mass and on the coupling of dark matter to the Higgs).”

From this 2011 Resonaances post

Though set back and discouraged by this theoretical sleight of hand (the WIMP “miracle” is now more of a vague coincidence, like seeing an old flame in Grand Central Station but failing to say anything because (a) s/he is way over on another platform and (b) on reflection, you’re not really sure it was him or her after all), experimentalists have been gaining ground on the newly predicted region. If all goes as planned, most of the plausible parameter space will have been explored in a few more years. (I have heard it asserted that “we’ll know what the dark matter is in 5 years” every 5 years for the past two decades. Make that three decades now.)

The express elevator to hell

We’re on an express elevator to hell – going down!

There is a slight problem with the current predictions for WIMPs. While there is a clear focus point where WIMPs most probably reside (the blue blob in the figure), there is also a long tail to low interaction cross section. If we fail to detect WIMPs when experimental sensitivity encompasses the blob, the presumption will be that we’re just unlucky and WIMPs happen to live in the low-probability tail that is not yet excluded. (Low probability regions tend to seem more reasonable as higher probability regions are rejected and we forget about them.) This is the express elevator to hell. No matter how much time, money, and effort we invest in further experimentation, the answer will always be right around the corner. This process can go on forever.

Is dark matter a falsifiable hypothesis?

The existence of dark matter is an inference, not an experimental fact. Individual candidates for the dark matter can be tested and falsified. For example, it was once reasonable to imagine that innumerable brown dwarfs could be the dark matter. That is no longer true – were there that many brown dwarfs out there, we would have seen them directly by now. The brown dwarf hypothesis has been falsified. WIMPs are falsifiable dark matter candidates – provided we don’t continually revise their interaction probability. If we keep doing this, the hypothesis ceases to have predictive power and is no longer subject to falsification.

The concept of dark matter is not falsifiable. If we exclude one candidate, we are free to make up another one. After WIMPs, the next obvious candidate is axions. Should those be falsified, we invent something else. (Particle physicists love to do this. The literature is littered with half-baked dark matter candidates invented for dubious reasons, often to explain phenomena with obvious astrophysical causes. The ludicrous uproar over the ATIC and PAMELA cosmic ray experiments is a good example.) (Circa 2008, there was a lot of enthusiasm that certain signals detected by cosmic ray experiments were caused by dark matter. These have gone away.)


September 2011 update: Fermi confirms the PAMELA positron excess. Too well for it to be dark matter: there is no upper threshold energy corresponding to the WIMP mass. Apparently these cosmic rays are astrophysical in origin, which comes as no surprise to high energy astrophysicists.

April 2013 update: AMS makes claims to detect dark matter that are so laughably absurd they do not warrant commentary.

September 2016 update: There is no update. People seem to have given up on claiming that there is any sign of dark matter in cosmic rays. There have been claims of dark matter causing signatures in gamma ray data and separately in X-ray data. These never looked credible, and went away on a time scale so short that an entire session of a 2014 conference, which had been planned to discuss a gamma ray signal at 126 GeV as dark matter, was obsolete before it convened. I asked the organizers a few months in advance if that was even going to be a thing by the time we met. It wasn’t: every speaker scheduled for that session gave some completely unrelated talk.

November 2019 update: Xenon1T sees no sign of WIMPs. (There is some hint of an excess of electron recoils. These are completely the wrong energy scale to be the signal that this experiment was designed to detect.)

WIMP prediction and limits. The shaded region marks the prediction of Trotta et al. (2008) for the WIMP mass and interaction cross-section. The lighter shade depicts the 95% confidence limit, the dark region the 68% c.l., and the cross the best fit. The heavy line shows the 90% c.l. exclusion limit from the Xenon1T experiment. Everything above the line is excluded, ruling out virtually all the parameter space in which WIMPs had been predicted to reside.

2020 comment: I was present at a meeting in 2009 when the predictions of Trotta et al. (above, in grey, and higher up, in blue and green) were new and fresh. I was, at that point, already feeling like we’d been led down this garden path one too many times. So I explicitly asked about the long tail to low cross-section. I was assured that the probability in that tail was < 2%; we would surely detect the WIMP at somewhere around the favored value (the X in the gray figure). We did not. Essentially all of that predicted parameter space has been excluded, with only a tiny fraction of the 2% tail extending below current limits. Worse, the top border of the Trotta et al. prediction was based on the knowledge that the parameter space to higher cross section – where the WIMP was originally predicted to reside – had already been experimentally excluded. So the grey region understates the range of parameter space over which WIMPs were reasonably expected to exist. I’m sure there are people who would like to pretend that the right “prediction” for the WIMP is at still lower cross section. That would be an example of how those who are ignorant (or in denial) of history are doomed to repeat it.

I predict that none of the big, expensive WIMP experiments will ever find what they’re looking for. It is past time to admit that the lack of detections is because WIMPs don’t exist. I could be proven wrong by the simple expedient of obtaining a credible WIMP detection. I’m sure there are many bright, ambitious scientists who will take up that challenge. To them I say: after you’ve spent your career at the bottom of a mine shaft with no result to show for it, look up at the sky and remember that I tried to warn you.


A Significant Theoretical Advance


The missing mass problem has been with us many decades now. Going on a century if you start counting from the work of Oort and Zwicky in the 1930s. Not quite half a century if we date it from the 1970s when most of the relevant scientific community started to take it seriously. Either way, that’s a very long time for a major problem to go unsolved in physics. The quantum revolution that overturned our classical view of physics was lightning fast in comparison – see the discussion of Bohr’s theory in the foundation of quantum mechanics in David Merritt’s new book.

To this day, despite tremendous efforts, we have yet to obtain a confirmed laboratory detection of a viable dark matter particle – or even a hint of persuasive evidence for the physics beyond the Standard Model of Particle Physics (e.g., supersymmetry) that would be required to enable the existence of such particles. We cannot credibly claim (as many of my colleagues insist they can) to know that such invisible mass exists. All we really know is that there is a discrepancy between what we see and what we get: the universe and the galaxies within it cannot be explained by General Relativity and the known stable of Standard Model particles.

If we assume that General Relativity is both correct and sufficient to explain the universe, which seems an excellent assumption, then we are indeed obliged to invoke non-baryonic dark matter. The amount of astronomical evidence that points in this direction is overwhelming. That is how we got to where we are today: once we make the obvious, eminently well-motivated assumption, then we are forced along a path in which we become convinced of the reality of the dark matter, not merely as a hypothetical convenience to cosmological calculations, but as an essential part of physical reality.

I think that the assumption that General Relativity is correct is indeed an excellent one. It has repeatedly passed many experimental and observational tests too numerous to elaborate here. However, I have come to doubt the assumption that it suffices to explain the universe. The only data that test it on scales where the missing mass problem arises is the data from which we infer the existence of dark matter. Which we do by assuming that General Relativity holds. The opportunity for circular reasoning is apparent – and frequently indulged.

It should not come as a shock that General Relativity might not be completely sufficient as a theory in all circumstances. This is exactly the motivation for and the working presumption of quantum theories of gravity. That nothing to do with cosmology will be affected along the road to quantum gravity is just another assumption.

I expect that some of my colleagues will struggle to wrap their heads around what I just wrote. I sure did. It was the hardest thing I ever did in science to accept that I might be wrong to be so sure it had to be dark matter – because I was sure it was. As sure of it as any of the folks who remain sure of it now. So imagine my shock when we obtained data that made no sense in terms of dark matter, but had been predicted in advance by a completely different theory, MOND.

When comparing dark matter and MOND, one must weigh all evidence in the balance. Much of the evidence is gratuitously ambiguous, so the conclusion to which one comes depends on how one weighs the more definitive lines of evidence. Some of this points very clearly to MOND, while other evidence prefers non-baryonic dark matter. One of the most important lines of evidence in favor of dark matter is the acoustic power spectrum of the cosmic microwave background (CMB) – the pattern of minute temperature fluctuations in the relic radiation field imprinted on the sky a few hundred thousand years after the Big Bang.

The equations that govern the acoustic power spectrum require General Relativity, but thankfully the small amplitude of the temperature variations permits them to be solved in the limit of linear perturbation theory. So posed, they can be written as a damped and driven oscillator. The power spectrum shows features corresponding to standing waves at the epoch of recombination when the universe transitioned rather abruptly from an opaque plasma to a transparent neutral gas. The edge of a cloud provides an analog: light inside the cloud scatters off the water molecules and doesn’t get very far: the cloud is opaque. Any light that makes it to the edge of the cloud meets no further resistance, and is free to travel to our eyes – which is how we perceive the edge of the cloud. The CMB is the expansion-redshifted edge of the plasma cloud of the early universe.

An easy way to think about a damped and a driven oscillator is a kid being pushed on a swing. The parent pushing the child is a driver of the oscillation. Any resistance – like the child dragging his feet – damps the oscillation. Normal matter (baryons) damps the oscillations – it acts as a net drag force on the photon fluid whose oscillations we observe. If there is nothing going on but General Relativity plus normal baryons, we should see a purely damped pattern of oscillations in which each peak is smaller than the one before it, as seen in the solid line here:

CMB_Pl_CLonly
The CMB acoustic power spectrum predicted by General Relativity with no cold dark matter (line) and as observed by the Planck satellite (data points).

As one can see, the case of no Cold Dark Matter (CDM) does well to explain the amplitudes of the first two peaks. Indeed, it was the only hypothesis to successfully predict this aspect of the data in advance of its observation. The small amplitude of the second peak came as a great surprise from the perspective of LCDM. However, without CDM, there is only baryonic damping. Each peak should have a progressively lower amplitude. This is not observed. Instead, the third peak is almost the same amplitude as the second, and clearly higher than expected in the pure damping scenario of no-CDM.

CDM provides a net driving force in the oscillation equations. It acts like the parent pushing the kid. Even though the kid drags his feet, the parent keeps pushing, and the amplitude of the oscillation is maintained. For the third peak at any rate. The baryons are an intransigent child and keep dragging their feet; eventually they win and the power spectrum damps away on progressively finer angular scales (large 𝓁 in the plot).
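The swing analogy can be made concrete with a toy oscillator. This is emphatically not the actual cosmological perturbation calculation, just a minimal numerical sketch of a damped oscillator with and without a driving term; all parameter values are arbitrary:

```python
import math

def simulate(drive_amplitude, steps=100000, dt=0.001,
             omega=2 * math.pi, gamma=0.5):
    """Integrate x'' + 2*gamma*x' + omega^2*x = F*cos(omega*t) with a
    semi-implicit Euler scheme.  Return the peak |x| seen in the first
    and last quarters of the run."""
    x, v = 1.0, 0.0
    early_peak = late_peak = 0.0
    for i in range(steps):
        t = i * dt
        a = -2 * gamma * v - omega**2 * x + drive_amplitude * math.cos(omega * t)
        v += a * dt
        x += v * dt
        if i < steps // 4:
            early_peak = max(early_peak, abs(x))
        elif i > 3 * steps // 4:
            late_peak = max(late_peak, abs(x))
    return early_peak, late_peak

# Baryons only: pure damping, and the oscillation dies away.
e1, l1 = simulate(drive_amplitude=0.0)
# Add a driver (the parent pushing the swing): the amplitude is maintained.
e2, l2 = simulate(drive_amplitude=5.0)
print(f"undriven: {e1:.3f} -> {l1:.2e};  driven: {e2:.3f} -> {l2:.3f}")
```

The undriven case decays to essentially nothing, while the driven case settles at a sustained amplitude: the qualitative difference between successively damped peaks and a maintained third peak.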

As I wrote in this review, the excess amplitude of the third peak over the no-CDM prediction is the best evidence to my mind in favor of the existence of non-baryonic CDM. Indeed, this observation is routinely cited by many cosmologists to absolutely require dark matter. It is argued that the observed power spectrum is impossible without it. The corollary is that any problem the dark matter picture encounters is a mere puzzle. It cannot be an anomaly because the CMB tells us that CDM has to exist.

Impossible is a high standard. I hope the reader can see the flaw in this line of reasoning. It is the same as above. In order to compute the oscillation power spectrum, we have assumed General Relativity. While not replacing it, the persistent predictive successes of a theory like MOND imply the existence of a more general theory. We do not know that such a theory cannot explain the CMB until we develop said theory and work out its predictions.

That said, it is a tall order. One needs a theory that provides a significant driving term without a large amount of excess invisible mass. Something has to push the swing in a universe full of stuff that only drags its feet. That does seem nigh on impossible. Or so I thought until I heard a talk by Pedro Ferreira where he showed how the scalar field in TeVeS – the relativistic MONDian theory proposed by Bekenstein – might play the same role as CDM. However, he and his collaborators soon showed that the desired effect was indeed impossible, at least in TeVeS: one could not simultaneously fit the third peak and the data preceding the first. This was nevertheless an important theoretical development, as it showed how it was possible, at least in principle, to affect the peak ratios without massive amounts of non-baryonic CDM.

At this juncture, there are two options. One is to seek a theory that might work, and develop it to the point where it can be tested. This is a lot of hard work that is bound to lead one down many blind alleys without promise of ultimate success. The much easier option is to assume that it cannot be done. This is the option adopted by most cosmologists, who have spent the last 15 years arguing that the CMB power spectrum requires the existence of CDM. Some even seem to consider it to be a detection thereof, in which case we might wonder why we bother with all those expensive underground experiments to detect the stuff.

Rather fewer people have invested in the approach that requires hard work. There are a few brave souls who have tried it; these include Constantinos Skordis and Tom Złosnik. Very recently, they have shown that a version of a relativistic MOND theory (which they call RelMOND) does fit the CMB power spectrum. Here is the plot from their paper:

CMB_RelMOND_2020

Note that the black line in their plot is the fit of the LCDM model to the Planck power spectrum data. Their theory does the same thing, so it necessarily fits the data as well. Indeed, a good fit appears to follow for a range of parameters. This is important, because it implies that little or no fine-tuning is needed: this is just what happens. That is arguably better than the case for LCDM, in which the fit is very fine-tuned. Indeed, that was a large part of the point of making the measurement, as it requires a very specific set of parameters in order to work. It also leads to tensions with independent measurements of the Hubble constant, the baryon density, and the amplitude of the matter power spectrum at low redshift.

As with any good science result, this one raises a host of questions. It will take time to explore these. But this in itself is a momentous result. Irrespective of whether RelMOND is the right theory or, like TeVeS, just a step on a longer path, it shows that the impossible is in fact possible. The argument that I have heard repeated by cosmologists ad nauseam like a rosary prayer, that dark matter is the only conceivable way to explain the CMB power spectrum, is simply WRONG.

A Philosophical Approach to MOND

A Philosophical Approach to MOND is a new book by David Merritt. This is a major development in both the science of cosmology and astrophysics, on the one hand, and the philosophy and history of science on the other. It should be required reading for anyone interested in any of these topics.

For many years, David Merritt was a professor of astrophysics who specialized in gravitational dynamics, leading a number of breakthroughs in the effects of supermassive black holes in galaxies on the orbits of stars around them. He has since transitioned to the philosophy of science. This may not sound like a great leap, but it is: these are different scholarly fields, each with their own traditions, culture, and required background education. Changing fields like this is a bit like switching boats mid-stream: even a strong swimmer may flounder in the attempt given the many boulders academic disciplines traditionally place in the stream of knowledge to mark their territory. Merritt has managed the feat with remarkable grace, devouring the background reading and coming up to speed in a different discipline to the point of a lucid fluency.

For the most part, practicing scientists have little interaction with philosophers and historians of science. Worse, we tend to have little patience for them. The baseline presumption of many physical scientists is that we know what we’re doing; there is nothing the philosophers can teach us. In the daily practice of what Kuhn called normal science, this is close to true. When instead we are faced with potential paradigm shifts, the philosophy of science is critical, and the absence of training in it on the part of many scientists becomes glaring.

In my experience, most scientists seem to have heard of Popper and Kuhn. If that. Physical scientists will almost always pay lip service to Popper’s ideal of falsifiability, and that’s pretty much the extent of it. Living up to that ideal is another matter. If an idea that is near and dear to their hearts and careers is under threat, the knee-jerk response is more commonly “let’s not get carried away!”

There is more to the philosophy of science than that. The philosophers of science have invested lots of effort in considering both how science works in practice (e.g., Kuhn) and how it should work (Popper, Lakatos, …). The practice and the ideal of science are not always the same thing.

The debate about dark matter and MOND hinges on the philosophy of science in a profound way. I do not think it is possible to make real progress out of our current intellectual morass without a deep examination of what science is and what it should be.

Merritt takes us through the methodology of scientific research programs, spelling out what we’ve learned from past experience (the history of science) and from careful consideration of how science should work (its philosophical basis). For example, all scientists agree that it is important for a scientific theory to have predictive power. But we are disturbingly fuzzy on what that means. I frequently hear my colleagues say things like “my theory predicts that” in reference to some observation, when in fact no such prediction was made in advance. What they usually mean is that it fits well with the theory. This is sometimes true – they could have predicted the observation in advance if they had considered that particular case. But sometimes it is retroactive fitting more than prediction – consistency, perhaps, but it could have gone a number of other ways equally well. Worse, it is sometimes a post facto assertion that is simply false: not only was the prediction not made in advance, but the observation was genuinely surprising at the time it was made. Only in retrospect is it “correctly” “predicted.”

The philosophers have considered these situations. One thing I appreciate is Merritt’s review of the various takes philosophers have on what counts as a prediction. I wish I had known these things when I wrote the recent review in which I took a very restrictive definition to avoid the foible above. The philosophers provide better definitions, of which more than one can be usefully applicable. I’m not going to go through them here: you should read Merritt’s book, and those of the philosophers he cites.

From this philosophical basis, Merritt makes a systematic, dare I say, scientific, analysis of the basic tenets of MOND and MONDian theories, and how they fare with regard to their predictions and observational tests. Along the way, he also considers the same material in the light of the dark matter paradigm. Of comparable import to confirmed predictions are surprising observations: if a new theory predicts that the sun will rise in the morning, that is neither new nor surprising. If instead a theory expects one thing but another is observed, that is surprising, and it counts against that theory even if it can be adjusted to accommodate the new fact. I have seen this happen over and over with dark matter: surprising observations (e.g., the absence of cusps in dark matter halos, the small numbers of dwarf galaxies, downsizing in which big galaxies appear to form earliest) are at first ignored, doubted, debated, then partially explained with some mental gymnastics until it is Known and of course, we knew it all along. Merritt explicitly points out examples of this creeping determinism, in which scientists come to believe they predicted something they merely rationalized post-facto (hence the preeminence of genuinely a priori predictions that can’t be fudged).

Merritt’s book is also replete with examples of scientists failing to take alternatives seriously. This is natural: we have invested an enormous amount of time developing physical science to the point we have now reached; there is an enormous amount of background material that cannot simply be ignored or discarded. All too often, we are confronted with crackpot ideas that do exactly this. This makes us reluctant to consider ideas that sound crazy on first blush, and most of us will rightly display considerable irritation when asked to do so. For reasons both valid and not, MOND skirts this boundary. I certainly didn’t take it seriously myself, nor really considered it at all, until its predictions came true in my own data. It was so far below my radar that at first I did not even recognize that this is what had happened. But I did know I was surprised; what I was seeing did not make sense in terms of dark matter. So, from this perspective, I can see why other scientists are quick to dismiss it. I did so myself, initially. I was wrong to do so, and so are they.

A common failure mode is to ignore MOND entirely: despite dozens of confirmed predictions, it simply remains off the radar for many scientists. They seem never to have given it a chance, so they simply don’t pay attention when it gets something right. This is pure ignorance, which is not a strong foundation from which to render a scientific judgement.

Another common reaction is to acknowledge then dismiss. Merritt provides many examples where eminent scientists do exactly this with a construction like: “MOND correctly predicted X but…” where X is a single item, as if this is the only thing that [they are aware that] it does. Put this way, it is easy to dismiss – a common refrain I hear is “MOND fits rotation curves but nothing else.” This is a long-debunked falsehood that is asserted and repeated until it achieves the status of common knowledge within the echo chamber of scientists who refuse to think outside the dark matter box.

This is where the philosophy of science is crucial to finding our way forward. Merritt’s book illuminates how this is done. If you are reading these words, you owe it to yourself to read his book.

The halo mass function

I haven’t written much here of late. This is mostly because I have been busy, but also because I have been actively refraining from venting about some of the sillier things being said in the scientific literature. I went into science to get away from the human proclivity for what is nowadays called “fake news,” but we scientists are human too, and are not immune from the same self-deception one sees so frequently exercised in other venues.

So let’s talk about something positive. Current grad student Pengfei Li recently published a paper on the halo mass function. What is that and why should we care?

One of the fundamental predictions of the current cosmological paradigm, ΛCDM, is that dark matter clumps into halos. Cosmological parameters are known with sufficient precision that we have a very good idea of how many of these halos there ought to be. Their number per unit volume as a function of mass (so many big halos, so many more small halos) is called the halo mass function.

An important test of the paradigm is thus to measure the halo mass function. Does the predicted number match the observed number? This is hard to do, since dark matter halos are invisible! So how do we go about it?

Galaxies are thought to form within dark matter halos. Indeed, that’s kinda the whole point of the ΛCDM galaxy formation paradigm. So by counting galaxies, we should be able to count dark matter halos. Counting galaxies was an obvious task long before we thought there was dark matter, so this should be straightforward: all one needs is the measured galaxy luminosity function – the number density of galaxies as a function of how bright they are, or equivalently, how many stars they are made of (their stellar mass). Unfortunately, this goes tragically wrong.

Galaxy stellar mass function and the predicted halo mass function
Fig. 5 from the review by Bullock & Boylan-Kolchin. The number density of objects is shown as a function of their mass. Colored points are galaxies. The solid line is the predicted number of dark matter halos. The dotted line is what one would expect for galaxies if all the normal matter associated with each dark matter halo turned into stars.

This figure shows a comparison of the observed stellar mass function of galaxies and the predicted halo mass function. It is from a recent review, but it illustrates a problem that goes back as long as I can remember. We extragalactic astronomers spent all of the ’90s obsessing over this problem. [I briefly thought that I had solved this problem, but I was wrong.] The observed luminosity function is nearly flat while the predicted halo mass function is steep. Consequently, there should be lots and lots of faint galaxies for every bright one, but instead there are relatively few. This discrepancy becomes progressively more severe to lower masses, with the predicted number of halos being off by a factor of many thousands for the faintest galaxies. The problem is most severe in the Local Group, where the faintest dwarf galaxies are known. Locally it is called the missing satellite problem, but this is just a special case of a more general problem that pervades the entire universe.

Indeed, the small number of low mass objects is just one part of the problem. There are also too few galaxies at large masses. Even where the observed and predicted numbers come closest, around the scale of the Milky Way, they still miss by a large factor (this being a log-log plot, even small offsets are substantial). If we had assigned “explain the observed galaxy luminosity function” as a homework problem and the students had returned as an answer a line that had the wrong shape at both ends and at no point intersected the data, we would flunk them. This is, in effect, what theorists have been doing for the past thirty years. Rather than entertain the obvious interpretation that the theory is wrong, they offer more elaborate interpretations.

Faced with the choice between changing one’s mind and proving that there is no need to do so, almost everybody gets busy on the proof.

J. K. Galbraith

Theorists persist because this is what CDM predicts, with or without Λ, and we need cold dark matter for independent reasons. If we are unwilling to contemplate that ΛCDM might be wrong, then we are obliged to pound the square peg into the round hole, and bend the halo mass function into the observed luminosity function. This transformation is believed to take place as a result of a variety of complex feedback effects, all of which are real and few of which are likely to have the physical effects that are required to solve this problem. That’s way beyond the scope of this post; all we need to know here is that this is the “physics” behind the transformation that leads to what is currently called Abundance Matching.

Abundance matching boils down to drawing horizontal lines in the above figure, thus matching galaxies with dark matter halos of equal number density (abundance). So, just reading off the graph, a galaxy of stellar mass M* = 10⁸ M☉ resides in a dark matter halo of 10¹¹ M☉, one like the Milky Way with M* = 5 x 10¹⁰ M☉ resides in a 10¹² M☉ halo, and a giant galaxy with M* = 10¹² M☉ is the “central” galaxy of a cluster of galaxies with a halo mass of several 10¹⁴ M☉. And so on. In effect, we abandon the obvious and long-held assumption that the mass in stars should be simply proportional to that in dark matter, and replace it with a rolling fudge factor that maps what we see to what we predict. The rolling fudge factor that follows from abundance matching is called the stellar mass–halo mass relation. Many of the discussions of feedback effects in the literature amount to a post hoc justification for this multiplication of forms of feedback.
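
The logic of abundance matching is simple enough to sketch. This toy version rank-orders individual objects rather than matching the cumulative number densities of the two mass functions (which is what one actually does), but the idea is the same; the input masses are just the example values quoted above, not real data.

```python
import numpy as np

# Toy abundance matching: sort galaxies and halos by mass and pair them
# at equal rank, i.e. equal cumulative number density (assuming both
# samples represent the same survey volume). Illustrative only.

def abundance_match(stellar_masses, halo_masses):
    gals = np.sort(stellar_masses)[::-1]    # most massive galaxy first
    halos = np.sort(halo_masses)[::-1]      # most massive halo first
    return list(zip(gals, halos))

# The example masses quoted in the text (in solar masses):
pairs = abundance_match([1e8, 5e10, 1e12], [1e11, 1e12, 5e14])
for mstar, mhalo in pairs:
    print(f"M* = {mstar:.0e} Msun -> Mhalo = {mhalo:.0e} Msun")
```

The most massive galaxy lands in the most massive halo, and so on down the line; the ratio of the paired masses is the rolling fudge factor.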

This is a lengthy but insufficient introduction to a complicated subject. We wanted to get away from this, and test the halo mass function more directly. We do so by use of the velocity function rather than the stellar mass function.

The velocity function is the number density of galaxies as a function of how fast they rotate. It is less widely used than the luminosity function, because there is less data: one needs to measure the rotation speed, which is harder to obtain than the luminosity. Nevertheless, it has been done, as with this measurement from the HIPASS survey:

Galaxy velocity function
The number density of galaxies as a function of their rotation speed (Zwaan et al. 2010). The bottom panel shows the raw number of galaxies observed; the top panel shows the velocity function after correcting for the volume over which galaxies can be detected. Faint, slow rotators cannot be seen as far away as bright, fast rotators, so the latter are always over-represented in galaxy catalogs.

The idea here is that the flat rotation speed is the hallmark of a dark matter halo, providing a dynamical constraint on its mass. This should make for a cleaner measurement of the halo mass function. This turns out to be true, but it isn’t as clean as we’d like.

Those of you who are paying attention will note that the velocity function Martin Zwaan measured has the same basic morphology as the stellar mass function: approximately flat at low masses, with a steep cut off at high masses. This looks no more like the halo mass function than the galaxy luminosity function did. So how does this help?

To measure the velocity function, one has to use some readily obtained measure of the rotation speed like the line-width of the 21cm line. This, in itself, is not a very good measurement of the halo mass. So what Pengfei did was to fit dark matter halo models to galaxies of the SPARC sample for which we have good rotation curves. Thanks to the work of Federico Lelli, we also have an empirical relation between line-width and the flat rotation velocity. Together, these provide a connection between the line-width and halo mass:

Halo mass-line width relation
The relation Pengfei found between halo mass (M200) and line-width (W) for the NFW (ΛCDM standard) halo model fit to rotation curves from the SPARC galaxy sample.

Once we have the mass-line width relation, we can assign a halo mass to every galaxy in the HIPASS survey and recompute the distribution function. But now we have not the velocity function, but the halo mass function. We’ve skipped the conversion of light to stellar mass to total mass and used the dynamics to skip straight to the halo mass function:

Empirical halo mass function
The halo mass function. The points are the data; these are well fit by a Schechter function (black line; this is commonly used for the galaxy luminosity function). The red line is the prediction of ΛCDM for dark matter halos.
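
The procedure that produces this figure can be caricatured in a few lines: assign each galaxy a halo mass from its line-width via the fitted relation, then build a weighted histogram. Everything numerical below – the power-law coefficients, the mock line-widths, the flat weights – is a made-up placeholder for illustration, not the actual SPARC-calibrated relation or the HIPASS data.

```python
import numpy as np

# Sketch of the final step: convert each line-width W to a halo mass
# through a power-law relation, then histogram the resulting masses
# (weighted by the survey's 1/Vmax volume corrections) to estimate
# the halo mass function.

rng = np.random.default_rng(0)
W = rng.uniform(50, 500, size=1000)      # mock line-widths in km/s
weights = np.ones_like(W)                # placeholder 1/Vmax weights

log_M200 = 3.0 * np.log10(W) + 4.0       # hypothetical M200-W power law

bins = np.arange(8, 14, 0.5)             # bins in log10(M200/Msun)
n, edges = np.histogram(log_M200, bins=bins, weights=weights)
mass_function = n / 0.5                  # number per dex (per survey volume)
```

In the real analysis the weights come from the survey selection function, which is exactly the correction shown in the two panels of the velocity function figure above.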

The observed mass function agrees with the predicted one! Test successful! Well, mostly. Let’s think through the various aspects here.

First, the normalization is about right. It does not have the offset seen in the first figure. As it should not – we’ve gone straight to the halo mass in this exercise, and not used the luminosity as an intermediary proxy. So that is a genuine success. It didn’t have to work out this well, and would not do so in a very different cosmology (like SCDM).

Second, it breaks down at high mass. The data shows the usual Schechter cut-off at high mass, while the predicted number of dark matter halos continues as an unabated power law. This might be OK if high mass dark matter halos contain little neutral hydrogen. If this is the case, they will be invisible to HIPASS, the 21cm survey on which this is based. One expects this, to a certain extent: the most massive galaxies tend to be gas-poor ellipticals. That helps, but only by shifting the turn-down to slightly higher mass. It is still there, so the discrepancy is not entirely cured. At some point, we’re talking about large dark matter halos that are groups or even rich clusters of galaxies, not individual galaxies. Still, those have HI in them, so it is not like they’re invisible. Worse, examining detailed simulations that include feedback effects, there do seem to be more predicted high-mass halos that should have been detected than actually are. This is a potential missing gas-rich galaxy problem at the high mass end where galaxies are easy to detect. However, the simulations currently available to us do not provide the information we need to clearly make this determination. They don’t look right, so far as we can tell, but it isn’t clear enough to make a definitive statement.
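
For reference, the Schechter form used for the black line is a power law with an exponential cutoff. A minimal sketch, with arbitrary parameter values chosen only to show the shape rather than the values fitted to the HIPASS data:

```python
import numpy as np

# Schechter function: a power law of slope alpha at masses well below
# the characteristic mass M_star, with an exponential cutoff above it.
# phi_star, M_star, and alpha below are illustrative, not fitted values.

def schechter(M, phi_star=1e-2, M_star=1e12, alpha=-1.3):
    x = M / M_star
    return (phi_star / M_star) * x**alpha * np.exp(-x)

# Counts rise toward low mass along the power law, then plummet past
# the knee at M_star:
print(schechter(1e10), schechter(1e12), schechter(1e14))
```

The predicted halo mass function, by contrast, keeps rising as an unbroken power law past the knee, which is exactly where the disagreement in the figure sets in.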

Finally, the faint-end slope is about right. That’s amazing. The problem we’ve struggled with for decades is that the observed slope is too flat. Here a steep slope just falls out. It agrees with ΛCDM down to the lowest mass bin. If there is a missing satellite-type problem here, it is at lower masses than we probe.

That sounds great, and it is. But before we get too excited, I hope you noticed that the velocity function from the same survey is flat like the luminosity function. So why is the halo mass function steep?

When we fit rotation curves, we impose various priors. That’s statistics talk for a way of keeping parameters within reasonable bounds. For example, we have a pretty good idea of what the mass-to-light ratio of a stellar population should be. We can therefore impose as a prior that the fit return something within the bounds of reason.

One of the priors we imposed on the rotation curve fits was that they be consistent with the stellar mass-halo mass relation. Abundance matching is now part and parcel of ΛCDM, so it made sense to apply it as a prior. The total mass of a dark matter halo is an entirely notional quantity; rotation curves (and other tracers) pretty much never extend far enough to measure this. So abundance matching is great for imposing sense on a parameter that is otherwise ill-constrained. In this case, it means that what is driving the slope of the halo mass function is a prior that builds in the right slope. That’s not wrong, but neither is it an independent test. So while the observationally constrained halo mass function is consistent with the predictions of ΛCDM, we have not corroborated the prediction with independent data. What we really need at low mass is some way to constrain the total mass of small galaxies out to much larger radii than currently available. That will keep us busy for some time to come.

Two fields divided by a common interest

Britain and America are two nations divided by a common language.

attributed to George Bernard Shaw

Physics and Astronomy are two fields divided by a common interest in how the universe works. There is a considerable amount of overlap between some sub-fields of these subjects, and practically none at all in others. The aims and goals are often in common, but the methods, assumptions, history, and culture are quite distinct. This leads to considerable confusion, as with the English language – scientists with different backgrounds sometimes use the same words to mean rather different things.

A few terms that are commonly used to describe scientists who work on the subjects that I do include astronomer, astrophysicist, and cosmologist. I could be described as any of these. But I also know lots of scientists to whom these words could be applied, but for whom they would mean something rather different.

A common question I get is “What’s the difference between an astronomer and an astrophysicist?” This is easy to answer from my experience as a long-distance commuter. If I get on a plane, and the person next to me is chatty and asks what I do, if I feel like chatting, I am an astronomer. If I don’t, I’m an astrophysicist. The first answer starts a conversation, the second shuts it down.

Flippant as that anecdote is, it is excruciatingly accurate – both for how people react (commuting between Cleveland and Baltimore for a dozen years provided lots of examples), and for what the difference is: practically none. If I try to offer a more accurate definition, then I am sure to fail to provide a complete answer, as I don’t think there is one. But to make the attempt:

Astronomy is the science of observing the sky, encompassing all elements required to do so. That includes practical matters like the technology of telescopes and their instruments across all wavelengths of the electromagnetic spectrum, and theoretical matters that allow us to interpret what we see up there: what’s a star? a nebula? a galaxy? How does the light emitted by these objects get to us? How do we count photons accurately and interpret what they mean?

Astrophysics is the science of how things in the sky work. What makes a star shine? [Nuclear reactions]. What produces a nebular spectrum? [The atomic physics of incredibly low density interstellar plasma.] What makes a spiral galaxy rotate? [Gravity! Gravity plus, well, you know, something. Or, if you read this blog, you know that we don’t really know.] So astrophysics is the physics of the objects astronomy discovers in the sky. This is a rather broad remit, and covers lots of physics.

With this definition, astrophysics is a subset of astronomy – such a large and essential subset that the terms can and often are used interchangeably. These definitions are so intimately intertwined that the distinction is not obvious even for those of us who publish in the learned journals of the American Astronomical Society: the Astronomical Journal (AJ) and the Astrophysical Journal (ApJ). I am often hard-pressed to distinguish between them, but to attempt it in brief, the AJ is where you publish a paper that says “we observed these objects” and the ApJ is where you write “here is a model to explain these objects.” The opportunity for overlap is obvious: a paper that says “observations of these objects test/refute/corroborate this theory” could appear in either. Nevertheless, there was clearly a sufficient need to establish a separate journal focused on the physics of how things in the sky worked to launch the Astrophysical Journal in 1895 to complement the older Astronomical Journal (dating from 1849).

Cosmology is the study of the entire universe. As a science, it is the subset of astrophysics that encompasses observations that measure the universe as a physical entity: its size, age, expansion rate, and temporal evolution. Examples are sufficiently diverse that practicing scientists who call themselves cosmologists may have rather different ideas about what it encompasses, or whether it even counts as astrophysics in the way defined above.

Indeed, more generally, cosmology is where science, philosophy, and religion collide. People have always asked the big questions – we want to understand the world in which we find ourselves, our place in it, our relation to it, and to its Maker in the religious sense – and we have always made up stories to fill in the gaping void of our ignorance. Stories that become the stuff of myth and legend until they are unquestionable aspects of a misplaced faith that we understand all of this. The science of cosmology is far from immune to myth making, and oftentimes philosophical imperatives have overwhelmed observational facts. The lengthy persistence of SCDM in the absence of any credible evidence that Ωm = 1 is a recent example. Another that comes and goes is the desire for a Phoenix universe – one that expands, recollapses, and is then reborn for another cycle of expansion and contraction that repeats ad infinitum. This is appealing for philosophical reasons – the universe isn’t just some bizarre one-off – but there’s precious little that we know (or perhaps can know) to suggest it is a reality.

battlestar_galactica-last-supper
This has all happened before, and will all happen again.

Nevertheless, genuine and enormous empirical progress has been made. It is stunning what we know now that we didn’t a century ago. It has only been 90 years since Hubble established that there are galaxies external to the Milky Way. Prior to that, the prevailing cosmology consisted of a single island universe – the Milky Way – that tapered off into an indefinite, empty void. Until Hubble established otherwise, it was widely (though not universally) thought that the spiral nebulae were some kind of gas clouds within the Milky Way. Instead, the universe is filled with millions and billions of galaxies comparable in stature to the Milky Way.

We have sometimes let our progress blind us to the gaping holes that remain in our knowledge. Some of our more imaginative and less grounded colleagues take some of our more fanciful stories to be established fact, which sometimes just means that the problem is old and familiar, hence boring even if still unsolved. They race ahead to create new stories about entities like multiverses. To me, multiverses are manifestly metaphysical: great fun for late night bull sessions, but not a legitimate branch of physics.

So cosmology encompasses a lot. It can mean very different things to different people, and not all of it is scientific. I am not about to touch on the world-views of popular religions, all of which have some flavor of cosmology. There is controversy enough about these definitions among practicing scientists.

I started as a physicist. I earned an SB in physics from MIT in 1985, and went on to the physics (not the astrophysics) department of Princeton for grad school. I had elected to study physics because I had a burning curiosity about how the world works. It was not specific to astronomy as defined above. Indeed, astronomy seemed to me at the time to be but one of many curiosities, and not necessarily the main one.

There was no clear department of astronomy at MIT. Some people who practiced astrophysics were in the physics department; others in Earth, Atmospheric, and Planetary Science, still others in Mathematics. At the recommendation of my academic advisor Michael Feld, I wound up doing a senior thesis with George W. Clark, a high energy astrophysicist who mostly worked on cosmic rays and X-ray satellites. There was a large high energy astrophysics group at MIT who studied X-ray sources and the physics that produced them – things like neutron stars, black holes, supernova remnants, and the intracluster medium of clusters of galaxies – celestial objects with sufficiently extreme energies to make X-rays. The X-ray group needed to do optical follow-up (OK, there’s an X-ray source at this location on the sky. What’s there?) so they had joined the MDM Observatory. I had expressed a vague interest in orbital dynamics, and Clark had become interested in the structure of elliptical galaxies, motivated by the elegant orbital structures described by Martin Schwarzschild. The astrophysics group did a lot of work on instrumentation, so we had access to a new-fangled CCD. These made (and continue to make) much more sensitive detectors than photographic plates.

Empowered by this then-new technology, we embarked on a campaign to image elliptical galaxies with the MDM 1.3 m telescope. The initial goal was to search for axial twists as the predicted consequence of triaxial structure – Schwarzschild had shown that elliptical galaxies need not be oblate or prolate, but could have three distinct characteristic lengths along their principal axes. What we noticed instead with the sensitive CCD was a wealth of new features in the low surface brightness outskirts of these galaxies. Most elliptical galaxies just fade smoothly into obscurity, but every fourth or fifth case displayed distinct shells and ripples – features that were otherwise hard to spot and had only recently been highlighted by Malin & Carter.

Arp227_crop
A modern picture (courtesy of Pierre-Alain Duc) of the shell galaxy Arp 227 (NGC 474). Quantifying the surface brightness profiles of the shells in order to constrain theories for their origin became the subject of my senior thesis. I found that they were most consistent with stars on highly elliptical orbits, as expected from the shredded remnants of a cannibalized galaxy. Observations like this contributed to a sea change in the thinking about galaxies as isolated island universes that never interacted to the modern hierarchical view in which galaxy mergers are ubiquitous.

At the time I was doing this work, I was of course reading up on galaxies in general, and came across Mike Disney’s arguments as to how low surface brightness galaxies could be ubiquitous and yet missed by many surveys. This resonated with my new observing experience. Look hard enough, and you would find something new that had never before been seen. This proved to be true, and remains true to this day.

I went on only two observing runs my senior year. The weather was bad for the first one, clearing only the last night during which I collected all the useful data. The second run came too late to contribute to my thesis. But I was enchanted by the observatory as a remote laboratory, perched in the solitude of the rugged mountains, themselves alone in an empty desert of subtly magnificent beauty. And it got dark at night. You could actually see the stars. More stars than can be imagined by those confined to the light pollution of a city.

It hadn’t occurred to me to apply to an astronomy graduate program. I continued on to Princeton, where I was assigned to work in the atomic physics lab of Will Happer. There I mostly measured the efficiency of various buffer gases in moderating spin exchange between sodium and xenon. This resulted in my first published paper.

In retrospect, this is kinda cool. As an alkali, the atomic structure of sodium is basically that of a noble gas with a spare electron it’s eager to give away in a chemical reaction. Xenon is a noble gas, chemically inert as it already has nicely complete atomic shells; it wants neither to give nor receive electrons from other elements. Put the two together in a vapor, and they can form weak van der Waals molecules in which they share the unwanted valence electron like a hot potato. The nifty thing is that one can spin-polarize the electron by optical pumping with a laser. As it happens, the wave function of the electron has a lot of overlap with the nucleus of the xenon (one of the allowed states has no angular momentum). Thanks to this overlap, the spin polarization imparted to the electron can be transferred to the xenon nucleus. In this way, it is possible to create large amounts of spin-polarized xenon nuclei. This greatly enhances the signal of MRI, and has found an application in medical imaging: a patient can breathe in a chemically inert [SAFE], spin polarized noble gas, making visible all the little passageways of the lungs that are otherwise invisible to an MRI. I contributed very little to making this possible, but it is probably the closest I’ll ever come to doing anything practical.

The same technology could, in principle, be applied to make dark matter detection experiments phenomenally more sensitive to spin-dependent interactions. Giant tanks of xenon have already become one of the leading ways to search for WIMP dark matter, gobbling up a significant fraction of the world supply of this rare noble gas. Spin polarizing the xenon on the scales of tons rather than grams is a considerable engineering challenge.

Now, in that last sentence, I lapsed into a bit of physics arrogance. We understand the process. Making it work is “just” a matter of engineering. In general, there is a lot of hard work involved in that “just,” and a lot of times it is a practical impossibility. That’s probably the case here, as the polarization decays away quickly – much more quickly than one could purify and pump tons of the stuff into a vat maintained at a temperature near absolute zero.

At the time, I did not appreciate the meaning of what I was doing. I did not like working in Happer’s lab. The windowless confines, kept dark but for the sickly orange glow of a sodium D laser, were not a positive environment to be in day after day after day. More importantly, the science did not call to my heart. I began to dream of a remote lab on a scenic mountain top.

I also found the culture in the physics department at Princeton to be toxic. Nothing mattered but to be smarter than the next guy (and it was practically all guys). There was no agreed measure for this, and for the most part people weren’t so brazen as to compare test scores. So the thing to do was Be Arrogant. Everybody walked around like they were too frickin’ smart to be bothered to talk to anyone else, or even see them under their upturned noses. It was weird – everybody there was smart, but no human could possibly be as smart as these people thought they were. Well, not everybody, of course – Jim Peebles is impossibly intelligent, sane, and even nice (perhaps he is an alien, or at least a Canadian) – but for most of Princeton arrogance was a defining characteristic that seeped unpleasantly into every interaction.

It was, in considerable part, arrogance that drove me away from physics. I was appalled by it. One of the best displays was put on by David Gross in a colloquium that marked the take-over of theoretical physics by string theory. The dude was talking confidently in bold positivist terms about predictions that were twenty orders of magnitude in energy beyond any conceivable experimental test. That, to me, wasn’t physics.

More than thirty years on, I can take cold comfort that my youthful intuition was correct. String theory has conspicuously failed to provide the vaunted “theory of everything” that was promised. Instead, we have vague “landscapes” of 10⁵⁰⁰ possible theories. We just want one. 10⁵⁰⁰ is not progress. It’s getting hopelessly lost. That’s what happens when brilliant ideologues are encouraged to wander about in their hyperactive imaginations without experimental guidance. You don’t get physics, you get metaphysics. If you think that sounds harsh, note that Gross himself takes exactly this issue with multiverses, saying the notion “smells of angels” and worrying that a generation of physicists will be misled down a garden path – exactly the way he misled a generation with string theory.

So I left Princeton, and switched to a field where progress could be made. I chose to go to the University of Michigan, because I knew it had access to the MDM telescopes (one of the M’s stood for Michigan, the other MIT, with the D for Dartmouth) and because I was getting married. My wife is an historian, and we needed a university that was good in both our fields.

When I got to Michigan, I was ready to do research. I wanted to do more on shell galaxies, and low surface brightness galaxies in general. I had had enough coursework, I reckoned; I was ready to DO science. So I was somewhat taken aback that they wanted me to do two more years of graduate coursework in astronomy.

Some of the physics arrogance had inevitably been incorporated into my outlook. To a physicist, all other fields are trivial. They are just particular realizations of some subset of physics. Chemistry is just applied atomic physics. Biology barely even counts as science, and those parts that do could be derived from physics, in principle. As mere subsets of physics, any other field can and will be picked up trivially.

After two years of graduate coursework in astronomy, I had the epiphany that the field was not trivial. There were excellent reasons, both practical and historical, why it was a separate field. I had been wrong to presume otherwise.

Modern physicists are not afflicted by this epiphany. That bad attitude I was guilty of persists and is remarkably widespread. I am frequently confronted by young physicists eager to mansplain my own field to me, who casually assume that I am ignorant of subjects that I wrote papers on before they started reading the literature, and who equate a disagreement with their interpretation on any subject with ignorance on my part. This is one place the fields diverge enormously. In physics, if it appears in a textbook, it must be true. In astronomy, we recognize that we’ve been wrong about the universe so many times, we’ve learned to be tolerant of interpretations that initially sound absurd. Today’s absurdity may be tomorrow’s obvious fact. Physicists don’t share this history, and often fail to distinguish interpretation from fact, much less cope with the possibility that a single set of facts may admit multiple interpretations.

Cosmology has often been a leader in being wrong, and consequently enjoyed a shady reputation in both physics and astronomy for much of the 20th century. When I started on the faculty at the University of Maryland in 1998, there was no graduate course in the subject. This seemed to me to be an obvious gap to fill, so I developed one. Some of the senior astronomy faculty expressed concern as to whether this could be a rigorous 3 credit graduate course, and sent a neutral representative to discuss the issue with me. He was satisfied. As would be any cosmologist – I was teaching ΛCDM before most other cosmologists had admitted it was a thing.

At that time, 1998, my wife was also a new faculty member at John Carroll University. They held a welcome picnic, which I attended as the spouse. So I strike up a conversation with another random spouse who is also standing around looking similarly out of place. Ask him what he does. “I’m a physicist.” Ah! common ground – what do you work on? “Cosmology and dark matter.” I was flabbergasted. How did I not know this person? It was Glenn Starkman, and this was my first indication that sometime in the preceding decade, cosmology had become an acceptable field in physics and not a suspect curiosity best left to woolly-minded astronomers.

This was my first clue that there were two entirely separate groups of professional scientists who self-identified as cosmologists. One from the astronomy tradition, one from physics. These groups use the same words to mean the same things – sometimes. There is a common language. But like British English and American English, sometimes different things are meant by the same words.

“Dark matter” is a good example. When I say dark matter, I mean the vast diversity of observational evidence for a discrepancy between measurable probes of gravity (orbital speeds, gravitational lensing, equilibrium hydrostatic temperatures, etc.) and what is predicted by the gravity of the observed baryonic material – the stars and gas we can see. When a physicist says “dark matter,” he seems usually to mean the vast array of theoretical hypotheses for what new particle the dark matter might be.

To give a recent example, a colleague who is a world-renowned expert on dark matter, and an observational astronomer in a physics department dominated by particle cosmologists, noted that their chairperson had advocated a particular hiring plan because “we have no one who works on dark matter.” This came across as incredibly disrespectful, which it is. But it is also simply clueless. It took some talking to work through, but what we think he meant was that they had no one who worked on laboratory experiments to detect dark matter. That’s a valid thing to do, which astronomers don’t deny. But it is a severely limited way to think about it.

To date, the evidence for dark matter is 100% astronomical in nature. That’s all of it. Despite enormous effort and progress, laboratory experiments provide 0%. Zero point zero zero zero. And before some fool points to the cosmic microwave background, that is not a laboratory experiment. It is astronomy as defined above: information gleaned from observation of the sky. That it is done with photons from the mm and microwave part of the spectrum instead of the optical part of the spectrum doesn’t make it fundamentally different: it is still an observation of the sky.

And yet, apparently the observational work that my colleague did was unappreciated by his own department head, who I know to fancy himself an expert on the subject. The existence of a complementary expert in his own department never registered with him. Even though, as chair, he would be responsible for reviewing the contributions of the faculty in his department on an annual basis.

To many physicists we astronomers are simply invisible. What could we possibly teach them about cosmology or dark matter? That we’ve been doing it for a lot longer is irrelevant. Only what they [re]invent themselves is valid, because astronomy is a subservient subfield populated by people who weren’t smart enough to become particle physicists. Because particle physicists are the smartest people in the world. Just ask one. He’ll tell you.

To give just one personal example of many: a few years ago, after I had published a paper in the premiere physics journal, I had a particle physics colleague ask, in apparent sincerity, “Are you an astrophysicist?” I managed to refrain from shouting YES YOU CLUELESS DUNCE! Only been doing astrophysics for my entire career!

As near as I can work out, his erroneous definition of astrophysicist involved having a Ph.D. in physics. That’s a good basis to start learning astrophysics, but it doesn’t actually qualify. Kris Davidson noted a similar sociology among his particle physics colleagues: “They simply declare themselves to be astrophysicists.” Well, I can tell you – having made that same mistake personally – it ain’t that simple. I’m pleased that so many physicists are finally figuring out what I did in the 1980s, and welcome their interest in astrophysics and cosmology. But they need to actually learn the subject, not just assume they’ll pick it up in a snap without actually doing so.

 

Hypothesis testing with gas rich galaxies


This Thanksgiving, I’d like to highlight something positive. Recently, Bob Sanders wrote a paper pointing out that gas rich galaxies are strong tests of MOND. The usual fit parameter, the stellar mass-to-light ratio, is effectively negligible when gas dominates. The MOND prediction follows straight from the gas distribution, for which there is no equivalent freedom. We understand the 21 cm spin-flip transition well enough to relate observed flux directly to gas mass.

In any human endeavor, there are inevitably unsung heroes who carry enormous amounts of water but seem to get no credit for it. Sanders is one of those heroes when it comes to the missing mass problem. He was there at the beginning, and has a valuable perspective on how we got to where we are. I highly recommend his books, The Dark Matter Problem: A Historical Perspective and Deconstructing Cosmology.

In bright spiral galaxies, stars are usually 80% or so of the mass, gas only 20% or less. But in many dwarf galaxies, the mass ratio is reversed. These are often low surface brightness and challenging to observe. But it is a worthwhile endeavor, as their rotation curves are predicted by MOND with extraordinarily little freedom.

Though gas rich galaxies do indeed provide an excellent test of MOND, nothing in astronomy is perfectly clean. The distance to each galaxy is an irreducible need-to-know parameter: we do not measure the gas mass directly, but rather the flux of the 21 cm line. The gas mass scales with flux and the square of the distance (see equation 7E7), so to get the gas mass right, we must first get the distance right. We also need to know the inclination of a galaxy as projected on the sky in order to get the rotation velocity we’re fitting right, as the observed line of sight Doppler velocity is only sin(i) of the full, in-plane rotation speed. The 1/sin(i) correction becomes increasingly sensitive to errors as i approaches zero (face-on galaxies).
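To make those two scalings concrete, here is a minimal sketch in Python, with illustrative numbers (the flux value is hypothetical): the standard optically thin relation between 21 cm flux, distance, and gas mass, and the 1/sin(i) deprojection of the observed velocity.

```python
import math

def hi_mass(flux_jy_kms, distance_mpc):
    """Gas mass in solar masses from the integrated 21 cm flux,
    via the standard optically thin relation:
    M_HI = 2.356e5 * (D/Mpc)^2 * (S / Jy km/s)."""
    return 2.356e5 * distance_mpc**2 * flux_jy_kms

def deproject(v_los_kms, inclination_deg):
    """In-plane rotation speed from the line-of-sight velocity.
    Diverges as i -> 0 (face-on), which is why inclination errors hurt."""
    return v_los_kms / math.sin(math.radians(inclination_deg))

# Because M_HI goes as D^2, a 5% distance error becomes a ~10% mass error:
m1 = hi_mass(10.0, 4.04)         # hypothetical flux of 10 Jy km/s
m2 = hi_mass(10.0, 4.04 * 1.05)  # same flux, distance off by 5%
```

Since the mass goes as the square of the distance, a small distance error is doubled in the gas mass; similarly, the fractional velocity error from an inclination error grows as i approaches face-on.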

The mass-to-light ratio is a physical fit parameter that tells us something meaningful about the amount of stellar mass that produces the observed light. In contrast, for our purposes here, distance and inclination are “nuisance” parameters. These nuisance parameters can be, and generally are, measured independently from mass modeling. However, these measurements have their own uncertainties, so one has to be careful about taking the measured values as gospel. One of the powerful aspects of Bayesian analysis is the ability to account for these uncertainties: the distance is allowed to differ a bit from the measured value, so long as it is not too far off, as quantified by the measurement uncertainties. This is what current graduate student Pengfei Li did in Li et al. (2018). The constraints on MOND are so strong in gas rich galaxies that often the nuisance parameters cannot be ignored, even when they’re well measured.
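In code, “a bit off, but not too far off” is just a Gaussian prior term in the log-posterior. A minimal sketch (not the actual analysis code of Li et al.; the default numbers are illustrative):

```python
def log_prior_nuisance(D, inc,
                       D_obs=4.04, D_err=0.08,     # Mpc (illustrative)
                       inc_obs=66.0, inc_err=5.0):  # degrees (illustrative)
    """Gaussian priors on the nuisance parameters: the fit may pull the
    distance D and inclination inc away from their measured values, but
    pays a chi-square penalty that grows with the offset in sigma."""
    return (-0.5 * ((D - D_obs) / D_err) ** 2
            - 0.5 * ((inc - inc_obs) / inc_err) ** 2)
```

The total log-posterior is this term plus the usual −χ²/2 from the rotation curve itself, so the sampler only shifts a nuisance parameter when the data gain more than the prior penalty costs.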

To illustrate what I’m talking about, let’s look at one famous example, DDO 154. This galaxy is over 90% gas. The stars (pictured above) just don’t matter much. If the distance and inclination are known, the MOND prediction for the rotation curve follows directly. Here is an example of a MOND fit from a recent paper:

DDO154_MOND_180805695
The MOND fit to DDO 154 from Ren et al. (2018). The black points are the rotation curve data, the green line is the Newtonian expectation for the baryons, and the red line is their MOND fit.

This is terrible! The MOND fit – essentially a parameter-free prediction – misses all of the data. MOND is falsified. If one is inclined to hate MOND, as many seem to be, then one stops here. No need to think further.

If one is familiar with the ups and downs in the history of astronomy, one might not be so quick to dismiss it. Indeed, one might notice that the shape of the MOND prediction closely tracks the shape of the data. There’s just a little difference in scale. That’s kind of amazing for a theory that is wrong, especially when it is amplifying the green line to predict the red one: it needn’t have come anywhere close.

Here is the fit to the same galaxy using the same data [already] published in Li et al.:

DDO154_RAR_Li2018
The MOND fit to DDO 154 from Li et al. (2018) using the same data as above, as tabulated in SPARC.

Now we have a good fit, using the same data! How can this be so?

I have not checked what Ren et al. did to obtain their MOND fits, but having done this exercise myself many times, I recognize the slight offset they find as a typical consequence of holding the nuisance parameters fixed. What if the measured distance is a little off?

Distance estimates to DDO 154 in the literature range from 3.02 Mpc to 6.17 Mpc. The formally most accurate distance measurement is 4.04 ± 0.08 Mpc. In the fit shown here, we obtained 3.87 ± 0.16 Mpc. The error bars on these distances overlap, so they are the same number, to measurement accuracy. These data do not falsify MOND. They demonstrate that it is sensitive enough to tell the difference between 3.8 and 4.1 Mpc.

One will never notice this from a dark matter fit. Ren et al. also make fits with self-interacting dark matter (SIDM). The nifty thing about SIDM is that it makes quasi-constant density cores in dark matter halos. Halos of this form are not predicted by “ordinary” cold dark matter (CDM), but often give better fits than either MOND or the NFW halos of dark matter-only CDM simulations. For this galaxy, Ren et al. obtain the following SIDM fit.

DDO154_SIDM_180805695
The SIDM fit to DDO 154 from Ren et al.

This is a great fit. Goes right through the data. That makes it better, right?

Not necessarily. In addition to the mass-to-light ratio (and the nuisance parameters of distance and inclination), dark matter halo fits have [at least] two additional free parameters to describe the dark matter halo, such as its mass and core radius. These parameters are highly degenerate – one can obtain equally good fits for a range of mass-to-light ratios and core radii: one makes up for what the other misses. Parameter degeneracy of this sort is usually a sign that there is too much freedom in the model. In this case, the data are adequately described by one parameter (the MOND fit M*/L, not counting the nuisances in common), so using three (M*/L, Mhalo, Rcore) is just an exercise in fitting a French curve. There is ample freedom to fit the data. As a consequence, you’ll never notice that one of the nuisance parameters might be a tiny bit off.

In other words, you can fool a dark matter fit, but not MOND. Erwin de Blok and I demonstrated this 20 years ago. A common myth at that time was that “MOND is guaranteed to fit rotation curves.” This seemed patently absurd to me, given how it works: once you stipulate the distribution of baryons, the rotation curve follows from a simple formula. If the two don’t match, they don’t match. There is no guarantee that it’ll work. It cannot be forced.

As an illustration, Erwin and I tried to trick it. We took two galaxies that are identical in the Tully-Fisher plane (NGC 2403 and UGC 128) and swapped their mass distribution and rotation curve. These galaxies have the same total mass and the same flat velocity in the outer part of the rotation curve, but the detailed distribution of their baryons differs. If MOND can be fooled, this closely matched pair ought to do the trick. It does not.

NGC2403UGC128trickMOND
An attempt to fit MOND to a hybrid galaxy with the rotation curve of NGC 2403 and the baryon distribution of UGC 128. The mass-to-light ratio is driven to unphysical values (6 in solar units), but an acceptable fit is not obtained.

Our failure to trick MOND should not surprise anyone who bothers to look at the math involved. There is a one-to-one relation between the distribution of the baryons and the resulting rotation curve. If there is a mismatch between them, a fit cannot be obtained.
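That one-to-one relation can be written down in a few lines. Given the Newtonian acceleration g_N computed from the observed baryons alone, MOND fixes the total acceleration through an interpolation function; the sketch below uses the functional form from the radial acceleration relation papers (McGaugh, Lelli & Schombert 2016), with Milgrom’s constant a0 ≈ 1.2×10⁻¹⁰ m/s². Once g_N is specified, there is no freedom left to absorb a mismatch.

```python
import math

A0 = 1.2e-10  # Milgrom's acceleration scale in m/s^2

def g_obs(g_newton):
    """Predicted total acceleration from the Newtonian baryonic one,
    using the radial-acceleration-relation interpolation function:
    g = g_N / (1 - exp(-sqrt(g_N / a0)))."""
    return g_newton / (1.0 - math.exp(-math.sqrt(g_newton / A0)))

def v_circ(r_m, g_newton):
    """Predicted circular speed (m/s) at radius r (m): v^2 = g * r."""
    return math.sqrt(g_obs(g_newton) * r_m)
```

In the high-acceleration limit this reduces to Newton (g → g_N); deep in the MOND regime it gives g → sqrt(g_N·a0), which is where flat rotation curves and the baryonic Tully-Fisher relation come from.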

We also attempted to play this same trick on dark matter. The standard dark matter halo fitting function at the time was the pseudo-isothermal halo, which has a constant density core. It is very similar to the halos of SIDM and to the cored dark matter halos produced by baryonic feedback in some simulations. Indeed, that is the point of those efforts: they are trying to capture the success of cored dark matter halos in fitting rotation curve data.
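For reference, the pseudo-isothermal halo’s contribution to the rotation curve has a simple closed form. A sketch, with units chosen so that G is in kpc (km/s)²/M☉:

```python
import math

G = 4.30092e-6  # Newton's constant in kpc (km/s)^2 / M_sun

def v_iso(r_kpc, rho0, r_core):
    """Rotation speed (km/s) of a pseudo-isothermal halo with central
    density rho0 (M_sun/kpc^3) and core radius r_core (kpc):
    v^2(r) = 4*pi*G*rho0*r_core^2 * [1 - (r_core/r)*arctan(r/r_core)].
    Solid-body (v ~ r) inside the core, asymptotically flat outside."""
    x = r_kpc / r_core
    v_inf_sq = 4.0 * math.pi * G * rho0 * r_core**2
    return math.sqrt(v_inf_sq * (1.0 - math.atan(x) / x))
```

The degeneracy described in the text is visible in the formula: at large radii the speed depends on rho0 and r_core only through the combination rho0·r_core², so one parameter can be traded against the other (and against M*/L in the inner parts) with little change to the fit.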

NGC2403UGC128trickDM
A fit to the hybrid galaxy with a cored (pseudo-isothermal) dark matter halo. A satisfactory fit is readily obtained.

Dark matter halos with a quasi-constant density core do indeed provide good fits to rotation curves. Too good. They are easily fooled, because they have too many degrees of freedom. They will fit pretty much any plausible data that you throw at them. This is why the SIDM fit to DDO 154 failed to flag distance as a potential nuisance. It can’t. You could double (or halve) the distance and still find a good fit.

This is why parameter degeneracy is bad. You get lost in parameter space. Once lost there, it becomes impossible to distinguish between successful, physically meaningful fits and fitting epicycles.

Astronomical data are always subject to improvement. For example, the THINGS project obtained excellent data for a sample of nearby galaxies. I made MOND fits to all the THINGS (and other) data for the MOND review Famaey & McGaugh (2012). Here’s the residual diagram, which has been on my web page for many years:

rcresid_mondfits
Residuals of MOND fits from Famaey & McGaugh (2012).

These are, by and large, good fits. The residuals have a well defined peak centered on zero. DDO 154 was one of the THINGS galaxies; let’s see what happens if we use those data.

DDO154mond_i66
The rotation curve of DDO 154 from THINGS (points with error bars). The Newtonian expectation for stars is the green line; the gas is the blue line. The red line is the MOND prediction. Note that the gas greatly outweighs the stars beyond 1.5 kpc; the stellar mass-to-light ratio has extremely little leverage in this MOND fit.

The first thing one is likely to notice is that the THINGS data are much better resolved than the previous generation used above. The first thing I noticed was that THINGS had assumed a distance of 4.3 Mpc. This was prior to the measurement of 4.04, so let’s just start over from there. That gives the MOND prediction shown above.

And it is a prediction. I haven’t adjusted any parameters yet. The mass-to-light ratio is set to the mean I expect for a star forming stellar population, 0.5 in solar units in the Spitzer 3.6 micron band. D=4.04 Mpc and i=66 as tabulated by THINGS. The result is pretty good considering that no parameters have been harmed in the making of this plot. Nevertheless, MOND overshoots a bit at large radii.

Constraining the inclinations for gas rich dwarf galaxies like DDO 154 is a bit of a nightmare. Literature values range from 20 to 70 degrees. Seriously. THINGS itself allows the inclination to vary with radius; 66 is just a typical value. Looking at the fit Pengfei obtained, i=61. Let’s try that.

DDO154mond_i61
MOND fit to the THINGS data for DDO 154 with the inclination adjusted to the value found by Li et al. (2018).

The fit is now satisfactory. One tweak to the inclination, and we’re done. This tweak isn’t even a fit to these data; it was adopted from Pengfei’s fit to the above data. This tweak to the inclination is comfortably within any plausible assessment of the uncertainty in this quantity. The change in sin(i) corresponds to a mere 4% in velocity. I could probably do a tiny bit better with further adjustment – I have left both the distance and the mass-to-light ratio fixed – but that would be a meaningless exercise in statistical masturbation. The result just falls out: no muss, no fuss.
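The 4% figure is just the ratio of the two deprojection factors, a quick sanity check:

```python
import math

# v is proportional to 1/sin(i), so changing i from 66 to 61 degrees
# rescales the whole rotation curve by sin(66)/sin(61):
ratio = math.sin(math.radians(66.0)) / math.sin(math.radians(61.0))
# ratio is about 1.045: a ~4% velocity shift for a 5 degree inclination tweak
```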

Hence the point Bob Sanders makes. Given the distribution of gas, the rotation curve follows. And it works, over and over and over, within the bounds of the uncertainties on the nuisance parameters.

One cannot do the same exercise with dark matter. It has ample ability to fit rotation curve data, once those are provided, but zero power to predict it. If all had been well with ΛCDM, the rotation curves of these galaxies would look like NFW halos. Or any number of other permutations that have been discussed over the years. In contrast, MOND makes one unique prediction (that was not at all anticipated in dark matter), and that’s what the data do. Out of the huge parameter space of plausible outcomes from the messy hierarchical formation of galaxies in ΛCDM, Nature picks the one that looks exactly like MOND.

star_trek_tv_spock_3_copy_-_h_2018
This outcome is illogical.

It is a bad sign for a theory when it can only survive by mimicking its alternative. This is the case here: ΛCDM must imitate MOND. There are now many papers asserting that it can do just this, but none of those were written before the data were provided. Indeed, I consider it to be problematic that clever people can come up with ways to imitate MOND with dark matter. What couldn’t it imitate? If the data had all looked like technicolor space donkeys, we could probably find a way to make that so as well.

Cosmologists will rush to say “microwave background!” I have some sympathy for that, because I do not know how to explain the microwave background in a MOND-like theory. At least I don’t pretend to, even though I have had more predictive success there than their entire community. But that would be a much longer post.

For now, note that the situation is even worse for dark matter than I have so far made it sound. In many dwarf galaxies, the rotation velocity exceeds that attributable to the baryons (with Newton alone) at practically all radii. By a lot. DDO 154 is a very dark matter dominated galaxy. The baryons should have squat to say about the dynamics. And yet, all you need to know to predict the dynamics is the baryon distribution. The baryonic tail wags the dark matter dog.

But wait, it gets better! If you look closely at the data, you will note a kink at about 1 kpc, another at 2, and yet another around 5 kpc. These kinks are apparent in both the rotation curve and the gas distribution. This is an example of Sancisi’s Law: “For any feature in the luminosity profile there is a corresponding feature in the rotation curve and vice versa.” This is a general rule, as Sancisi observed, but it makes no sense when the dark matter dominates. The features in the baryon distribution should not be reflected in the rotation curve.

The observed baryons orbit in a disk with nearly circular orbits confined to the same plane. The dark matter moves on eccentric orbits oriented every which way to provide pressure support to a quasi-spherical halo. The baryons and the dark matter occupy very different regions of phase space, the six dimensional volume of position and momentum. The two are not strongly coupled, communicating only by the weak force of gravity in the standard CDM paradigm.

One of the first lessons of galaxy dynamics is that galaxy disks are subject to a variety of instabilities that grow bars and spiral arms. These are driven by disk self-gravity. The same features do not appear in elliptical galaxies because they are pressure supported, 3D blobs. They don’t have disks so they don’t have disk self-gravity, much less the features that lead to the bumps and wiggles observed in rotation curves.

Elliptical galaxies are a good visual analog for what dark matter halos are believed to be like. The orbits of dark matter particles are unable to sustain features like those seen in baryonic disks. They are featureless for the same reasons as elliptical galaxies. They don’t have disks. A rotation curve dominated by a spherical dark matter halo should bear no trace of the features that are seen in the disk. And yet they’re there, often enough for Sancisi to have remarked on it as a general rule.

It gets worse still. One of the original motivations for invoking dark matter was to stabilize galactic disks: a purely Newtonian disk of stars is not a stable configuration, yet the universe is chock full of long-lived spiral galaxies. The cure was to place them in dark matter halos.

The problem for dwarfs is that they have too much dark matter. The halo stabilizes disks by suppressing the formation of structures that stem from disk self-gravity. But you need some disk self-gravity to have the observed features. That can be tuned to work in bright spirals, but it fails in dwarfs because the halo is too massive. As a practical matter, there is no disk self-gravity in dwarfs – it is all halo, all the time. And yet, we do see such features. Not as strong as in big, bright spirals, but definitely present. Whenever someone tries to analyze this aspect of the problem, they inevitably come up with a requirement for more disk self-gravity in the form of unphysically high stellar mass-to-light ratios (something I predicted would happen). In contrast, this is entirely natural in MOND (see, e.g., Brada & Milgrom 1999 and Tiret & Combes 2008), where it is all disk self-gravity since there is no dark matter halo.

The net upshot of all this is that it doesn’t suffice to mimic the radial acceleration relation as many simulations now claim to do. That was not a natural part of CDM to begin with, but perhaps it can be done with smooth model galaxies. In most cases, such models lack the resolution to see the features seen in DDO 154 (and in NGC 1560 and in IC 2574, etc.) If they attain such resolution, they better not show such features, as that would violate some basic considerations. But then they wouldn’t be able to describe this aspect of the data.

Simulators by and large seem to remain sanguine that this will all work out. Perhaps I have become too cynical, but I recall hearing that 20 years ago. And 15. And ten… basically, they’ve always assured me that it will work out even though it never has. Maybe tomorrow will be different. Or would that be the definition of insanity?