People seem to like to do retrospectives at year’s end. I take a longer view, but the end of 2020 seems like a fitting time to do that. Below is the text of a paper I wrote in 1995 with collaborators at the Kapteyn Institute of the University of Groningen. The last edit date is from December of that year, so this text (in plain TeX, not LaTeX!) is now a quarter century old. I am just going to cut & paste it as-was; I even managed to recover the original figures and translate them into something web-friendly (postscript to jpeg). This is exactly how it was.

This was my first attempt to express in the scientific literature my concerns for the viability of the dark matter paradigm, and my puzzlement that the only theory to get any genuine predictions right was MOND. It was the hardest admission in my career that this could be even a remote possibility. Nevertheless, intellectual honesty demanded that I report it. To fail to do so would be an act of reality denial antithetical to the foundational principles of science.

It was never published. There were three referees. Initially, one was positive, one was negative, and one insisted that rotation curves weren’t flat. There was one iteration; this is the resubmitted version in which the concerns of the second referee were addressed to his apparent satisfaction by making the third figure a lot more complicated. The third referee persisted in claiming that none of this was valid because rotation curves weren’t flat. Seems like he had a problem with something beyond the scope of this paper, but the net result was rejection.

One valid concern that ran through the refereeing process from all sides was “what about everything else?” This is a good question that couldn’t fit into a short letter like this. Thanks to the support of Vera Rubin and a Carnegie Fellowship, I spent the next couple of years looking into everything else. The results were published in 1998 in a series of three long papers: one on dark matter, one on MOND, and one making detailed fits.

This had started from a very different place intellectually with my efforts to write a paper on galaxy formation that would have been similar to contemporaneous papers like Dalcanton, Spergel, & Summers and Mo, Mao, & White. This would have followed from my thesis and from work with Houjun Mo, who was an office mate when we were postdocs at the IoA in Cambridge. (The ideas discussed in Mo, McGaugh, & Bothun have been reborn recently in the galaxy formation literature under the moniker of “assembly bias.”) But I had realized by then that my ideas – and those in the papers cited – were wrong. So I didn’t write a paper that I knew to be wrong. I wrote this one instead.

Nothing substantive has changed since. Reading it afresh, I’m amazed how many of the arguments over the past quarter century were anticipated here. As a scientific community, we are stuck in a rut, and seem to prefer to spin our wheels and dig ourselves in deeper rather than consider the plain if difficult path out.

Testing hypotheses of dark matter and alternative gravity with low surface density galaxies

The missing mass problem remains one of the most vexing in astrophysics. Observations clearly indicate either the presence of a tremendous amount of as yet unidentified dark matter1,2, or the need to modify the law of gravity3-7. These hypotheses make vastly different predictions as a function of density. Observations of the rotation curves of galaxies of much lower surface brightness than previously studied therefore provide a powerful test for discriminating between them. The dark matter hypothesis requires a surprisingly strong relation between the surface brightness and mass to light ratio8, placing stringent constraints on theories of galaxy formation and evolution. Alternatively, the observed behaviour is predicted4 by one of the hypothesised alterations of gravity known as modified Newtonian dynamics3,5 (MOND).

Spiral galaxies are observed to have asymptotically flat [i.e., V(R) ~ constant for large R] rotation curves that extend well beyond their optical edges. This trend continues for as far (many, sometimes > 10 galaxy scale lengths) as can be probed by gaseous tracers1,2 or by the orbits of satellite galaxies9. Outside a galaxy’s optical radius, the gravitational acceleration is aN = GM/R^2 = V^2/R, so one expects V(R) ~ R^-1/2. This Keplerian behaviour is not observed in galaxies.
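As a quick illustration (mine, not the paper's; the enclosed mass below is an assumed, illustrative value), the expected Keplerian decline is easy to sketch: with M fixed beyond the optical edge, quadrupling R should halve V.

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M = 1e11 * 1.989e30    # assumed enclosed mass: 1e11 solar masses, in kg
KPC = 3.086e19         # metres per kiloparsec

def v_keplerian(r_kpc):
    """Circular speed (km/s) around a fixed enclosed mass: V = (GM/R)^(1/2)."""
    return math.sqrt(G * M / (r_kpc * KPC)) / 1e3

# With M fixed beyond the optical edge, quadrupling R should halve V --
# the decline that is NOT observed in real rotation curves.
print(round(v_keplerian(10)), round(v_keplerian(40)))
```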

One approach to this problem is to increase M in the outer parts of galaxies in order to provide the extra gravitational acceleration necessary to keep the rotation curves flat. Indeed, this is the only option within the framework of Newtonian gravity since both V and R are directly measured. The additional mass must be invisible, dominant, and extend well beyond the optical edge of the galaxies.

Postulating the existence of this large amount of dark matter which reveals itself only by its gravitational effects is a radical hypothesis. Yet the kinematic data force it upon us, so much so that the existence of dark matter is generally accepted. Enormous effort has gone into attempting to theoretically predict its nature and experimentally verify its existence, but to date there exists no convincing detection of any hypothesised dark matter candidate, and many plausible candidates have been ruled out10.

Another possible solution is to alter the fundamental equation aN = GM/R2. Our faith in this simple equation is very well founded on extensive experimental tests of Newtonian gravity. Since it is so fundamental, altering it is an even more radical hypothesis than invoking the existence of large amounts of dark matter of completely unknown constituent components. However, a radical solution is required either way, so both possibilities must be considered and tested.

A phenomenological theory specifically introduced to address the problem of the flat rotation curves is MOND3. It has no other motivation and so far there is no firm physical basis for the theory. It provides no satisfactory cosmology, having yet to be reconciled with General Relativity. However, with the introduction of one new fundamental constant (an acceleration a0), it is empirically quite successful in fitting galaxy rotation curves11-14. It hypothesises that for accelerations a < a0 = 1.2 x 10^-10 m s^-2, the effective acceleration is given by aeff = (aN a0)^1/2. This simple prescription works well with essentially only one free parameter per galaxy, the stellar mass to light ratio, which is subject to independent constraint by stellar evolution theory. More importantly, MOND makes predictions which are distinct and testable. One specific prediction4 is that the asymptotic (flat) value of the rotation velocity, Va, is Va = (G M a0)^1/4. Note that Va does not depend on R, but only on M in the regime of small accelerations (a < a0).

In contrast, Newtonian gravity depends on both M and R. Replacing R with a mass surface density variable S = M(R)/R^2, the Newtonian prediction becomes M S ~ Va^4, which contrasts with the MOND prediction M ~ Va^4. These relations are the theoretical basis in each case for the observed luminosity-linewidth relation L ~ Va^4 (better known as the Tully-Fisher15 relation. Note that the observed value of the exponent is bandpass dependent, but does obtain the theoretical value of 4 in the near infrared16, which is considered the best indicator of the stellar mass. The systematic variation with bandpass is a very small effect compared to the difference between the two gravitational theories, and must be attributed to dust or stars under either theory.) To transform from theory to observation one requires the mass to light ratio Y: Y = M/L = S/s, where s is the surface brightness. Note that in the purely Newtonian case, M and L are very different functions of R, so Y is itself a strong function of R. We define Y to be the mass to light ratio within the optical radius R*, as this is the only radius which can be measured by observation. The global mass to light ratio would be very different (since M ~ R for R > R*, the total masses of dark haloes are not measurable), but the particular choice of definition does not affect the relevant functional dependences, which are all that matter. The predictions become Y^2 s L ~ Va^4 for Newtonian gravity8,16 and Y L ~ Va^4 for MOND4.
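To make the contrast concrete, here is a small sketch (my addition, not part of the paper; the mass is an assumed illustrative value) of the two velocity predictions. The Newtonian speed at a given radius depends on both M and R, while the MOND asymptote depends on M alone and scales as M ~ Va^4.

```python
import math

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
A0 = 1.2e-10      # MOND acceleration scale, m s^-2
MSUN = 1.989e30   # kg
KPC = 3.086e19    # m

def v_newton(m_sun, r_kpc):
    """Newtonian circular speed at radius R: V = (GM/R)^(1/2), in km/s."""
    return math.sqrt(G * m_sun * MSUN / (r_kpc * KPC)) / 1e3

def v_mond(m_sun):
    """MOND asymptotic speed: Va = (G M a0)^(1/4), independent of R, in km/s."""
    return (G * m_sun * MSUN * A0) ** 0.25 / 1e3

M = 5e10  # illustrative baryonic mass in solar masses (assumption)

# Spreading the same mass over different radii (different S = M/R^2)
# changes the Newtonian speed but not the MOND asymptote.
print(round(v_newton(M, 10)), round(v_newton(M, 30)), round(v_mond(M)))
# M ~ Va^4: sixteen times the mass doubles the MOND velocity.
print(round(v_mond(16 * M) / v_mond(M), 3))
```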

The only sensible17 null hypothesis that can be constructed is that the mass to light ratio be roughly constant from galaxy to galaxy. Clearly distinct predictions thus emerge if galaxies of different surface brightnesses s are examined. In the Newtonian case there should be a family of parallel Tully-Fisher relations for each surface brightness. In the case of MOND, all galaxies should follow the same Tully-Fisher relation irrespective of surface brightness.

Recently it has been shown that extreme objects such as low surface brightness galaxies8,18 (those with central surface brightnesses fainter than s0 = 23 B mag. arcsec^-2, corresponding to 40 L⊙ pc^-2) obey the same Tully-Fisher relation as do the high surface brightness galaxies (typically with s0 = 21.65 B mag. arcsec^-2, or 140 L⊙ pc^-2) which originally15 defined it. Fig. 1 shows the luminosity-linewidth plane for galaxies ranging over a factor of 40 in surface brightness. Regardless of surface brightness, galaxies fall on the same Tully-Fisher relation.

The luminosity-linewidth (Tully-Fisher) relation for spiral galaxies over a large range in surface brightness. The B-band relation is shown; the same result is obtained in all bands8,18. Absolute magnitudes are measured from apparent magnitudes assuming H0 = 75 km/s/Mpc. Rotation velocities Va are directly proportional to observed 21 cm linewidths (measured as the full width at 20% of maximum) W20 corrected for inclination [by a factor sin^-1(i)]. Open symbols are an independent sample which defines42 the Tully-Fisher relation (solid line). The dotted lines show the expected shift of the Tully-Fisher relation for each step in surface brightness away from the canonical value s0 = 21.65 if the mass to light ratio remains constant. Low surface brightness galaxies are plotted as solid symbols, binned by surface brightness: red triangles: 22 < s0 < 23; green squares: 23 < s0 < 24; blue circles: s0 > 24. One galaxy with two independent measurements is connected by a line; this gives an indication of the typical uncertainty, which is sufficient to explain nearly all the scatter. Contrary to the clear expectation of a readily detectable shift as indicated by the dotted lines, galaxies fall on the same Tully-Fisher relation regardless of surface brightness, as predicted by MOND.

MOND predicts this behaviour in spite of the very different surface densities of low surface brightness galaxies. Understanding this observational fact in the framework of standard Newtonian gravity requires a subtle relation8 between surface brightness and the mass to light ratio to keep the product s Y^2 constant. If we retain normal gravity and the dark matter hypothesis, this result is unavoidable, and the null hypothesis of similar mass to light ratios (which, together with an assumed constancy of surface brightness, is usually invoked to explain the Tully-Fisher relation) is strongly rejected. Instead, the current epoch surface brightness is tightly correlated with the properties of the dark matter halo, placing strict constraints on models of galaxy formation and evolution.

The mass to light ratios computed for both cases are shown as a function of surface brightness in Fig. 2. Fig. 2 is based solely on galaxies with full rotation curves19,20 and surface photometry, so Va and R* are directly measured. The correlation in the Newtonian case is very clear (Fig. 2a), confirming our inference8 from the Tully-Fisher relation. Such tight correlations are very rare in extragalactic astronomy, and the Y-s relation is probably the real cause of an inferred Y-L relation. The latter is much weaker because surface brightness and luminosity are only weakly correlated21-24.

The mass to light ratio Y (in M⊙/L⊙) determined with (a) Newtonian dynamics and (b) MOND, plotted as a function of central surface brightness. The mass determination for Newtonian dynamics is M = V^2 R*/G and for MOND is M = V^4/(G a0). We have adopted as a consistent definition of the optical radius R* four scale lengths of the exponential optical disc. This is where discs tend to have edges, and contains essentially all the light21,22. The definition of R* makes a tremendous difference to the absolute value of the mass to light ratio in the Newtonian case, but makes no difference at all to the functional relation, which will be present regardless of the precise definition. These mass measurements are more sensitive to the inclination corrections than is the Tully-Fisher relation since there is a sin^-2(i) term in the Newtonian case and one of sin^-4(i) for MOND. It is thus very important that the inclination be accurately measured, and we have retained only galaxies which have adequate inclination determinations; error bars are plotted for a nominal uncertainty of 6 degrees. The sensitivity to inclination manifests itself as an increase in the scatter from (a) to (b). The derived mass is also very sensitive to the measured value of the asymptotic velocity itself, so we have used only those galaxies for which this can be taken directly from a full rotation curve19,20,42. We do not employ profile widths; the velocity measurements here are independent of those in Fig. 1. In both cases, we have subtracted off the known atomic gas mass19,20,42, so what remains is essentially only the stars and any dark matter that may exist. A very strong correlation (regression coefficient = 0.85) is apparent in (a): this is the conspiracy between mass to light ratio and surface brightness. The slope is consistent (within the errors) with the theoretical expectation s ~ Y^-2 derived from the Tully-Fisher relation8.
At the highest surface brightnesses, the mass to light ratio is similar to that expected for the stellar population. At the faintest surface brightnesses, it has increased by a factor of nearly ten, indicating increasing dark matter domination within the optical disc as surface brightness decreases or a very systematic change in the stellar population, or both. In (b), the mass to light ratio scatters about a constant value of 2. This mean value, and the lack of a trend, is what is expected for stellar populations17,21-24.
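As a sketch of the two mass estimators quoted in the caption (my addition; the galaxy numbers below are invented for illustration, not data from the figure):

```python
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
A0 = 1.2e-10      # MOND acceleration scale, m s^-2
MSUN = 1.989e30   # kg
KPC = 3.086e19    # m

def mass_newton(v_kms, rstar_kpc):
    """Newtonian mass enclosed within R*: M = V^2 R* / G, in solar masses."""
    v = v_kms * 1e3
    return v ** 2 * rstar_kpc * KPC / (G * MSUN)

def mass_mond(v_kms):
    """MOND mass from the asymptotic velocity: M = V^4 / (G a0), solar masses."""
    v = v_kms * 1e3
    return v ** 4 / (G * A0 * MSUN)

# Invented values for a low surface brightness disc (assumptions):
V, RSTAR = 100.0, 12.0     # asymptotic velocity (km/s), optical radius (kpc)
L, MGAS = 5e9, 1.5e9       # luminosity (L_sun) and atomic gas mass (M_sun)

# Subtract the gas, as in Fig. 2, so Y refers to stars plus any dark matter.
Y_newton = (mass_newton(V, RSTAR) - MGAS) / L
Y_mond = (mass_mond(V) - MGAS) / L
print(round(Y_newton, 1), round(Y_mond, 1))
```

With these made-up numbers the Newtonian estimate demands several times more mass than the light plausibly supplies, while the MOND value lands in the range expected for a stellar population, which is the qualitative split the real measurements in Fig. 2 exhibit.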

The Y-s relation is not predicted by any dark matter theory25,26. It can not be purely an effect of the stellar mass to light ratio, since no other stellar population indicator such as color21-24 or metallicity27,28 is so tightly correlated with surface brightness. In principle it could be an effect of the stellar mass fraction, as the gas mass to light ratio follows a relation very similar to that of total mass to light ratio20. We correct for this in Fig. 2 by subtracting the known atomic gas mass so that Y refers only to the stars and any dark matter. We do not correct for molecular gas, as this has never been detected in low surface brightness galaxies to rather sensitive limits30 so the total mass of such gas is unimportant if current estimates31 of the variation of the CO to H2 conversion factor with metallicity are correct. These corrections have no discernible effect at all in Fig. 2 because the dark mass is totally dominant. It is thus very hard to see how any evolutionary effect in the luminous matter can be relevant.

In the case of MOND, the mass to light ratio directly reflects that of the stellar population once the correction for gas mass fraction is made. There is no trend of Y* with surface brightness (Fig. 2b), a more natural result and one which is consistent with our studies of the stellar populations of low surface brightness galaxies21-23. These suggest that Y* should be roughly constant or slightly declining as surface brightness decreases, with much scatter. The mean value Y* = 2 is also expected from stellar evolutionary theory17, which always gives a number 0 < Y* < 10 and usually gives 0.5 < Y* < 3 for disk galaxies. This is particularly striking since Y* is the only free parameter allowed to MOND, and the observed mean is very close to that directly observed29 in the Milky Way (1.7 ± 0.5 M/L).

The essence of the problem is illustrated by Fig. 3, which shows the rotation curves of two galaxies of essentially the same luminosity but vastly different surface brightnesses. Though the asymptotic velocities are the same (as required by the Tully-Fisher relation), the rotation curve of the low surface brightness galaxy rises less quickly than that of the high surface brightness galaxy as expected if the mass is distributed like the light. Indeed, the ratio of surface brightnesses is correct to explain the ratio of velocities at small radii if both galaxies have similar mass to light ratios. However, if this continues to be the case as R increases, the low surface brightness galaxy should reach a lower asymptotic velocity simply because R* must be larger for the same L. That this does not occur is the problem, and poses very significant systematic constraints on the dark matter distribution.

The rotation curves of two galaxies, one of high surface brightness11 (NGC 2403; open circles) and one of low surface brightness19 (UGC 128; filled circles). The two galaxies have very nearly the same asymptotic velocity, and hence luminosity, as required by the Tully-Fisher relation. However, they have central surface brightnesses which differ by a factor of 13. The lines give the contributions to the rotation curves of the various components. Green: luminous disk. Blue: dark matter halo. Red: luminous disk (stars and gas) with MOND. Solid lines refer to NGC 2403 and dotted lines to UGC 128. The fits for NGC 2403 are taken from ref. 11, for which the stars have Y* = 1.5 M⊙/L⊙. For UGC 128, no specific fit is made: the blue and green dotted lines are simply the NGC 2403 fits scaled by the ratio of disk scale lengths h. This provides a remarkably good description of the UGC 128 rotation curve and illustrates one possible manifestation of the fine tuning problem: if disks have similar Y, the halo parameters ρ0 and R0 must scale with the disk parameters s0 and h while conspiring to keep the product ρ0 R0^2 fixed at any given luminosity. Note also that the halo of NGC 2403 gives an adequate fit to the rotation curve of UGC 128. This is another possible manifestation of the fine tuning problem: all galaxies of the same luminosity have the same halo, with Y systematically varying with s0 so that Y* goes to zero as s0 goes to zero. Neither of these is exactly correct because the contribution of the gas can not be set to zero as is mathematically possible with the stars. This causes the resulting fine tuning problems to be even more complex, involving more parameters. Alternatively, the green dotted line is the rotation curve expected by MOND for a galaxy with the observed luminous mass distribution of UGC 128.

Satisfying the Tully-Fisher relation has led to some expectation that haloes all have the same density structure. This, the simplest possibility, is immediately ruled out. In order to obtain L ~ Va^4 ~ M S, one might suppose that the mass surface density S is constant from galaxy to galaxy, irrespective of the luminous surface density s. This achieves the correct asymptotic velocity Va, but requires that the mass distribution, and hence the complete rotation curve, be essentially identical for all galaxies of the same luminosity. This is obviously not the case (Fig. 3), as the rotation curves of lower surface brightness galaxies rise much more gradually than those of higher surface brightness galaxies (also a prediction4 of MOND). It might be possible to have approximately constant density haloes if the highest surface brightness disks are maximal and the lowest minimal in their contribution to the inner parts of the rotation curves, but this then requires fine tuning of Y*, with this systematically decreasing with surface brightness.

The expected form of the halo mass distribution depends on the dominant form of dark matter. This could exist in three general categories: baryonic (e.g., MACHOs), hot (e.g., neutrinos), and cold exotic particles (e.g., WIMPs). The first two make no specific predictions. Baryonic dark matter candidates are most subject to direct detection, and most plausible candidates have been ruled out10 with remaining suggestions of necessity sounding increasingly contrived32. Hot dark matter is not relevant to the present problem. Even if neutrinos have a small mass, their velocities considerably exceed the escape velocities of the haloes of low mass galaxies where the problem is most severe. Cosmological simulations involving exotic cold dark matter33,34 have advanced to the point where predictions are being made about the density structure of haloes. These take the form33,34 ρ(R) = ρH/[R(R+RH)^b] where ρH characterises the halo density and RH its radius, with b ~ 2 to 3. The characteristic density depends on the mean density of the universe at the collapse epoch, and is generally expected to be greater for lower mass galaxies since these collapse first in such scenarios. This goes in the opposite sense of the observations, which show that low mass and low surface brightness galaxies are less, not more, dense. The observed behaviour is actually expected in scenarios which do not smooth on a particular mass scale and hence allow galaxies of the same mass to collapse at a variety of epochs25, but in this case the Tully-Fisher relation should not be universal. Worse, note that at small R < RH, ρ(R) ~ R^-1. It has already been noted32,35 that such a steep interior density distribution is completely inconsistent with the few (4) analysed observations of dwarf galaxies. Our data19,20 confirm and considerably extend this conclusion for 24 low surface brightness galaxies over a wide range in luminosity.
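The inner cusp of the simulated profile can be checked numerically; this sketch (mine, in arbitrary units with assumed parameter values) measures the logarithmic slope d(ln ρ)/d(ln R) of the profile quoted above.

```python
import math

def rho_cdm(r, rho_h=1.0, r_h=10.0, b=2.0):
    """Simulated-halo profile rho(R) = rho_H / [R (R + R_H)^b], arbitrary units."""
    return rho_h / (r * (r + r_h) ** b)

def log_slope(f, r, dr=1e-6):
    """Numerical logarithmic slope d(ln f)/d(ln R) at radius r."""
    return (math.log(f(r + dr)) - math.log(f(r - dr))) / \
           (math.log(r + dr) - math.log(r - dr))

# Well inside R_H the slope approaches -1 (the cusp that conflicts with
# the dwarf and low surface brightness data); far outside, -(1 + b).
print(round(log_slope(rho_cdm, 0.01), 2), round(log_slope(rho_cdm, 1e4), 2))
```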

The failure of the predicted exotic cold dark matter density distribution either rules out this form of dark matter, indicates some failing in the simulations (in spite of wide-spread consensus), or requires some mechanism to redistribute the mass. Feedback from star formation is usually invoked for the last of these, but this can not work for two reasons. First, an objection in principle: a small mass of stars and gas must have a dramatic impact on the distribution of the dominant dark mass, with which they can only interact gravitationally. More mass redistribution is required in less luminous galaxies since they start out denser but end up more diffuse; of course progressively less baryonic material is available to bring this about as luminosity declines. Second, an empirical objection: in this scenario, galaxies explode and gas is lost. However, progressively fainter and lower surface brightness galaxies, which need to suffer more severe explosions, are actually very gas rich.

Observationally, dark matter haloes are inferred to have density distributions1,2,11 with constant density cores, ρ(R) = ρ0/[1 + (R/R0)^g]. Here, ρ0 is the core density and R0 is the core size, with g ~ 2 being required to produce flat rotation curves. For g = 2, the rotation curve resulting from this mass distribution is V(R) = Va [1 - (R0/R) tan^-1(R/R0)]^1/2, where the asymptotic velocity is Va = (4πG ρ0 R0^2)^1/2. To satisfy the Tully-Fisher relation, Va, and hence the product ρ0 R0^2, must be the same for all galaxies of the same luminosity. To decrease the rate of rise of the rotation curves as surface brightness decreases, R0 must increase. Together, these two require a fine tuning conspiracy to keep the product ρ0 R0^2 constant while R0 varies with the surface brightness at a given luminosity. Luminosity and surface brightness themselves are only weakly correlated, so there exists a wide range in one parameter at any fixed value of the other. Thus the structural properties of the invisible dark matter halo dictate those of the luminous disk, or vice versa. So s and L give the essential information about the mass distribution without recourse to kinematic information.
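The fine tuning described here can be seen by evaluating the cored-halo rotation curve directly. In this sketch (my addition, with illustrative values), Va is held fixed, i.e. the product ρ0 R0^2 is the same, while the core radius R0 varies:

```python
import math

def v_rot(r_kpc, r0_kpc, va=200.0):
    """Rotation curve of a constant-density-core halo (g = 2), in km/s:
    V(R) = Va [1 - (R0/R) arctan(R/R0)]^(1/2)."""
    x = r_kpc / r0_kpc
    return va * math.sqrt(1.0 - math.atan(x) / x)

# Fixed Va = 200 km/s (i.e. fixed rho0 * R0^2), two assumed core sizes:
for r0 in (2.0, 8.0):
    print([round(v_rot(r, r0)) for r in (2.0, 5.0, 20.0, 100.0)])
# Both curves climb to the same asymptote, but the larger core rises more
# slowly -- the behaviour required of lower surface brightness galaxies.
```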

A strict s-ρ0-R0 relation is rigorously obeyed only if the haloes are spherical and dominate throughout. This is probably a good approximation for low surface brightness galaxies but may not be for those of the highest surface brightness. However, a significant non-halo contribution can at best replace one fine tuning problem with another (e.g., surface brightness being strongly correlated with the stellar population mass to light ratio instead of halo core density) and generally causes additional conspiracies.

There are two perspectives for interpreting these relations, with the preferred perspective depending strongly on the philosophical attitude one has towards empirical and theoretical knowledge. One view is that these are real relations which galaxies and their haloes obey. As such, they provide a positive link between models of galaxy formation and evolution and reality.

The other view is that this list of fine tuning requirements makes it rather unattractive to maintain the dark matter hypothesis. MOND provides an empirically more natural explanation for these observations. In addition to the Tully-Fisher relation, MOND correctly predicts the systematics of the shapes of the rotation curves of low surface brightness galaxies19,20 and fits the specific case of UGC 128 (Fig. 3). Low surface brightness galaxies were stipulated4 to be a stringent test of the theory because they should be well into the regime a < a0. This is now observed to be true, and to the limit of observational accuracy the predictions of MOND are confirmed. The critical acceleration scale a0 is apparently universal, so there is a single force law acting in galactic disks for which MOND provides the correct description. The cause of this could be either a particular dark matter distribution36 or a real modification of gravity. The former is difficult to arrange, and a single force law strongly supports the latter hypothesis since in principle the dark matter could have any number of distributions which would give rise to a variety of effective force laws. Even if MOND is not correct, it is essential to understand why it so closely describes the observations. Though the data can not exclude Newtonian dynamics, with a working empirical alternative (really an extension) at hand, we would not hesitate to reject as incomplete any less venerable hypothesis.

Nevertheless, MOND itself remains incomplete as a theory, being more of a Kepler’s Law for galaxies. It provides only an empirical description of kinematic data. While successful for disk galaxies, it was thought to fail in clusters of galaxies37. Recently it has been recognized that there exist two missing mass problems in galaxy clusters, one of which is now solved38: most of the luminous matter is in X-ray gas, not galaxies. This vastly improves the consistency of MOND with cluster dynamics39. The problem with the theory remains a reconciliation with Relativity and thereby standard cosmology (which is itself in considerable difficulty38,40), and a lack of any prediction about gravitational lensing41. These are theoretical problems which need to be more widely addressed in light of MOND’s empirical success.

ACKNOWLEDGEMENTS. We thank R. Sanders and M. Milgrom for clarifying aspects of a theory with which we were previously unfamiliar. SSM is grateful to the Kapteyn Astronomical Institute for enormous hospitality during visits when much of this work was done. [Note added in 2020: this work was supported by a cooperative grant funded by the EU and would no longer be possible thanks to Brexit.]


  1. Rubin, V. C. Science 220, 1339-1344 (1983).
  2. Sancisi, R. & van Albada, T. S. in Dark Matter in the Universe, IAU Symp. No. 117, (eds. Knapp, G. & Kormendy, J.) 67-80 (Reidel, Dordrecht, 1987).
  3. Milgrom, M. Astrophys. J. 270, 365-370 (1983).
  4. Milgrom, M. Astrophys. J. 270, 371-383 (1983).
  5. Bekenstein, J. D., & Milgrom, M. Astrophys. J. 286, 7-14 (1984).
  6. Mannheim, P. D., & Kazanas, D. Astrophys. J. 342, 635-651 (1989).
  7. Sanders, R. H. Astron. Astrophys. Rev. 2, 1-28 (1990).
  8. Zwaan, M.A., van der Hulst, J. M., de Blok, W. J. G. & McGaugh, S. S. Mon. Not. R. astr. Soc., 273, L35-L38, (1995).
  9. Zaritsky, D. & White, S. D. M. Astrophys. J. 435, 599-610 (1994).
  10. Carr, B. Ann. Rev. Astr. Astrophys., 32, 531-590 (1994).
  11. Begeman, K. G., Broeils, A. H. & Sanders, R. H. Mon. Not. R. astr. Soc. 249, 523-537 (1991).
  12. Kent, S. M. Astr. J. 93, 816-832 (1987).
  13. Milgrom, M. Astrophys. J. 333, 689-693 (1988).
  14. Milgrom, M. & Braun, E. Astrophys. J. 334, 130-134 (1988).
  15. Tully, R. B., & Fisher, J. R. Astr. Astrophys., 54, 661-673 (1977).
  16. Aaronson, M., Huchra, J., & Mould, J. Astrophys. J. 229, 1-17 (1979).
  17. Larson, R. B. & Tinsley, B. M. Astrophys. J. 219, 48-58 (1978).
  18. Sprayberry, D., Bernstein, G. M., Impey, C. D. & Bothun, G. D. Astrophys. J. 438, 72-82 (1995).
  19. van der Hulst, J. M., Skillman, E. D., Smith, T. R., Bothun, G. D., McGaugh, S. S. & de Blok, W. J. G. Astr. J. 106, 548-559 (1993).
  20. de Blok, W. J. G., McGaugh, S. S., & van der Hulst, J. M. Mon. Not. R. astr. Soc. (submitted).
  21. McGaugh, S. S., & Bothun, G. D. Astr. J. 107, 530-542 (1994).
  22. de Blok, W. J. G., van der Hulst, J. M., & Bothun, G. D. Mon. Not. R. astr. Soc. 274, 235-259 (1995).
  23. Ronnback, J., & Bergvall, N. Astr. Astrophys., 292, 360-378 (1994).
  24. de Jong, R. S. Ph.D. thesis, University of Groningen (1995).
  25. Mo, H. J., McGaugh, S. S. & Bothun, G. D. Mon. Not. R. astr. Soc. 267, 129-140 (1994).
  26. Dalcanton, J. J., Spergel, D. N., Summers, F. J. Astrophys. J., (in press).
  27. McGaugh, S. S. Astrophys. J. 426, 135-149 (1994).
  28. Ronnback, J., & Bergvall, N. Astr. Astrophys., 302, 353-359 (1995).
  29. Kuijken, K. & Gilmore, G. Mon. Not. R. astr. Soc., 239, 605-649 (1989).
  30. Schombert, J. M., Bothun, G. D., Impey, C. D., & Mundy, L. G. Astron. J., 100, 1523-1529 (1990).
  31. Wilson, C. D. Astrophys. J. 448, L97-L100 (1995).
  32. Moore, B. Nature 370, 629-631 (1994).
  33. Navarro, J. F., Frenk, C. S., & White, S. D. M. Mon. Not. R. astr. Soc., 275, 720-728 (1995).
  34. Cole, S. & Lacey, C. Mon. Not. R. astr. Soc., in press.
  35. Flores, R. A. & Primack, J. R. Astrophys. J. 427, 1-4 (1994).
  36. Sanders, R. H., & Begeman, K. G. Mon. Not. R. astr. Soc. 266, 360-366 (1994).
  37. The, L. S., & White, S. D. M. Astron. J., 95, 1642-1651 (1988).
  38. White, S. D. M., Navarro, J. F., Evrard, A. E. & Frenk, C. S. Nature 366, 429-433 (1993).
  39. Sanders, R. H. Astron. Astrophys. 284, L31-L34 (1994).
  40. Bolte, M., & Hogan, C. J. Nature 376, 399-402 (1995).
  41. Bekenstein, J. D. & Sanders, R. H. Astrophys. J. 429, 480-490 (1994).
  42. Broeils, A. H., Ph.D. thesis, Univ. of Groningen (1992).

84 thoughts on “25 years a heretic”

  1. Very true Stacy, thank you for this New Year 2021 post! Great scientists truly follow the empirical evidence, and it is very obvious you fall into this category. I think your relentless and consistently outstanding work stands the test of time and has moved us forward like no other research in extragalactic astrophysics and cosmology! Congratulations and I raise a glass of Mumm sparkling wine to you and the future efforts on gravitational physics. It is clear from your track record that you are one of the greatest astronomers!


  2. Your intellectual honesty is inspiring, but this:
    “Nothing substantive has changed since”
    is not.

    What would it take to change the lines?


    1. I don’t know. The results reported here should have had that effect. For some they did, but not for most. The prediction of the second peak of the power spectrum of the CMB should also have that effect. Only a few paid attention; the rest of the cosmological community seems to have not merely ignored that, but made up a fictional narrative in which it didn’t even happen. Many other moments have come and passed, like the prediction of the velocity dispersions of the dwarf satellites of Andromeda (where the most problematic cases for LCDM are simply interpreted to be out of equilibrium without acknowledging that MOND can somehow predict which dwarfs LCDM will need to dismiss as being out of equilibrium) or Crater 2 (same story) or NGC1052-DF2 (people got all excited about that as a MOND-killer until we did the calculation right, then they either forgot about it or persisted in the preferred belief that it was a problem for MOND) or the recent detection of the external field.
      Most scientists who work in the field simply don’t want to hear it, so they choose not to. That’s the attitude that successful predictions are supposed to change. If we abandon that, then we abandon science itself.
      It will take a very direct shock – if it is possible at all. The second peak was such a shock, but that was quickly rationalized then forgotten. EDGES was such a shock, so people just chose to ignore it. Those data will get better, so maybe they won’t be able to ignore it in future, but they already came up with some very sad excuses (e.g., millicharged dark matter), so I expect those will just be revived as needed. The only thing I can think of offhand that they couldn’t easily explain away would be if the mass of the neutrino was measured in the lab to be much higher than allowed by cosmic structure formation limits. That should be impossible to explain away, but in the past any lame excuse has sufficed provided it gives the right answer.


      1. Are we then looking at one of those sad situations where the paradigm can’t shift until the proponents of the old paradigm die off?


        1. I fear so. Planck had reason to say “Science progresses one funeral at a time.” It may now be even worse, as the chief proponents reproduce and their progeny proliferate so that it becomes a self-sustaining social construct.


          1. It’s troubling when scientists become advocates for any one theory. I wonder what it is in academia that pushes us towards that. The grant system? The “publish or perish” logic? Tenure tracks?


            1. I think it is just human nature. We can’t help but become enamored of our own ideas. This is part of why I had such a hard time with this myself – how could this crazy theory succeed where my own fails? Of course, the whole point of the scientific method is to counteract this aspect of human nature, and this is why we put such a premium on objectivity (which seems to be a lost part of the value system) and a priori prediction. There’s no way to go back in time and fudge a genuine prediction. Now we’re in a situation where it has become accepted, even expected, that one will adjust one’s favorite theory as the data come in, and simply ignore successful predictions made outside the paradigm.
              I do not believe that science is a mere social construct, but like any human endeavor, it is subject to being shanghaied into becoming one. That’s where the issues of grants and career ambitions kick in and bite hard. People are afraid of being perceived as non-conformant and having it affect these things. It does. Not as much as they fear, but enough to chill speech.
              Then there are the cosmic blackshirts: marginal intellectual thugs who screech half-truths and antipathy at any nonconformity (see, e.g., the comment after …). Real scientists don’t want to put up with that nonsense, and shouldn’t have to.


              1. I have thought that if Imre Lakatos and Paul Feyerabend were still alive and engaging in debate about what scientific method is, that Lakatos would be fuming about what he would consider a degenerating research programme (ΛCDM) surviving for so long, while Feyerabend would be jubilant that his ‘anything goes’ approach was vindicated (see the Stanford Encyclopedia of Philosophy articles on both philosophers).


              2. That “blackshirt”, as you call him, is the same person who commented on the other article related to the statistical detection of the EFE…

                P.S. Happy New Year!


              3. On the above matter, perhaps long lives may not be the only reason for the (historically quite unique) stand-still. Having published since 2010 more than one analysis showing that the observational data conflict with the standard dark-matter models at extremely high significance, and having been on quite a few hiring and grant committees, my observation is that it may be, perhaps perplexingly, the present-day highly competitive merit system which is responsible for stopping cosmology and extragalactic physics moving ahead fundamentally: today, a successful scientist is usually not one who makes a truly deep deduction, but one with a large grant. To obtain a large grant one needs to project success into the future. This can only be done with known theories, since it is difficult to pass peer review by wanting to explore unexplored theories, or theories which are not taken as serious options by most. Young scientists with an excellent grant-acquisition track record are more likely to achieve role-model careers. Younger scientists try to achieve the same. So the system becomes self-replicating; in a sense it moves in circles where younger generations propose to solve problems that the previous generation aimed to solve, but without leaving the paradigm (a good example is the core-cusp problem). In addition, some bullying and other sociological phenomena play a role in intimidating young scientists against trying new approaches. I think Lee Smolin’s book “The Trouble with Physics” describes this situation rather well. Historically, when doing their significant research, many of the outstanding scientists we know (e.g. Kepler, Maxwell, Einstein) were neither attached to an ivy-league institution nor to competitive large grants, but they followed their personal inquisitiveness to discover great things (e.g. neither Maxwell nor Einstein worked at universities when doing their significant research).
The present-day system thus tends to reward young main stream researchers who have technical skill, but not so much the inquisitive spirit that would, by necessity, take them away from the main stream. It may therefore not be true that the paradigm will shift after funerals. To enable progress, we might need to entertain the idea of recasting our scientific system to a less award-based one allowing for a larger number of researchers similarly equipped to establish egalitarianism. Pure scientific research ought to be driven solely by the desire to learn how nature works, rather than being career- or income-oriented.


              4. “degenerating research programme (ΛCDM) surviving for so long”

                It’s not degenerating just because some pundit claims that it is. I’m really, really interested in drumming up more support for research into MOND and so on, but essentially calling one’s colleagues degenerate is not the way to do that. 😦


              5. Referring to Phillip Helbig’s comment on degeneracy below: It is the research programme which is degenerate. A scientific programme is defined as being degenerate according to objective criteria defined previously in the scientific literature. David Merritt merely applied these formally. I know Prof. Merritt, and he has no agenda and would have been just as happy to publish the opposite if the results had been such. A scientist can choose to work in the degenerate research programme. Fundamental progress is of course made through non-degenerate research programmes (independently of this being LCDM or some other programme and research field).


              6. ”David Merritt merely applied these formally. I know Prof. Merritt, and he has no agenda and would have been just as happy to publish the opposite if the results had been such.“

                I challenge everyone to read Merritt’s article which I criticize in my recent paper and/or his new book and say with a straight face that that is not the most biased and inaccurate description of anything they have ever read. OK, except for QAnon stuff, but it’s almost at that level.

                Merritt’s analysis might be correct if his understanding of the history of cosmology were correct, but that’s not the case. Of course, a detailed rebuttal would be too much for a comment box, which is why I wrote it up in a paper.

                As to whether he has an agenda, I don’t know, but it often comes across as such. But maybe he is just confused.


              7. Might it be the case that, if a scientist finds objective evidence that LCDM is falsified with more than 5 sigma confidence, some people proclaim that this scientist is biased? Might it also be that some claim to understand the history of LCDM, and that those who conclude that LCDM is not the correct model do not understand the history? At the end of the day, surely it is only the evidence as quantified with confidence levels which counts, neither opinions nor history. Thus, as a scientist it is not my opinion that the Higgs boson exists, and I may be agnostic about the history of how it came into our description of physics, but I can state that the evidence for the Higgs boson has surpassed the 5 sigma confidence threshold, such that it is customary to refer to the Higgs boson as existent. Am I thus biased for the Higgs boson?
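                [Editor’s aside: for readers unfamiliar with the “5 sigma” convention invoked above, the threshold is just the two-sided Gaussian tail probability; the short sketch below (not from any of the papers under discussion) shows the conversion.]

                ```python
                import math

                # Convert an "n sigma" significance into the two-sided Gaussian
                # tail probability: the chance of a random fluctuation at least
                # that far from the mean in either direction.
                def sigma_to_pvalue(n_sigma):
                    return math.erfc(n_sigma / math.sqrt(2.0))

                # The particle-physics "discovery" threshold discussed above:
                p5 = sigma_to_pvalue(5.0)   # roughly 6e-7, i.e. odds of ~1 in 1.7 million
                ```

                This is why a 5 sigma falsification is treated as decisive: a chance fluctuation at that level is expected about once in 1.7 million independent trials.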


              8. I completely agree that the history is irrelevant as far as the universe is concerned. Thus, the fact that Einstein originally introduced the cosmological constant for the wrong reason has no bearing whatsoever on what value it has. Similarly, it is irrelevant whether some overzealous particle physicists claimed in the 1980s that inflation firmly predicts the Einstein-de Sitter universe. It cuts both ways.

                Where it is relevant, though, is when people like Merritt tell that history wrongly in order to make outrageous claims such as that dark energy was invented to explain the supernova data (see my article for exact quotes of his claims).


            2. By the same token, it should be troubling if MOND proponents advocate only one theory. On the other hand, if that’s OK if the theory is correct, then it isn’t a bad thing in general.


            3. “I wonder what it is in academia that pushes us towards that. The grant system? The “publish or perish” logic? tenure tracks?”

              While those might play a role at some level in some places at some times, I doubt that they are the real reason. Most people who don’t work on MOND don’t work on it for reasons other than those above. Some might be interested, but don’t like being told that the previous ten years of their lives have been wasted. Even if true, that is not how to win converts. Most people who don’t work on MOND are not mindless drones trapped in a Kuhnian paradigm.

              It is so difficult to get any permanent job in academia, much less a high-level one (like many MOND proponents I know) that I think that the idea that people work on things because that’s where the money is is not the right explanation. If you want to work on something other than that which you think is important, that can be done much easier outside of academia, with a starting salary higher than almost all academic salaries, job security essentially from the beginning, and so on.


          2. “Planck had reason to say “Science progresses one funeral at a time.” It may now be even worse, as the chief proponents reproduce and their progeny proliferate so that it becomes a self-sustaining social construct.”

            On the other hand, Planck was in his early 40s and one of the most important members of the establishment when he founded quantum theory.


  3. It’s always good when I awake to a blog from Tritonstation. So it was on New Year’s Day, although I was suffering a hangover with its roots deep in 2020🥃.
    Also I was and am still trying to get my head round EFE. My problem.
    My comment is to thank you for introducing me to Prof Kroupa. In particular Vaclav Vavrycuk’s theory on dust. Thanks also to Rachel Parziale’s excellent video explaining it so well.
    Finally, to Prof Kroupa: please pass my regards on to Kostas Migkas, whose work I follow closely.
    Here’s to a new paradigm.


  4. I would like to be able to ask Lakatos and Feyerabend (and Einstein and Newton!) what they make of all this, but that’s not possible, so I hesitate to put words in their mouths. Merritt makes the case in his book that LCDM is a degenerating research program in the sense of Lakatos. I would go further – SCDM was a degenerating research program before that. LCDM is just an extension of the same failed paradigm: not genuinely new, just altered enough to seem so.
    I wrote about this near the beginning of this blog. Inflationary theorists clung to a cosmic density parameter of unity in the face of almost unanimous contrary evidence. All through the ’80s they aggressively told observers that we were stupid and just had to look harder until we got the right answer. This persisted in some corners well into the ’90s, when it finally became clear that the pipe dream of Omega_m = 1 just wasn’t going to happen. The big pseudo-revolution was to fill in the difference with Lambda, which went from something as sinful to speak of as MOND to a bedrock component of the paradigm – part of the hard core, as Lakatos would say. So, as far as that goes, it was an advance in the sense that presumably Lakatos would have approved. However, it wasn’t, really – it was just another theoretical fig leaf, a big stop sign in the sky telling us that we were barking up the wrong tree. I used those words at an Aspen meeting in 1997 prior to the Type Ia SN revolution. At the time, certain Inflationary theorists – the same ones who thought observers were stupid for not finding enough dark matter – were by then pushing Lambda hard in order to save Inflation. The original, genuine prediction of Inflation that sold it to me and the rest of the community was Omega_m = 1. That was wrong, so they were angling to save it with the weak-sauce version (Omega_tot = 1). I call it the weak-sauce version because, while true, the selling point of Inflation was that it solved the coincidence problem (then called the flatness problem). LCDM makes the coincidence problem worse. So we had to abandon the chief virtue of original Inflation in order to obtain a solution for the large scale problems that we faced at the time – in particular, that there was much more power on large scales than predicted by SCDM. The switch from SCDM to LCDM was not so much a paradigm shift as it was the opposite of that. Yes, we bought into Lambda, but it wasn’t really new. 
It was always there; it had merely been ignored as unlikable until absolutely necessary. Once that penny dropped, then all problems were considered solved, and the rest – really any oddity at all – is just menial gastrophysics too complicated for fundamentalist physicists to be bothered with understanding: the answer is Known, Khaleesi, everything else will surely work out.


    1. That’s not the way I remember it. Maybe your remarks made sense at the time, but the fact that the needed value of Λ, for whatever reasons it was introduced, predicted an accelerating universe, which has since been confirmed, is simply good science. That is the way it should be done: hypothesis to explain data, Occam’s razor says use known ideas (the cosmological constant made its appearance in the very first paper on relativistic cosmology, back in 1917, despite what some pundits claim), make a definitive prediction, that prediction is confirmed. It’s hard to think of a better textbook example of the scientific method.

      The fact that the cosmological constant is now part of the standard model shows that the hidebound defenders of the orthodoxy do change their minds when there is convincing evidence. While good observations, the fact that two independent groups got the same unexpected result, and the fact that no new physics was needed certainly helped, it probably also helped that the supernova people didn’t say that people who clung to older ideas were just ignorant blockheads greedy for grant money.

      Yes, some people went overboard with their support for inflation but, with some exceptions like Allan Sandage, who somehow got sucked in, they weren’t astronomers for the most part. Also, nature is what it is, regardless of the motivations of individual people believing this or that. The motivation for the current standard model is actually independent of inflation and based on observations.

      The fact is that observations had usually indicated Ω of about 0.3 or so (Ω_matter to some), but a value of 1 couldn’t be ruled out since the observations weren’t good enough. Many lines of evidence pointed to the current standard model (which is why it is called the concordance model) years before the supernova data provided strong evidence. However, until the mid-1990s, a value of 1 was still not ruled out. Yes, it was difficult to understand structure formation, but that was a problem only if you believed that the simulations of the time were good enough to draw firm conclusions from.

      Some particle-physics types are strong believers in inflation and tell the story from their point of view, but the loudest aren’t always right, otherwise we should all vote for Trump.


      1. My fear is that certain people are severely distorting the history of cosmology in order to make MOND look good and ΛCDM bad, whereas a more objective approach on both sides would go a long way to making real progress. That was the motivation for my piece rebutting an article of Merritt from a few years back (many claims are repeated in his recent book; more on that anon). I had two goals, in conflict only for cynics: demonstrate that many of Merritt’s arguments are wrong, and plead for a more civil debate.

        As Stacy wrote when I posted that link in another thread, read both Merritt and me and draw your own conclusions.


    1. Well, as major supporters of MOND, of course they wood. Axes can be ground on both sides of the fence, which makes it difficult to bury the hatchet. :-). (I’ve let the unintentional typo stand as it adds a pun to my mixed metaphor.)

      Some confusion arises because different people mean different things by ΛCDM. Yes, the Hubble-constant tension might be a real problem, and is not something expected in ΛCDM. But neither is it a prediction of MOND, so one could just as well say that it is evidence that MOND is wrong. Saying that MOND doesn’t have a different prediction here is damning with faint praise.


      1. MOND in fact did predict the Hubble tension. Nobody cared to do the detailed calculations though. The earlier cosmological simulations that were done in MOND (predating the Hubble tension era) predicted large density contrasts, i.e. present-day large-scale underdensities (as well as massive galaxy clusters). Observers in such an underdensity then _automatically_ observe a larger Hubble constant. The details are to be found in “The KBC void and Hubble tension contradict ΛCDM on a Gpc scale – Milgromian dynamics as a possible solution” by Haslbauer et al.
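        [Editor’s aside: the underdensity-to-Hubble-constant link above can be illustrated with standard linear perturbation theory. The sketch below is our own illustration under that approximation, not the actual Haslbauer et al. calculation: an observer inside a region of density contrast δ < 0 measures a local expansion rate enhanced by ΔH/H ≈ −(f/3)δ, with growth rate f ≈ Ω_m^0.55.]

        ```python
        # Linear-theory sketch (illustrative assumption, not the paper's full
        # calculation): an observer inside a large-scale underdensity of
        # density contrast delta < 0 measures a locally enhanced Hubble rate,
        #   delta_H / H ≈ -(f/3) * delta,  with growth rate f ≈ Omega_m**0.55.

        def local_hubble(H0_global, delta, omega_m=0.3):
            """Approximate locally measured Hubble constant for an observer
            inside a region of density contrast delta (negative for a void)."""
            f = omega_m ** 0.55            # standard growth-rate approximation
            return H0_global * (1.0 - f * delta / 3.0)

        # A ~20% underdense local volume (delta = -0.2) around a global
        # H0 of 67.4 km/s/Mpc yields a locally inferred value near 70:
        H_local = local_hubble(67.4, -0.2)
        ```

        So even a modest large-scale underdensity pushes the locally measured value toward the higher “local” determinations, which is the qualitative mechanism the comment describes.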


        1. Let’s say that they can be retroactively interpreted that way. No-one stood up at the time and said “MOND predicts tension in the Hubble constant”, perhaps even with at least a ballpark number.

          The main point, though, is that such a prediction is not unique to MOND. Many, many papers discuss observational effects of large-scale inhomogeneities and have used them to try to explain not just the Hubble-constant tension, but even to interpret the observational evidence for the cosmological constant as the result of the backreaction of such large-scale inhomogeneities.

          There might very well be something to it as far as the former goes. As to the latter, yes, it is possible, but of all the things it could do, I find it strange that it just happens to predict something which not only can be explained with 1920s cosmology, but even the values of the parameters are those which have been derived from independent lines of investigation. It’s not that anyone has a theory which actually makes a prediction; what I mean is that if large-scale inhomogeneities can trick us into believing in the cosmological constant, then it seems strange that they don’t produce something not explicable within the standard framework at all, as in principle they could produce an arbitrary magnitude–redshift relation.


        2. Dear Prof. Kroupa,
          In your comment from 4 January (to which it is no longer possible to reply) you mention that:

          “neither Maxwell nor Einstein worked at universities when doing their significant research”

          and also that:

          “To enable progress, we might need to entertain the idea of recasting our scientific system to a less award-based one allowing for a larger number of researchers similarly equipped to establish egalitarianism. Pure scientific research ought to be driven solely by the desire to learn how nature works, rather than being career- or income-oriented.”

          In my opinion, what is not 100% clear in your proposal is whether scientific research shall still remain under the control of universities (e.g. universities shall introduce more research positions with less or even no funding), or if you are actually proposing to start considering ideas from researchers outside of academia (i.e. not working at universities).
          In the latter case, seriously, could you recommend a serious peer-reviewed journal where people without academic affiliation are also allowed to at least submit a paper?


          1. Scientific research is not under the control of universities. Anyone can do research, and anyone can fund it. Much research is done outside of universities.

            By definition, a peer-reviewed journal which does not allow people without an academic affiliation to submit a paper is not a serious journal.


          2. The current scientific system, in terms of finding out how the fundamental physics of nature works, seems to be dysfunctional. How can one remedy this problem? That is, how can one improve the chances that true intellectual breakthroughs (the equivalent of Einstein’s or Maxwell’s) can be achieved, published and recognised? How can, for example, the work of some brilliant young physicist who develops a theory which is not string theory be made possible, if everyone today were to claim that only string theory is the correct approach to understand nature? My thinking is that if scientists were forced to be equal (i.e., get rid of the major chairs or directorships who control large amounts of funding and thus also determine who can have a career and which mode of thinking is allowed – see the scandals around Garching and Zurich for example; or, for example, any young researcher who would want to investigate MOND would not be able to do so in Cambridge; or, for example, W3 professors in Germany being able to bully W2 professors), then the chance of this would be larger. Universities are good places to do so, since this is also where the students are, who need to learn the current state of knowledge but should do so in an environment which allows them to freely develop further ideas and follow their own hints without fear of being looked down upon by established, over-awarded groups and individuals. While individual intellectual misconduct will always happen, it is essential to take the power away from such individuals to stop them causing undue harm. Any new ideas still need to pass a peer-review system, since someone independent needs to check whether the work appears to be sound, so publishing in the way it is done today will be unavoidable. The journals must ensure that the refereeing process is fair and not biased, and so it is good that there are a number of journals which can be contacted if one fails.
In my experience, the journals do take submissions from people not affiliated with a research institution, but yes, there is a bias. People working from home do stand a chance of getting their work published as long as it is written in the correct scientific style, and this requires training. The text needs to be as dry as the desert but deep, and of course all deductions and steps need to follow logically from one another. Still, can a revolutionary new idea submitted by someone not affiliated with a professional institution be recognised as such? Yes, there are issues and open questions, and probably no simple single solution exists, apart perhaps from building up journal institutions which check the submitted research paper for technical correctness and are forbidden to judge on opinion. But who would fund such an institution, if it even makes sense? Perhaps seeking to present one’s ideas in front of a professional group might be a way to start the process of getting one’s ideas published when not affiliated.


            1. “any young researcher who would want to investigate MOND would not be able to do so in Cambridge“

              Whether that‘s true I don‘t know, but perhaps they should go to Oxford. James Binney, professor of physics at Oxford, who quite literally wrote the book on galactic dynamics, is a MOND supporter.

              You‘re a professor. OK, perhaps some rank higher, but still. Stacy is head of department. Bob Sanders was a professor.

              On the dust jacket of Halton Arp‘s book Quasars, Redshifts, and Controversies, in which he goes on and on about how he has been oppressed by the establishment, one can read that he worked at the Mount Wilson and Palomar Observatories for 29 years and then moved to the Max Planck Institute for Astrophysics in Garching. That‘s a better career path than 99% of all astronomers.

              To be sure, I think that he had some sort of fellowship or stipend at Garching, but the point is a) he got money to be there and b) the reason to have him there was to have an unorthodox person there. IIRC it was one of the directors who organized his coming to Garching.


              1. There are good cases, indeed. There are bright people around who do see the problems, and sometimes help is offered (see the end of my last contribution). But there is a difference between being allowed to enter an institute and sit at a desk, subject to strict rules on what one puts out, vs having full access to funding and being allowed to build up research and collaborations using the institute resources.


            2. “My thinking is that if the scientists were forced to be equal“

              One would then have the problem that someone would have to determine who is a scientist with a corresponding job and who isn‘t. One can‘t have a system where anyone can say that they are a scientist and thus someone has to pay them a salary and give them tenure.

              I take your points about bullying, which certainly does happen. (Interestingly, why was there much more media coverage of Zurich than of Garching?) But I don‘t think that that is the main reason why MOND isn‘t more accepted.

              In the end, it‘s a market of ideas, whether we are talking about science itself or the organization of academia, and the only path to success is to convince people that you are right.


              1. Well, to convince is the name of the game: but many know for a fact that dark matter exists, that MOND does not work or that string theory holds the absolute truth.


              2. To be a “scientist” one needs to go through the training, which means involved studies over many years. An important reason why the specific case of MOND has not been worked on more is precisely bullying. My own students have reported as much (and I have my own experience), and many would indeed shy away from this field. The Garching cases are protected by the very powerful Max Planck Society.


              3. To be fair, many know for a fact that dark matter doesn‘t exist, that MOND explains everything and that string theory is nonsense. 🙂

                The question is how to make a dialogue more fruitful than just stating positions.

                Of course, there are people who are pretty sure that some forms of dark matter exist, that MOND works surprisingly well, and that string theory is probably not the path to enlightenment. It‘s not just us and them.


              4. Phillip, you have hit the nail on the head: your reply is the typical one one gets in this debate, which is why this part of science is not moving forward. Your claim “To be fair, many know for a fact that dark matter doesn‘t exist, that MOND explains everything and that string theory is nonsense.” is false, because no-one I know (in terms of serious active scientists) makes such statements. I am agnostic about string theory, but do note that it has not made testable predictions, so it is unclear which role it plays in physics, even though it is mathematically very rich and interesting. MOND is currently the only existing formulation of dynamics beyond Newtonian dynamics in which we can make full-scale self-consistent calculations and simulations, and so it is being tested just as hard and as impartially as the SMoC. Despite my research group in Bonn inventing the harshest tests, MOND has not been falsified, given the data. This is the same conclusion reached independently by Stacy. We all know that MOND is not the final theory, but it is clearly showing the direction we need to explore. On the other hand, the SMoC has been falsified by independent tests, published in leading astrophysical research journals. One proper way forward would be to show where these falsifications at more than five sigma confidence are in error. This has not happened. Publications which claim to rule out MOND have, however, been shown to be in error. One needs to know the literature. On the other hand, if a physicist or astronomer claims that the existence of SMoC dark matter is well established, then this is a statement of belief beyond science, since SMoC dark matter particles have not been discovered, and they would lead to Chandrasekhar dynamical friction, which is not evident in the data. In terms of history, up until the early 2000s I had no reason to doubt the existence of SMoC dark matter.
E.g., in 1997 I wrote, as a result of my discoveries, my first paper on this topic,
                and assumed the Milky Way is embedded in the SMoC dark matter halo. However, the subsequent research showed an extremely significant mismatch between the SMoC models and many types of independent observational data spanning vastly different scales, and only a non-scientist would maintain that the SMoC is the correct theory. I had no choice but to accept the significance of the results and move onwards. If scientists do not react to five sigma evidence, then the research field is broken and the scientific method becomes dysfunctional and not worthy of funding by the taxpayer.


              5. I think that the important point is to specify exactly what you mean by the standard model when you say that it has been falsified. That might not be what most people mean by the term standard model. And variations to the standard model, or extensions, or more-realistic versions, or whatever are not necessarily epicycles. It‘s certainly not the case that reasonable extensions can fit anything. Merritt, for example, criticizes standard cosmology because it has free parameters which are determined by observation and claims that it can fit anything, but elsewhere claims that it has been ruled out because it can‘t fit the observations. He can‘t have it both ways.

                It‘s certainly the case that some scenarios have been falsified, which is why people work on others. I don‘t like the word, but falsifying a specific example does not falsify the entire paradigm.

                I don‘t think that anyone claims to fully understand galaxy formation. (If they did, there would no longer be a reason to work on it.) But that is rather different from traditional cosmology.


              6. My first statement was tongue-in-cheek; the last is more realistic.

                But it cuts both ways. I know of no cosmologist who corresponds to the caricature which Merritt paints. I think that such caricatures are what is preventing progress and a healthier debate.


            3. “Perhaps seeking to present one’s ideas in front of a professional group might be a way to start the process of getting one’s ideas published when not affiliated.”

              Anyone who wants to write a scientific paper should have read some, and can learn the style. The best way to get a journal to consider one’s submission is to submit something to it. (Yes, there are journals that don’t accept submissions from people without affiliations. If you think that that is wrong and want to do some small part to perhaps change it, or at least call attention to it, then stop publishing in Astronomy and Astrophysics and, if anyone asks why, tell them why.)

              It’s relatively easy to go to conferences, and the threshold for poster presentations isn’t that high. (By the same token, one often finds low-quality stuff among the posters.) If it isn’t a really high-profile conference, try to give a talk. It isn’t just help in publishing, though; it is also constructive criticism if you’re barking up the wrong tree.

              Fortunately, there are several good journals, and in the case of some disagreement one can go to another journal.

              A much bigger problem is that many people rely only on arXiv, which makes its own rules, is accountable to no one, and doesn’t even accept all papers from the main journals.


  5. I am an amateur fascinated by this issue and have been a follower of your blog for many years. I hope that you are willing to answer some questions about an issue not covered in your 25 year summary.
    It is my understanding that the gravitational bending of light by galaxies (lensing, etc.) is consistent with the predicted dark matter mass. On its face, this would seem to be a strong argument against MOND, since modifying the acceleration equation would not be expected to act on light photons. However, this is not the case, because I understand there are relativistic modifications of MOND that can account for the observed light bending. I have two questions: 1) Could you provide a qualitative, mechanistic explanation of how light bending is incorporated into MOND? 2) In your opinion, do these modifications follow directly from the simple idea of modifying the acceleration, or is it more of a Rube Goldberg explanation that eliminates the simplicity of the “one free parameter” model that you find so attractive?


    1. Indeed, gravitational lensing indicates a discrepancy in the same sense that dynamics does. This is inherently a relativistic phenomenon that MOND itself is mute on. Many early attempts to construct a relativistic version of MOND foundered on exactly this point – usually, there was no additional light bending, and one could legitimately worry that it was impossible. This was the great breakthrough of Bekenstein’s TeVeS, in which light bending is amplified in the same way as dynamics. The mechanism was to separate the physical and Einstein metrics, which are identical in GR. I wouldn’t go so far as to describe that as Rube Goldberg – it doesn’t add that much additional complication – but it does add some (3 parameters instead of one). But it is definitely headed in an undesirable direction.
      At this point, TeVeS has been ruled out for a variety of reasons. I still consider it an important contribution, because it did show that it was possible to do something that we thought might be impossible. I feel the same way about RelMOND. It seems unlikely to be the final word, but it does show that it is possible to do something that was thought to be impossible – in this case, fitting the acoustic power spectrum of the cosmic microwave background – and do it well.
      These are normal steps and missteps in developing a new theory. Progress is slow because relatively little effort is being made in these directions – the community of people working on this is small and lacks critical mass. Simultaneously, a tremendous amount of intellectual capital is being wasted on obviously unworkable dark matter models. No one wants to invest their time into gravitational theory (which is hard) until they exhaust all possibilities in dark matter theory, which is inexhaustible.


      1. PS – just for clarity, this post wasn’t a summary. It was a paper I wrote 25 years ago but never published in this form. The journal I wrote it for has a strict word limit, and there really wasn’t anything to say about gravitational lensing at that time.


  6. Naive questions from an undergrad student: how do you know that a galaxy is a low surface brightness one? Is this somehow verified? Could the brightness come from another luminous (or non-luminous) source? Can a galaxy be characterised as low surface brightness if most of its light is absorbed by (for example) its galactic halo? Why are most of them dwarf galaxies? How many of them do you use for this study? Do we discover more of them with new telescopes/techniques?


    1. Like so many terms in astronomy, “low surface brightness” is not well-defined. I tried to address this quantitatively in another paper from long ago – see Table 1 of that paper. This nomenclature didn’t get adopted; too soon I guess. The term “ultra diffuse galaxy” matches exactly the definition of “very low surface brightness” in that table, once you account for the difference in bandpass. In current usage, the definition of UDG also appends a criterion for size, which is redundant, since surface brightness already measures the diffuseness of the stars. Both definitions refer to the central surface brightness of an exponential fit to the observed surface brightness profile. One could also measure the surface brightness at the effective radius that contains half the total light. Other definitions are possible, but can all be related mathematically. The trick is agreeing what we mean by “low” in surface brightness. Back then, I adopted the definition that the central (peak) surface brightness was no brighter than the dark sky. That’s a practical, observational definition that is specific to the terrestrial environment. For the purposes of MOND (which I did not set out to do) you might pick a surface density of stars S such that G*S < a0, so that a low surface density galaxy is in the MOND regime. As it happens, all galaxies that fit the terrestrial definition are well into the MOND regime, which is what got me entangled in this business.
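      As a rough numerical illustration of that G*S < a0 criterion (my own sketch, not from the original discussion; the 2π convention, the constants, and the function names are my assumptions), one can convert a stellar surface density to a characteristic acceleration and compare it to Milgrom’s a0 ≈ 1.2×10⁻¹⁰ m/s²:

```python
# Sketch: classify a disk as being in the MOND (low-acceleration) regime
# by comparing its characteristic internal acceleration g ~ 2*pi*G*Sigma
# to Milgrom's acceleration constant a0. All numbers are standard values;
# the 2*pi*G*Sigma estimate is an order-of-magnitude convention.
import math

G = 6.674e-11                              # m^3 kg^-1 s^-2
A0 = 1.2e-10                               # m s^-2
MSUN_PER_PC2 = 1.989e30 / 3.086e16**2      # kg/m^2 per (Msun/pc^2)

def characteristic_acceleration(sigma_msun_pc2):
    """Order-of-magnitude internal acceleration of a thin disk of
    stellar surface density Sigma, given in Msun/pc^2."""
    return 2 * math.pi * G * sigma_msun_pc2 * MSUN_PER_PC2

def in_mond_regime(sigma_msun_pc2):
    """True when the characteristic acceleration falls below a0."""
    return characteristic_acceleration(sigma_msun_pc2) < A0

# The critical surface density a0 / (2*pi*G) comes out near 140 Msun/pc^2,
# so LSB disks (tens of Msun/pc^2 or less) are well into the MOND regime:
sigma_crit = A0 / (2 * math.pi * G) / MSUN_PER_PC2
print(round(sigma_crit))          # ~137 Msun/pc^2
print(in_mond_regime(10))         # True for a typical LSB disk
```

      The point of the sketch is only that any galaxy satisfying the terrestrial “dimmer than the dark sky” definition sits comfortably below this critical surface density.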
      A galaxy is only truly low surface brightness if its stars are spread diffusely enough to satisfy the definitions in that table. It doesn’t count if it is simply dimmed by foreground dust extinction. We can correct for that. There were a couple of dozen then-new LSBs that we used in the 1995 study. One can discover more by looking harder (the paper with the table and a companion paper describe how). Lots of that has been done since then, so the current sample is much larger. The limiting factor is obtaining quality data for them once they’re discovered.
      And finally, "dwarf" is another ill-defined term. Not all LSB galaxies are dwarfs, and how many of them are depends on the choice of definitions of the terms. There is a correlation (with a lot of intrinsic scatter) between luminosity and surface brightness such that lower surface brightness galaxies are more likely to be dwarfs – depending of course on how you define that term. Why this is so depends on how galaxies form and evolve, a topic that has been studied and muddied by many, many papers.


  7. Thanks for that.

    Every decade brings better observations. 40 years ago the Standard Model was baked, explaining virtually everything known. Now, due to astronomical observations, the Standard Model explains 5% of all stuff.

    LCDM cosmology will be even more embarrassing with the James Webb scope and other observations.


    1. 40 Years ago, the standard model had no cosmological constant, and Ω was often assumed to be 1. How did that explain everything?

      By 5%, you apparently mean baryonic matter. Observations indicate the cosmological constant, making up 70%, and about 25% dark matter (cosmological dark matter, perhaps nothing to do with flat rotation curves). That is explained by the standard model, unless by explain you mean that baryons are the only explanation for anything.

      How, specifically, do you expect the JWST to be difficult for ΛCDM?

      The bigger problem is that the standard model of 40 years ago was not like the standard model of today. Not just that the parameters were different, but the whole idea. Today, it means a model which explains essentially all data by tightly constrained parameters. Back then, it was just a fiducial model for back-of-the-envelope calculations. It perhaps fit all the data, depending on the data, but people knew that the uncertainties were large. Also, people routinely looked at a wide range of models.

      Note also that particle physicists are usually not cosmologists, and almost never astronomers.


      1. Standard model – Ignorance was bliss. I’m talking about the particle physics standard model, which explained everything known – all matter and radiation – in 1980. Dark matter was ~bricks, Omega was 1, basically a done deal.

        In the 70s and 80s particle physicists told astronomers what could exist in the sky. In the next few decades, the astronomers showed up the physicists. 95% of what astronomers found was not in the standard model of particle physics, an utter failure for physics.

        If physics was doing its job it would have predicted the nature and perhaps even the amounts of dark matter and dark energy.


        1. One really needs to distinguish between dark matter as used by astronomers, by cosmologists, by particle physicists, and by structure-formation people.

          It is not an utter failure for physics. At best, it is a failure for some speculative extensions to the standard model of particle physics.

          Note that all three of the non-particle-physics kinds of dark matter could be primordial black holes, not really in conflict with the standard model.

          Note that no theory predicts the acceleration constant in MOND.

          The idea that one has a theory of everything which predicts, well, everything is not the way science usually works. Physics can’t even predict the matter we know. Why three generations of particles? Why the masses they have? And so on.

          Should biology predict all species on Earth?


  8. Just to make sure everyone is on the same page:

    The “standard model of particle physics” (the SMoPP) is based on quantum field theory which is, to my knowledge, the most successful physics theory. It explains the properties of ordinary matter and radiation and one can, at least in principle, calculate, by starting from a few building blocks and the rules of how to connect them, the DNA of a tree etc. This is a most impressive cultural feat, but we know the SMoPP is not complete because neutrinos have mass. So there must be another theory which goes deeper.

    The “standard model of cosmology” (the SMoC) rests on 5% of the Universe being described by the SMoPP and the rest with ad-hoc additions, dark matter and dark energy, both of which are fundamentally not understood at any level. The core of the problem lies in the nature of gravitation. By using a description of gravitation constructed from Solar System data only, before galaxies were known for what they are, and by extrapolating (Newtonian/Einsteinian) this description by many orders of magnitude to galaxies and beyond, the mismatch between model and real data is “solved” by postulating a number of new auxiliary hypotheses (the Big Bang and inflation, dark matter and dark energy) for which there is no independent evidence. The SMoC has meanwhile been falsified with much more than five sigma confidence and is not the correct description, not even approximately, of the Universe.


    1. The standard model of particle physics has about 17 parameters.

      Dark matter is ad hoc only if you believe that all matter must be visible. Obviously, a theory can describe only what is known, but is not falsified by the unknown. The binomial classification system of Linnaeus is not wrong because he didn’t know about all living things.

      Many would argue that one would need an explanation if the cosmological constant were zero, as that would imply an unknown symmetry or conservation law. It’s a constant of integration; setting it to zero would be ad hoc.

      Calling the big bang an auxiliary hypothesis is really bizarre, unless you are using the term in a non-standard way. The standard meaning is that the Universe has been expanding from a very hot and very dense state.

      There are many who would not put inflation as part of the standard model.

      What would be independent evidence for dark matter and dark energy? Are astronomical observations not good enough? And where is the independent evidence for MOND?

      You can define a caricature of the standard model and say that it is ruled out at 5 sigma or whatever, but that doesn’t mean that roughly similar models are ruled out, and it certainly doesn’t mean that MOND is true.


    2. Pavel,
      You say “we know … neutrinos have mass” in much the same way that other people say “we know … there is dark matter”. The fact is that there is no experimental evidence for the existence of neutrino mass. It is a hypothesis adopted in order to “explain” neutrino oscillations. There is no other evidence for neutrino mass. I believe in neutrino mass in exactly the same way that I believe in dark matter – that is to say, not at all – and for exactly the same reasons.


      1. Robert,

        We know that neutrinos have rest mass because they oscillate between the different flavours. If neutrinos had zero rest mass they would travel at the speed of light and their proper time would be zero and hence they would not oscillate. For a long time neutrinos were thought to have zero rest mass and it was only the discovery of the ‘missing’ neutrinos from the Sun that led to the experiments that revealed neutrino oscillations. Just because we have yet to devise an experiment sensitive enough to measure the rest mass of a neutrino directly doesn’t mean that it doesn’t exist.


        1. I agree with you, but what Wilson means is that you have not measured the mass; you merely conclude that it must exist on the basis of some theory – just like dark matter. I don’t agree with Wilson, but I think that you are missing his point. Just because we have yet to devise an experiment sensitive enough to measure dark matter in the lab doesn’t mean that it doesn’t exist. :-) For that matter, no one has measured the MOND acceleration in the lab. MOND folks will tell you that that is not possible, at least not here, because of the external field effect. Now I think that dark matter might be in primordial black holes or in some macroscopic form, neither of which would be detected in the lab by current experiments. (Maybe people should listen more to astronomers than to particle physicists with regard to dark-matter candidates.) However, some forms of dark matter are at least potentially detectable in the lab. What would MOND supporters make of a claim that there is dark matter but it is in principle not detectable in the lab? Keep in mind that that is the status of the MOND acceleration: claimed to exist, but never to be detected in the lab.


          1. I am persuaded that the oscillations mean neutrinos must have a non-zero rest mass. I am very reluctant to use the word “must” for something that has not yet been measured, but it would be more revolutionary than anything we’re discussing here if this somehow turns out not to be the case. Same for the cosmic neutrino background: like the CMB, it has to be there, even if not yet observed. If not, lots of things break that otherwise work. That could be the case, but I deem it to be exceptionally unlikely.
            Speaking of unlikely: microlensing results have reduced the parameter space available for primordial black holes to next to nothing. There are very few things left that we can get away with.
            I don’t think the comparison you make between WIMPs and a0 is fair. There are experiments that could detect a0; Milgrom talked about them in his original papers. That it can’t readily be done locally sucks, but it is built into the theory. In contrast, WIMPs had a specific interaction cross-section that was predicted and excluded. The goal posts were moved, but the experiments caught up, and the revised prediction has been excluded. The goalposts have now been moved into the neutrino background, where it will be challenging to distinguish a WIMP signal: these experiments will finally have a detection of weakly interacting particles, but mostly (and perhaps entirely) these detections will be the background of all neutrinos from all the supernovae that have gone off over time. I’m sure we’ll see claims of experimental WIMP detections that turn out to be neutrinos, because the sociology has evolved to be “make the exciting claim first to mark your place in line for the Nobel prize, retract later.”
            At any rate, if the dark matter paradigm is correct, then there really has to be a physical substance with mass that goes unseen. We can imagine that whatever it is has zero cross section with any force besides gravity, in which case it is utterly undetectable and we must decide if we agree to take the existence of this invisible substance entirely on faith in the equations that lead to its inference. I’ll never be comfortable with that myself, but much of the community seems to have decided it is OK. But what if we’re wrong? How do we ever tell?
            That’s to think about, not answer. I don’t care to hear an answer to the unanswerable.


            1. Regarding this – “We can imagine that whatever it is has zero cross section with any force besides gravity, in which case it is utterly undetectable and we must decide if we agree to take the existence of this invisible substance entirely on faith in the equations that lead to its inference”, I’m really not sure if you can distinguish this situation (i.e. something that interacts only gravitationally) from just a modification of the gravitation law.


              1. It seems to me that would depend on the initial distribution of this “matter”, and it would most likely just evolve to black holes.


              2. @dlb
                The formation of black holes requires matter clumping, and if this exotic matter interacts only through gravity, this is not possible. Two particles that accelerate towards each other cannot stick, as there is no repulsive force (on short scales) to cancel the gravitational one. For baryonic matter, this is the job of the electromagnetic interaction (most of the time).


            2. Obviously, microlensing is sensitive to objects which cause a detectable amplification over a practical time scale. Very small objects (think asteroid size) violate the first constraint, very large ones (think really heavy black holes) violate the second. Also, generically one would not expect a delta function, but rather a range of masses. With that in mind, dark matter could still be in primordial black holes. See, for example, the many papers by Bernard Carr and co-authors, which explore essentially the whole parameter space. And it is not just dark matter. If primordial black holes exist, they could provide an economic explanation for a variety of astrophysical puzzles.

              Many were surprised by the masses of the LIGO black holes. If we see mergers of black holes of less than a solar mass, they will almost certainly be primordial. The increasing sensitivity of current and planned detectors on Earth, as well as LISA, might allow us to say something one way or the other about primordial black holes within our lifetimes. Even if they don’t make up a significant fraction of the dark matter, it would still be interesting to know that they exist.

              Assuming that there is 5 times as much non-baryonic matter as baryonic, isn’t it strange to think that it all must be in the form of one particle? There is more than one type of baryon, but the fact that they combine into all manner of nuclei, atoms, molecules, and macroscopic objects makes for a very rich world. Maybe the dark sector is just as rich. By the same token, it would be hard to figure out realistic cross sections for detection.


              1. @Phillip Helbig
                I assume we’re talking here about the cosmological dark matter (as inferred from the CMB)? Not the dark matter hypothesis for explaining galaxy rotation curves? I’ll just mention I’m in the “camp” of people who find the MOND explanation much more convincing for the latter.

                Sorry, it seems we’re exceeding the max depth for responding to comments.
                I’ll just mention two effects:
                1. Normal matter would create gravitational seeds, so this hypothetical dark matter could not be completely isotropic.
                2. From this we can deduce it would dissipate energy as gravitational waves, and eventually turn into black holes. But that could take a lot of time, of course, depending on initial conditions.


              2. Unless I say otherwise, I always mean cosmological dark matter. I also think that MOND has a lot going for it on galaxy scales, and am open to the possibility that both are correct. 🙂


    3. Pavel,

      I fully agree with your general assessment of the SMoC. I would add only two points:

      1) In addition to the four “invisible (undetectable) pillars” of the model you cite (the big bang, inflation, dark matter and dark energy), I would add the physically-interacting spacetime of Wheeler et al.

      2) Only under the foundational assumptions of the standard model – that the Cosmos is a unitary entity and that the cause of the cosmological redshift is a recessional velocity of some type – can it be said that the SMoC is the best available model. Absent those assumptions, far more plausible models based on observed phenomena become possible. There is, in other words, no way out of the cosmological mess that is the SMoC without jettisoning its foundational assumptions.


      1. budrap,

        I think you are quite right that it is necessary to jettison the foundational assumptions. People don’t do that, because we cannot work without assumptions. But it is absolutely necessary to question one’s assumptions continually. I always used to think that it was a defining characteristic of an academic that they questioned everybody’s assumptions, including their own. In mathematics, philosophy, the arts and (some of) the humanities, and most applied sciences, this seems to be largely true. In physics, strangely, the foundational assumptions seem never to be questioned, even (or, perhaps, especially) when they are incomprehensible.


  9. From one heretic to another, I say with tremendous respect, you really haven’t challenged your intellect yet Dr. McGaugh. Allow your mind the freedom to at least imagine the conserved point charge universe. It is a closed form solution to nature. It is tremendously relaxing and fun to imagine things in the sensible universe that emerges.


  10. @dlb
    Ah, OK. I thought you said we would see the matter in the form of black holes at present times. But the process to form those black holes requires ages upon ages, and then some. Currently, they would be unobservable.


    1. @dlb again.
      On further thoughts – isn’t this also valid for the current model of dark matter, as it mostly interacts only gravitationally?


      1. I just wanted to outline how different matter that interacts only through gravitation would be from a modification of the laws of gravitation.
        If we consider the theory, as I explained, my opinion is that eventually that matter would condense into black holes, so it has to be different from a modification of gravitation.
        If we consider the practice, and assuming initial conditions in the early universe with this dark matter very isotropically distributed, and very hot, then it would still be indistinguishable from modified gravitation for us (I guess it would look like a cosmological constant). So everything is connected to initial conditions.

        While I’m no expert, hot dark matter is excluded, right? It has to be cold, and has to at least self-interact in ΛCDM. I’m just repeating what I read, and don’t really understand.

        A dark matter that doesn’t interact with baryonic matter at all and is undetectable in the lab is a perfectly valid hypothesis for me; I don’t see why that would be a problem. In fact, we should look at it from another angle: things would be so much easier for supporters of dark matter if it were detected!
        Alas for them, it looks like it won’t be. But in my opinion it should make no difference to the theory.


        1. No, it wouldn’t look like a cosmological constant.

          Physics Today has opened its entire archive to those who register (easy to do and no side effects). I recently came upon this, from a book review by Cecilia Payne-Gaposchkin (if that name doesn’t ring a bell, it should):

          “… modern cosmology … is a subject that the general reader has been led to believe that he can grasp without the essential background of mathematics and physics. The superficial ideas of such general readers (which are not unknown among college students) are the despair of the serious scientist.”

          Citation: Physics Today 10, 8, 22 (1957); doi: 10.1063/1.3060458


          1. Regarding Cecilia Payne-Gaposchkin’s quotation “… modern cosmology … is a subject that the general reader has been led to believe that he can grasp without the essential background of mathematics and physics. The superficial ideas of such general readers (which are not unknown among college students) are the despair of the serious scientist.”

            I mean no offense to physicists by the following statement. I predict that history will show Cecilia’s statement to be laughably out of touch with reality. The only reason physicists get so many unsolicited ideas from enthusiasts is that physicists have made a gigantic mess of their science and left so many major open unsolved mysteries. The ‘general reader’ is just as clueless on these issues as the physicists, because these are all artificial problems caused by off-the-rails physics interpretations. I’ve written about this many times on my blog. I’d love to help the community if anyone is open minded. It’s easy. Simplicity + emergence yields complexity. It really is just immutable point charges. Take the leap of faith and look into it. Why was this solution missed?


        2. Yes, I made that connection when I realized that it would be the same behavior for CDM.
          On the other hand, I’m not sure if anybody has really calculated this, but typically, projections into the far future of the universe talk about black hole evaporation, even for the supermassive ones. I don’t know if these projections also consider the CDM’s further cooling (and thus its falling into said black holes), or whether the time scale for cooling through gravitational waves is orders of magnitude larger than that and you get a big rip before the first DM black hole has a chance to form.
          As for me, I’m more inclined to equate DM with, let’s say, the effect of some approximations that are no longer valid on galactic (and larger) scales. For instance, it is usually customary to assume sin(x) = x for small x and this typically yields very good results, but only as long as you make sure that your approximations are still meaningful. That is, if you want to calculate the period of a pendulum that deviates from the local vertical by only a fraction of a degree, that approximation is more than enough. However, once you have a deviation of 30 degrees, that approximation is no longer valid and what you get for the period differs substantially from the period of the actual pendulum.
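          The pendulum analogy above can be checked numerically (a sketch under my own choices of method and angles, not part of the original comment; the exact period involves a complete elliptic integral, evaluated here by simple quadrature):

```python
# Sketch: ratio of a pendulum's exact period to the small-angle period
# 2*pi*sqrt(L/g). The exact period is (2/pi)*K(k) times the small-angle
# one, where k = sin(theta0/2) and K is the complete elliptic integral
# of the first kind, computed here by a midpoint-rule quadrature.
import math

def period_ratio(theta0_rad, n=100_000):
    k2 = math.sin(theta0_rad / 2) ** 2
    h = (math.pi / 2) / n
    # K(k) = integral_0^{pi/2} dphi / sqrt(1 - k^2 sin^2 phi)
    K = sum(h / math.sqrt(1 - k2 * math.sin((i + 0.5) * h) ** 2)
            for i in range(n))
    return (2 / math.pi) * K

print(f"{period_ratio(math.radians(1)):.5f}")    # ~1.00002: sin x ≈ x is excellent
print(f"{period_ratio(math.radians(30)):.4f}")   # ~1.0174: a ~2% correction
print(f"{period_ratio(math.radians(90)):.3f}")   # ~1.180: the approximation breaks down
```

          The numbers make the point quantitative: the small-angle law is superb for tiny amplitudes and degrades steadily as the amplitude leaves the regime in which it was derived.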
          The laws for gravity that we have were derived, basically, with local data from the solar system, but between the size of the solar system and the size of the galaxy you have many orders of magnitude – who is to say that the “approximations” from the solar system are still valid? (I use “approximations” in lack of a better term for what might elude us in the weak-field conditions, if MoND was true.)
          I find it somewhat arrogant to assume that the current laws we have for gravity are valid for many orders of magnitude past the local conditions on which they were derived, when history teaches us that with each increase in the range searched we had to adjust them (i.e., from constant g for Galileo on Earth to Einstein’s theory of gravity for the galactic neighborhood).




          2. @Apass,

            “The laws for gravity that we have were derived, basically, with local data from the solar system, but between the size of the solar system and the size of the galaxy, you have many orders of magnitude…”

            It is not merely the size differential that is many orders of magnitude between solar systems and galaxies. A galactic system is orders of magnitude larger in complexity. 98% of a typical spiral galaxy’s mass is not concentrated at the center. It is not the scale of a galaxy but its vastly different mass distribution that renders our solar system derived “law of gravity” unworkable in galaxies and galaxy clusters. The only thing surprising about this is that it is surprising.

            In another vein, I am mystified by “… if MoND was true”. What do you mean by true in this context? MOND works well in the context of galaxies but not galaxy clusters. It’s just some math that gets the rotation curves correct for galaxies. In that it is similar to the way Newton and Einstein work on the solar-system scale but not the galactic. Are ND and GR more “true” than MOND? If so, why?

            As far as I can see, all MONDians do is insert a formalism at the point where the gravitational effect of the central bulge falls below the gravitational effect of the mass distributed in the disk. This is a necessary modification because ND and GR treat the field effectively as spherical (dropping off as 1/r^2), whereas the disk field is effectively circular (dropping off as 1/r). Neither ND, GR, nor MOND offers a clear and unambiguous physical explanation for the cause of the effect they successfully calculate.


            1. Of course, by size I mean also numbers, not just distance. A galaxy is not just an n-body system with small n and 2 or 3 dominant masses…
              As for “if MoND was true”, I meant: if we find a law that reduces to a MoND-like approximation in the weak-field regime and to Newton’s law / GR in strong fields. Like you said, MoND works very well at galactic scales, but it is only an empirical law, just like Ohm’s.
              I know you say that MoND-like behavior in galaxies is just an emergent effect, and that if galaxies were modeled differently we would see the effect without any modification of GR. However, I cannot see how the EFE would emerge from this for, let’s say, satellite galaxies.


              1. @Apass I cannot see how EFE would emerge from this for, let’s say, satellite galaxies.

                I don’t follow you here. If a hypothetical quantitative model gets the disk component’s contribution to the rotational velocity correct strictly in terms of the baryonic matter – which is what the RAR and BTF relations clearly imply is possible – there is no reason to think it couldn’t get the EFE right as well. What is always surprising to me in this type of discussion is how little interest anyone in the field seems to have in working on, or even considering, such an approach.

                The problem with MOND is not that it is “only” empirical, but that it is entirely couched in terms of a second-order acceleration effect, which means that MOND has nothing to say about the cause of the effect it successfully models.

                Given how deeply flawed LCDM appears when one is permitted to consider that the “universe” it describes looks nothing like the cosmos we observe, it would seem only reasonable that at least some of the effort currently expended on the snipe hunt for dark matter might be more fruitfully employed addressing the glaring shortcomings of the standard model itself. And by addressing the shortcomings I mean, specifically, reconsidering the century-old foundational assumptions on which LCDM so shakily rests.


        3. @dlb,

          A dark matter that doesn’t interact with baryonic matter at all and is undetectable in the lab is a perfectly valid hypothesis for me; I don’t see why that would be a problem… things would be so much easier for supporters of dark matter if it were detected! Alas for them, it looks like it won’t be. But in my opinion it should make no difference to the theory.

          It may make no difference to the theory, but it makes a big difference to the physics. It means that the theory has nothing meaningful to say about the physics beyond the trivial ability to mimic observations, albeit at the expense of scientific logic and credibility.


  11. True enough, the self-correcting processes of science have not yet caught up with what is going on in cosmology, and elsewhere. Still, what may look so dismal right now is likely to be seen as a temporary setback by generations to come. Prof. Helge Kragh’s book is an excellent reminder of this: “Higher Speculations: Grand Theories and Failed Revolutions in Physics and Cosmology”, OUP, 2011.


  12. @Budrap
    I’m not sure about what hypothetical quantitative model you’re talking about when you say this:
    ” If the hypothetical quantitative model that gets the disk component’s contribution to the rotational velocity correct […] there is no reason to think it couldn’t get the EFE right also”
    I was referring to your comments in which you complain that the galactic models used are not correct (e.g. that they discard gravitational viscosity). Let’s say that it is indeed possible to reproduce the MoNDian behavior in the galaxy just by using a different disk model; then through what mechanism would this new disk model give rise to the EFE felt by a satellite galaxy, separated by tens of thousands of light-years, where you can no longer invoke viscosity-like effects?
    As for MoND being couched in terms of a second-order acceleration effect – I don’t dispute that. But that doesn’t change the fact that it is only an empirical law, by the very definition of an empirical law as one that connects given conditions to observed behaviors. I never said it provides meaningful insight into the real cause of the observed behaviors.


  13. Something which wasn’t clear 25 years ago, but is clear now, is that the magnitude–redshift relation for type Ia supernovae indicates a Universe with Ω=0.3 and λ=0.7 or so, at least within the framework of standard cosmology. Whether or not one agrees with standard cosmology, that is an additional observational fact which needs to be accounted for. The fact that this value of Ω (by which I almost always mean Ω_matter) jibes well with that from previous determinations and also later ones (e.g. the CMB), that the value of λ also jibes well with the CMB (with Planck, one can measure Ω and λ even without further assumptions), and that the resulting age of the Universe jibes well with other estimates, is why this is known as the concordance model. (For those who think that mainstream cosmology is all groupthink, the current concordance model is a big departure from that which some (but by no means all) previously thought was the correct model; people were convinced by observations.)

    Presumably MOND, or some extension to it, has no explanation for the observed magnitude–redshift relation. So even if the paper mentioned here recently manages to explain the power spectrum of the CMB in a MOND context (and it would be interesting if there were predictions which differ from those of the standard model regarding aspects not yet observed), one still needs to explain the magnitude–redshift relation.

    I see two possibilities. One is that the standard cosmology is correct, so no explanation is needed. MOND could also be correct if the cosmological dark matter is very smoothly distributed, or at least not associated with galaxies. As far as I know, such a scenario is not ruled out, but is perceived by some not to be in the spirit of MOND, since most matter is still non-baryonic. The other possibility is that the global value of Ω is essentially the same as the baryonic value, in which case I would be really surprised (but I am willing to be surprised) if the observed magnitude–redshift relation could be postdicted in a manner which makes it clear that it could have been predicted had the answer not been known. (Prediction is no longer possible, unless there are deviations between the two theories in regions not yet observed, which would be really interesting.) I would even settle for a theory with no more parameters than the standard model which is merely compatible (via adjustable parameters) with the observed magnitude–redshift relation.

    While it is true that mainstream cosmologists should take in what observers have empirically learned about galaxies, it is also important that MOND enthusiasts realize that getting galaxy-scale phenomena correct is not enough: concrete observations relevant to cosmology need to be explained quantitatively as well.

    Note that in the standard cosmological model, only 1920s cosmology is used here: despite the huge amount of high-quality data now available, classical cosmology needs nothing beyond what was known about relativistic cosmology in the 1920s.
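    That point is easy to make concrete: the magnitude–redshift test needs nothing beyond the Friedmann equation. A minimal numerical sketch for a flat FLRW model follows; H0 = 70 km/s/Mpc is an assumed illustrative value, and the function names are mine, not from any cosmology package:

```python
import math

C_KM_S = 299792.458   # speed of light, km/s
H0 = 70.0             # Hubble constant, km/s/Mpc (assumed for illustration)

def hubble_rate(z, omega_m, omega_l):
    """Dimensionless H(z)/H0 for a flat FLRW model (matter + Lambda)."""
    return math.sqrt(omega_m * (1.0 + z)**3 + omega_l)

def luminosity_distance(z, omega_m=0.3, omega_l=0.7, steps=2000):
    """D_L in Mpc: trapezoidal integration of c / H(z) gives the
    comoving distance; multiplying by (1 + z) makes it a luminosity
    distance (flat geometry assumed throughout)."""
    dz = z / steps
    integral = sum(
        0.5 * dz * (1.0 / hubble_rate(i * dz, omega_m, omega_l)
                    + 1.0 / hubble_rate((i + 1) * dz, omega_m, omega_l))
        for i in range(steps))
    return (1.0 + z) * (C_KM_S / H0) * integral

def distance_modulus(z, omega_m=0.3, omega_l=0.7):
    """mu = m - M = 5 log10(D_L / 10 pc); D_L converted from Mpc."""
    return 5.0 * math.log10(luminosity_distance(z, omega_m, omega_l) * 1.0e5)
```

    Comparing distance_modulus(0.5) for (Ω=0.3, λ=0.7) against a matter-only (Ω=1, λ=0) model gives a difference of a few tenths of a magnitude, i.e. the supernovae appear fainter in the concordance model; that excess dimming is the observational fact referred to above.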

