I set out last time to discuss some of the tensions that persist in afflicting cosmic concordance, but didn’t get past the Hubble tension. Since then, I’ve come across more of the same, e.g., Boubel et al. (2024a), who use a variant of Tully-Fisher to obtain H0 = 73.3 ± 2.1(stat) ± 3.5(sys) km/s/Mpc. Having done that sort of work myself, I thought their systematic uncertainty term seemed large. I then came across Scolnic et al. (2024), who trace this issue back to one apparently erroneous calibration amongst many, and correct the result to H0 = 76.3 ± 2.1(stat) ± 1.5(sys) km/s/Mpc. Boubel is an author of the latter paper, so apparently agrees with this revision. Fortunately they didn’t go all Sandage-de Vaucouleurs on us, but even so, this provides a good example of how fraught this field can get. It also demonstrates the opportunity for confirmation bias, as the revised numbers are almost exactly what we find ourselves. (New results coming soon!)

It’s a dang mess.

The Hubble tension is only the most prominent of many persistent tensions, so let’s wade into some of the rest.

The persistent tension in the amplitude of the power spectrum

The tension that cosmologists seem to stress about most after the Hubble tension is that in σ8. σ8 quantifies the amplitude of the power spectrum; it is a measure of the rms fluctuation in mass in spheres of radius 8h⁻¹ Mpc. Historically, this scale was chosen because early work by Peebles & Yu (1970) indicated that this was the scale on which the rms contrast in galaxy numbers* is unity. This is also a handy dividing line between linear and nonlinear regimes. On much larger scales, the fluctuations are smaller (a giant sphere is closer to the average for the whole universe), so they can be treated in the limit of linear perturbation theory. Individual galaxies are “small” by this standard, so can’t be treated+ so simply, which is the excuse many cosmologists use to run shrieking from discussing them.
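For the curious, the definition can be written down directly: σ8 is the rms of the density field smoothed with a spherical top-hat of radius R = 8h⁻¹ Mpc, σ²(R) = (1/2π²) ∫ P(k) W²(kR) k² dk. Here is a minimal numerical sketch of that integral; the power-law P(k), its amplitude A, and the integration limits are arbitrary stand-ins for illustration, not a realistic spectrum:

```python
import math

def tophat_window(x):
    """Fourier transform of a spherical top-hat: W(x) = 3(sin x - x cos x)/x^3."""
    if x < 1e-4:
        return 1.0 - x * x / 10.0  # small-x series to avoid roundoff
    return 3.0 * (math.sin(x) - x * math.cos(x)) / x**3

def sigma_R(P, R, kmin=1e-4, kmax=1e2, n=20000):
    """rms mass fluctuation in spheres of radius R:
    sigma(R)^2 = (1/2 pi^2) * integral of P(k) W(kR)^2 k^2 dk,
    done as a trapezoid rule in ln k (so the integrand picks up an extra k)."""
    lkmin, lkmax = math.log(kmin), math.log(kmax)
    total = 0.0
    for i in range(n):
        lk1 = lkmin + (lkmax - lkmin) * i / n
        lk2 = lkmin + (lkmax - lkmin) * (i + 1) / n
        k1, k2 = math.exp(lk1), math.exp(lk2)
        f1 = P(k1) * tophat_window(k1 * R)**2 * k1**3
        f2 = P(k2) * tophat_window(k2 * R)**2 * k2**3
        total += 0.5 * (f1 + f2) * (lk2 - lk1)
    return math.sqrt(total / (2.0 * math.pi**2))

# Toy spectrum, P(k) ~ A k^-1.5 with an arbitrary amplitude A;
# k in h/Mpc, R in h^-1 Mpc.
A = 3.0e3
sigma8 = sigma_R(lambda k: A * k**-1.5, R=8.0)
```

Note that σ(R) falls with increasing R for any reasonable spectrum, which is the statement in the text that giant spheres sit closer to the cosmic average.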

As we progressed from wrapping our heads around an expanding universe to quantifying the large scale structure (LSS) therein, the power spectrum statistically describing LSS became part of the canonical set of cosmological parameters. I don’t myself consider it to be on par with the Big Two, the Hubble constant H0 and the density parameter Ωm, but many cosmologists do seem partial to it despite the lack of phase information. Consequently, any tension in the amplitude σ8 garners attention.

The tension in σ8 has been persistent insofar as I recall debates in the previous century where some kinds of data indicated σ8 ~ 0.5 while other data preferred σ8 ~ 1. Some of that tension was in underlying assumptions (SCDM before LCDM). Today, the difference is [mostly] between the Planck best-fit amplitude σ8 = 0.811 ± 0.006 and various local measurements that typically yield 0.7something. For example, Karim et al. (2024) find low σ8 for emission line galaxies, even after specifically pursuing corrections in a necessary dust model that pushed things in the right direction:

Fig. 16 from Karim et al. (2024): Estimates of σ8 from emission line galaxies (red and blue), luminous red galaxies (grey), and Planck (green).

As with so many cosmic parameters, there is degeneracy, in this case between σ8 and Ωm. Physically this happens because you get more power when you have more stuff (Ωm), but the different tracers are sensitive to it in different ways. Indeed, if I put on a cosmology hat, I personally am not too worried about this tension – emission line galaxies are typically lower mass than luminous red galaxies, so one expects that there may be a difference in these populations. The Planck value is clearly offset from both, but doesn’t seem too far afield. We wouldn’t fret at all if it weren’t for Planck’s damnably small error bars.

This tension is also evident as a function of redshift. Here are measurements of the parameter combination fσ8 = Ωm(z)^γ σ8 compiled by Boubel et al. (2024b):

Fig. 16 from Boubel et al (2024b). LCDM matches the data for σ8 = 0.74 (green line); the purple line is the expectation from Planck (σ8 = 0.81). The inset shows the error ellipse, which is clearly offset from the Planck value (crossed lines), particularly for the GR& value of γ = 0.55.

The line representing the Planck value σ8 = 0.81 overshoots most of the low redshift data, particularly those with the smallest uncertainties. The green line has σ8 = 0.74, so is a tad lower than Planck in the same sense as other low redshift measures. Again, the offset is modest, but it does look significant. The tension is persistent but not a show-stopper, so we generally shrug our shoulders and proceed as if it will inevitably work out.
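For concreteness, the quantity plotted can be sketched under the usual approximations: f ≈ Ωm(z)^γ, and since f = dlnD/dlna, the amplitude evolves as σ8(z) = σ8(0)·exp(−∫₀ᶻ f(z′) dz′/(1+z′)). The parameter values below are illustrative choices (flat LCDM with Ωm = 0.315), not a fit to the data in the figure:

```python
import math

def omega_m_z(z, om0=0.315):
    """Matter density parameter at redshift z in flat LCDM."""
    a3 = (1.0 + z)**3
    return om0 * a3 / (om0 * a3 + 1.0 - om0)

def f_growth(z, gamma=0.55, om0=0.315):
    """Growth rate f = dlnD/dlna ~ Omega_m(z)^gamma (GR: gamma ~ 0.55)."""
    return omega_m_z(z, om0)**gamma

def fsigma8(z, sigma8_0, gamma=0.55, om0=0.315, n=1000):
    """f(z) * sigma8(z), with sigma8(z) = sigma8_0 * D(z)/D(0) and
    D(z)/D(0) = exp(-integral_0^z f(z')/(1+z') dz'), via trapezoid rule."""
    integral = 0.0
    for i in range(n):
        z1, z2 = z * i / n, z * (i + 1) / n
        g1 = f_growth(z1, gamma, om0) / (1.0 + z1)
        g2 = f_growth(z2, gamma, om0) / (1.0 + z2)
        integral += 0.5 * (g1 + g2) * (z2 - z1)
    return f_growth(z, gamma, om0) * sigma8_0 * math.exp(-integral)

# Planck-like vs. low-redshift-preferred amplitude, evaluated at z = 0.3:
planck = fsigma8(0.3, 0.81)
low = fsigma8(0.3, 0.74)
```

The two curves differ by a constant multiplicative offset in this approximation, which is why the green and purple lines in the figure track each other while remaining separated.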

The persistent tension in the cosmic mass density

A persistent tension that nobody seems to worry about is that in the density parameter Ωm. Fits to the Planck CMB acoustic power spectrum currently peg Ωm = 0.315±0.007, but as we’ve seen before, this covaries with the Hubble constant. Twenty years ago, WMAP indicated Ωm = 0.24 and H0 = 73, in good agreement with the concordance region of other measurements, both then and now. As with H0, the tension is posed by the itty bitty uncertainties on the Planck fit.

Experienced cosmologists may be inclined to scoff at such tiny error bars. I was, so I’ve confirmed them myself. There is very little wiggle room to match the Planck data within the framework of the LCDM model. I emphasize that last bit because it is an assumption now so deeply ingrained that it is usually left unspoken. If we leave that part out, then the obvious interpretation is that Planck is correct and all measurements that disagree with it must suffer from some systematic error. This seems to be what most cosmologists believe at present. If we don’t leave that part out, perhaps because we’re aware of other possibilities so are not willing to grant this assumption, then the various tensions look like failures of a model that’s already broken. But let’s not go there today, and stay within the conventional framework.

There are lots of ways to estimate the gravitating mass density of the universe. Indeed, it was the persistent, early observation that the mass density Ωm exceeded that in baryons, Ωb, from big bang nucleosynthesis that got the non-baryonic dark matter show on the road: there appears to be something out there gravitating that’s not normal matter. This was the key observation that launched non-baryonic cold dark matter: if Ωm > Ωb, there has% to be some kind of particle that is non-baryonic.

So what is Ωm? Most estimates have spanned the range 0.2 < Ωm < 0.4. In the 1980s and into the 1990s, this seemed close enough to Ωm = 1, by the standards of cosmology, that most Inflationary cosmologists presumed it would work out to what Inflation predicted, Ωm = 1 exactly. Indeed, I remember that community directing some rather vicious tongue-lashings at observers, castigating them to look harder: you will surely get Ωm = 1 if you do it right, you fools. But despite the occasional claim to get this “right” answer, the vast majority of the evidence never pointed that way. As I’ve related before, an important step on the path to LCDM – probably the most important step – was convincing everyone that really Ωm < 1.

Discerning between Ωm = 0.2 and 0.3 is a lot more challenging than determining that Ωm < 1, so we tend to treat either as acceptable. That’s not really fair in this age of precision cosmology. There are far too many estimates of the mass density to review here, so I’ll just note a couple of discrepant examples while also acknowledging that it is easy to find dynamical estimates that agree with Planck.

To give a specific example, Mohayaee & Tully (2005) obtained Ωm = 0.22 ± 0.02 by looking at peculiar velocities in the local universe. This was consistent with other constraints at the time, including WMAP, but is 4.5σ from the current Planck value. That’s not quite the 5σ we arbitrarily define to be an undeniable difference, but it’s plenty significant.
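That significance follows from the usual quadrature combination of the two error bars:

```python
import math

def tension_sigma(x1, e1, x2, e2):
    """Number of sigma separating two measurements,
    with their uncertainties added in quadrature."""
    return abs(x1 - x2) / math.hypot(e1, e2)

# Mohayaee & Tully (2005) vs. the Planck best fit:
n_sigma = tension_sigma(0.22, 0.02, 0.315, 0.007)  # ~4.5
```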

There have of course been other efforts to do this, and many of them lead to the same result, or sometimes even lower Ωm. For example, Shaya et al. (2022) use the Numerical Action Method developed by Peebles to attempt to work out the motions of nearly 10,000 galaxies – not just their Hubble expansion, but their individual trajectories under the mutual influence of each other’s gravity and whatever else may be out there. The resulting deviations from a pure Hubble flow depend on how much mass is associated with each galaxy and whatever other density there is to perturb things.

Fig. 4 from Shaya et al (2022): The gravitating mass density as a function of scale. After some local variations (hello Virgo cluster!), the data converge to Ωm = 0.12. Reaching Ωm = 0.24 requires an equal, additional amount of mass in “interhalo matter.” Even more mass would be required to reach the Planck value (red line added to original figure).

This result is in even greater tension with Planck than the earlier work by Mohayaee & Tully (2005). I find the need to invoke interhalo matter disturbing, since it acts as a pedestal in their analysis: extra mass density that is uniform everywhere. This is necessary so that it contributes to the global mass density Ωm but does not contribute to perturbing the Hubble flow.

One can imagine mass that is uniformly distributed easily enough, but what bugs me is that dark matter should not do this. There is no magic segregation between dark matter that forms into halos that contain galaxies and dark matter that just hangs out in the intergalactic medium and declines to participate in any gravitational dynamics. That’s not an option available to it: if it gravitates, it should clump. To pull this off, we’d need to live in a universe made of two distinct kinds of dark matter: cold dark matter that clumps and a fluid that gravitates globally but does not clump, sort of an anti-dark energy.

Alternatively, we might live in an underdense region such that the local Ωm is less than the global Ωm. This is an idea that comes and goes for one reason or another, but it has always been hard to sustain. The convergence to low Ωm looks pretty steady out to ~100 Mpc in the plot above; that’s a pretty big hole. Recall the non-linearity scale discussed above; this scale is a factor of ten larger, so over/under-densities should typically be ±10%. This one is -60%, so I guess we’d have to accept that we’re not Copernican observers after all.

The persistent tension in bulk flows

Once we get past the basic Hubble expansion, individual galaxies each have their own peculiar motion, and beyond that we have bulk flows. These have been around a long time. We obsessed a lot about them for a while with discoveries like the Great Attractor. It was weird; I remember some pundits talking about “plate tectonics” in the universe, like there were giant continents of galaxy superclusters wandering around in random directions relative to the frame of the microwave background. Many of us, including me, couldn’t grok this, so we chose not to sweat it.

There is no single problem posed by bulk flows^, and of course you can find those who argue they pose no problem at all. We are in motion relative to the cosmic (CMB) frame$, but that’s just our Milky Way’s peculiar motion. The strange fact is that it’s not just us; the entirety of the local universe seems to have an unexpected peculiar motion. There are lots of ways to quantify this; here’s a summary table from Courtois et al (2025):

Table 1 from Courtois et al (2025): various attempts to measure the scale of dynamical homogeneity.

As we look to large scales, we expect the universe to converge to homogeneity – that’s the Cosmological Principle, which is one of those assumptions that is so fundamental that we forget we made it. The same holds for dynamics – as we look to large scales, we expect the peculiar motions to average out and converge to a pure Hubble flow. The table above summarizes our efforts to measure the scale on which this happens – or doesn’t. It also shows what we expect on the second line, “predicted LCDM,” where you can see the expected convergence in the declining bulk velocities as the scale probed increases. The third line is for “cosmic variance”; when you see these words, it usually means something is amiss, so in addition to the usual uncertainties we’re going to entertain the possibility that we live in an abnormal universe.

Like most people, I was comfortably ignoring this issue until recently, when we had a visit and a talk from one of the protagonists listed above, Richard Watkins (W23). One of the problems that challenge this sort of work is the need for a large sample of galaxies with complete sky coverage. That’s observationally challenging to obtain. Real data are heterogeneous; treating this properly demands a more sophisticated treatment than the usual top-hat or Gaussian approaches. Watkins described in detail what a better way could be, and patiently endured the many questions my colleagues and I peppered him with. This is hard to do right, which gives aid and comfort to the inclination to ignore it. After hearing his talk, I don’t think we should do that.

Panel from Fig. 7 of Watkins et al. (2023): The magnitude of the bulk flow as a function of scale. The green points are the data and the red dashed line is the expectation of LCDM. The blue dotted line is an estimate of known systematic effects.

The data do not converge with increasing scale as expected. It isn’t just the local mass density Ωm that’s weird, it’s also the way in which things move. And “local” isn’t at all small here, with the effect persisting out beyond 300 Mpc for any plausible h = H0/100.

This is formally a highly significant result, with the authors noting that “the probability of observing a bulk flow [this] large … is small, only about 0.015 per cent.” Looking at the figure above, I’d say that’s a fairly conservative statement. A more colloquial way of putting it would be “no way we gonna reconcile this!” That said, one always has to worry about systematics. They’ve made every effort to account for these, but there can always be unknown unknowns.

Mapping the Universe

It is only possible to talk about these things thanks to decades of effort to map the universe. One has to survey a large area of sky to identify galaxies in the first place, then do follow-up work to obtain redshifts from spectra. This has become big business, but to do what we’ve just been talking about, it is further necessary to separate peculiar velocities from the Hubble flow. To do that, we need to estimate distances by some redshift-independent method, like Tully-Fisher. Tully has been doing this his entire career, with the largest and most recent data product being Cosmicflows-4. Such data reveal not only large bulk flows, but extensive structure in velocity space:

The Laniakea supercluster of galaxies (Tully et al. 2014).

We have a long way to go to wrap our heads around all of this.

Persistent tensions persist

I’ve discussed a few of the tensions that persist in cosmic data. Whether these are mere puzzles or a mounting pile of anomalies is a matter of judgement. They’ve been around for a while, so it isn’t fair to suggest that all of the data are consistent with LCDM. Nevertheless, I hear exactly this asserted with considerable frequency. It’s as if the definition of all is perpetually shrinking to include only the data that meet the consistency criterion. Yet it’s the discrepant bits that are interesting for containing new information; we need to grapple with them if the field is to progress.

*This was well before my time, so I am probably getting some aspect of the history wrong or oversimplifying it in some gross way. Crudely speaking, if you randomly plop down spheres of this size, some will be found to contain the cosmic average number of galaxies, some twice that, some half that. That the modern value of σ8 is close to unity means that Peebles got it basically right with the data that were available back then and that galaxy light very nearly traces mass, which is not guaranteed in a universe dominated by dark matter.


+It amazes me how pervasively “galaxies are complicated” is used as an excuse++ to ignore all small scale evidence.

Not all of us are limited to working on the simplest systems. In this case, it doesn’t matter. The LCDM prediction here is that galaxies should be complicated because they are nonlinear. But the observation is that they are simple – so simple that they obey a single effective force law. That’s the contradiction right there, regardless of what flavor of complicated might come out of some high resolution simulation.

++At one KITP conference I attended, a particle-cosmologist said during a discussion session, in all seriousness and with a straight face, “We should stop talking about rotation curves.” Because scientific truth is best revealed by ignoring the inconvenient bits. David Merritt remarked on this in his book A Philosophical Approach to MOND. He surveyed the available cosmology textbooks, and found that not a single one of them mentioned the acceleration scale in the data. I guess that would go some way to explaining why statements of basic observational facts are often met with stunned silence. What’s obvious and well-established to me is a wellspring of fresh if incredible news to them. I’d probably give them the stink-eye about the cosmological constant if I hadn’t been paying the slightest attention to cosmology for the past thirty years.


&There is an elegant approach to parameterizing the growth of structure in theories that deviate modestly from GR. In this context, such theories are usually invoked as an alternative to dark energy, because it is socially acceptable to modify GR to explain dark energy but not dark matter. The curious hysteresis of that strange and seemingly self-contradictory attitude aside, this approach cannot be adapted to MOND because it assumes linearity while MOND is inherently nonlinear. My very crude, back-of-the-envelope expectation for MOND is very nearly constant γ ~ 0.4 (depending on the scale probed) out to high redshift. The bend we see in the conventional models around z ~ 0.6 will occur at z > 2 (and probably much higher) because structure forms fast in MOND. It is annoyingly difficult to put a more precise redshift on this prediction because it also depends on the unknown metric. So this is more of a hunch than a quantitative prediction. Still, it will be interesting to see if roughly constant fσ8 persists to higher redshift.


%The inference that non-baryonic dark matter has to exist assumes that gravity is normal in the sense taught to us by Newton and Einstein. If some other theory of gravity applies, then one has to reassess the data in that context. This is one of the first considerations I made of MOND in the cosmological context, finding Ωm ≈ Ωb.


^MOND is effective at generating large bulk flows.


$Fun fact: you can type the name of a galaxy into NED (the NASA Extragalactic Database) and it will give you lots of information, including its recession velocity referenced to a variety of frames of reference and the corresponding distance from the Hubble law V = H0D. Naively, you might think that the obvious choice of reference frame is the CMB. You’d be wrong. If you use this, you will get the wrong distance to the galaxy. Of all the choices available there, it consistently performs the worst as adjudicated by direct distance measurements (e.g., Cepheids).

NED used to provide a menu of choices for the value of H0 to use. It says something about the social-tyranny of precision cosmology that it now defaults to the Planck value. If you use this, you will get the wrong distance to the galaxy. Even if the Planck H0 turns out to be correct in some global sense, it does not work for real galaxies that are relatively near to us. That’s what it means to have all the “local” measurements based on direct distance measurements (e.g., Cepheids) consistently give a larger H0.
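To make this footnote’s point concrete, here is a minimal sketch of how the choice of H0 propagates into a Hubble-law distance. The recession velocity is a made-up illustrative value, not any particular galaxy:

```python
def hubble_distance(v_kms, H0):
    """Distance in Mpc from the Hubble law V = H0 * D."""
    return v_kms / H0

v = 3000.0  # km/s: a hypothetical recession velocity, chosen for illustration
d_planck = hubble_distance(v, 67.4)  # ~44.5 Mpc with the Planck H0
d_local = hubble_distance(v, 73.0)   # ~41.1 Mpc with a local calibration
# The Planck H0 puts the same galaxy ~8% farther away than the local value does.
```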

Galaxies in the local universe are closer than they appear. Photo by P.S. Pratheep, www.pratheep.com

39 thoughts on “Some more persistent cosmic tensions”

  1. Assuming DM is the explanation for galaxies’ flat rotation curves, I guess we can measure the ratio of DM mass vs baryonic mass in them? We can then include the intergalactic baryonic mass assuming an upper bound, and obtain a “local” lower bound for the ratio of their relative density (nothing can stop us from adding more dark matter). How does this lower bound compare with the value (~5) from Planck? Is there a tension, or do we just not have good data to do this?

      1. It should be clear that General Relativity augmented with dark matter is, de facto, a phenomenological adjustment to GR rather than a purely fundamental theory.

        When comparing GR + dark matter with MOND at the level of galaxy‐scale complexity as competing phenomenological models, one finds that GR + dark matter requires several free parameters per galaxy—typically a halo scale radius, a characteristic density, and a stellar mass‐to‐light ratio—whereas MOND invokes a single universal acceleration scale without any unseen matter.

        The need for multiple parameters per galaxy in the dark‐matter paradigm leads to the well‐known “disk–halo degeneracy”: different combinations of halo and baryonic parameters can reproduce essentially the same rotation curve. This flexibility makes GR + dark matter difficult to falsify and limits its predictive power on galaxy scales (https://tritonstation.com/2018/10/).

        From a parsimonious (Occam’s‐razor) standpoint, it is evident that MOND provides a more economical description of galaxy rotation speeds, as it succeeds with only one global constant.

        At higher hierarchical levels (e.g., galaxy clusters and cosmology), the “ugliness” of GR + dark matter only grows: fitting galaxy clusters and large‐scale structure entails introducing additional parameters or even separate phenomenological constructs, such as hot dark matter components or feedback prescriptions (https://www.mdpi.com/2218-1997/6/8/107).

        Over time, it will become widely recognized that any model—whether labeled “fundamental” or “phenomenological”—possesses a narrow range of applicability tied to the complexity scale of the systems it describes, and that pushing beyond that range invariably demands ad hoc modifications.

        References

        1. Triton Station blog, “It Must Be So. But which Must?,” Oct 2018. Discussion of disk–halo degeneracy in dark‐matter halo fits, noting that three parameters per galaxy are used where only one suffices in MOND .

        2. P. S. Behroozi et al., “Dark Matters on the Scale of Galaxies,” Universe 6, 107 (2020). Review of cold dark‐matter successes and challenges on galaxy scales, highlighting the phenomenological nature of dark matter fits

        1. Occam’s razor was critical to persuading me to consider MOND.

          The course of physics has been perverted over the past 40 years by the search for a theory of everything. Whether such a thing can exist is hard to assess when we remain so far from understanding the first among forces, gravity.

          1. Would you consider doing something on hybrid theories? There’s no reason why LCDM and MOND, rather than being alternatives, can’t be complementary. They cover different areas – MOND does far better at the galaxy scale, LCDM is thought to do better with the large scale structure and the CMB. Suppose they’re both true…. People think so. In the conference survey, the most popular explanation for the mass discrepancy was ‘some hybrid’.

            It can be argued that LCDM has been falsified in galaxies, as the RAR was shown by yourself and colleagues to extend well beyond the expected edge of the DM halo. That result was specific, and supports MOND directly – the other recent result about the early universe also seemed to falsify LCDM, but it only loosely supports MOND (correct me if I’m wrong on that).

            Suppose in individual galaxies, MOND has won. It still doesn’t have to be everywhere – we have a bias towards universal theories, partly because at earlier stages we found some basic patterns that were universal, or seemed so. Both camps want theories that work across the board, but perhaps there are two camps because they don’t.

            (This isn’t due to complexity, as the two-tone aspect of a galaxy reveals – nature pulls back a curtain, and Newton is alive and well on one side of it, even with all those fields combining.)

            There are things that suggest MOND is not universal – I’ve mentioned the RAR in clusters, with a different acceleration scale, and you’ve pointed out that it’s only one interpretation, and far less certain than the galaxy RAR. But there are also other things, and MOND may be just one part of the picture. If so, because of similarities between these two aspects of the mass discrepancy, it may well be that a UT will connect them somehow, and the survey shows people think a hybrid is needed. So any ideas of that kind might be of interest.

            1. First, yes – I think the parable of the blind men and the elephant applies. We are trying to build a complete picture from disparate pieces. (Judy Young, Vera Rubin’s daughter and an accomplished astronomer in her own right, was the first to point out to me the applicability of this parable).

              I dislike hybrids, but that is a philosophical bias on my part. They feel like Tycho Brahe’s compromise between geo- and heliocentrism. So far, anyway: perhaps something better will come along, or I misunderappreciate some of the existing hybrids.

              If one wants MOND to be confined to galaxies, then one needs a model that reverts to the inverse square law on some appropriately larger scale (or lower acceleration). That seems odd to me, but such models do exist.

              If one wants to have both MOND and literal dark matter, the DM has to somehow be excluded from galaxies. A stronger force law plus dynamically cold dark matter would lead to there being lots of dark matter in galaxies – it couldn’t not collect there – which would in turn ruin the virtue of MOND in galaxies. That’s one reason Angus & Banik & Kroupa have considered sterile neutrino dark matter – it is warm enough not to clump on galaxy scales but cool enough to do what’s needed in clusters and for cosmology. Kinda sorta. One still needs a more complete theory of MOND – we can’t just tack MOND onto a GR cosmology with a big dollop of sterile neutrinos and pretend this Frankenstein’s monster is in any way satisfactory.

              I don’t know what to make of clusters. When Milgrom first considered them in his original 1983 papers, there was already a problem. He suggested then that maybe the then-recent X-ray detections meant that there were a lot more baryons than met the [optical] eye. At the time, we believed M/L ~ 300 in clusters, so there was a huge amount of dark matter. In that context, to ask for a lot more baryons sounded like wishful thinking. However, he turned out to be correct insofar as most of the baryons that we now know about were in the X-ray emitting intracluster gas rather than in the stars of the cluster galaxies. It wasn’t enough to resolve the problem for MOND, but there were indeed a lot more baryons present. So in retrospect, there were at least *two* components to Zwicky’s dunkel matter: whatever the solution to the modern dark matter problem is PLUS dark baryons, some of which we now recognize to be the X-ray emitting gas. It is commonly assumed that we now know about ALL of the baryons in clusters, but I don’t think we can be so sure of that. It has happened before that clusters harbored a LOT more baryons than we thought, so I’m reluctant to exclude that it could happen again.

              1. I see what you mean, and hadn’t thought of some of those points. I think there are assumptions that don’t have to be so – there are more ways to modify gravity at scales beyond the solar system than via changes to the underlying laws themselves.

                There might be no need to revert to the inverse square law, as it can be the underlying law throughout. At a more superficial level, MOND can be caused by a behaviour of matter under certain conditions, and some flavours of MOND hint at this, like modified inertia.

                Or (as in PSG where the field has an existence in its own right) it could be a behaviour of a gravity field itself under certain conditions, but at a more superficial level. This is suggested by the fact that MOND refers to the inner pattern of the field to derive the outer one, and what it would otherwise have been at a given point outside the boundary – you need that number. This suggests some change to the field more superficial than the underlying pattern.

                I see what you mean about how hybrids might need DM excluded from galaxies. I can only answer for PSG on that, I hadn’t thought of this point, but it might rule out a few other hybrids. In PSG the DM, as well as being ‘stuff’, also actually constitutes the Newtonian gravity field – it’s an emitted medium which matter latches onto via helical path refraction (because refraction is well understood, it has been possible to support its existence with mathematical evidence). So until one reaches much larger scales – where an excess of the medium builds up, as an enormous numbers of masses are emitting it together – DM and a gravity field are different ways of saying the same thing, and there’s no conflict having both MOND and DM in galaxies.

            2. At each hierarchical level – galaxies, galaxy clusters, etc. – new phenomenological models, with their own (ideally global) parameters and a minimum of them, are the answer.

              This stratified approach is already familiar in quantum physics: one employs effective theories tailored to specific complexity regimes, rather than a single all-purpose wave function.

              The notion of a truly universal theory is a myth. Every model, whether “fundamental” or phenomenological, is built on assumptions that restrict its domain of validity to a narrow band of system complexity.

              “Fundamental” theories excel only in simple regimes; as soon as one confronts many-body or highly entangled systems, effective descriptions take over. For example, molecular spectroscopy – even for molecules with a few atoms – relies on phenomenological Hamiltonians fine-tuned to reproduce observed spectra, not on first principles.

              Theoreticians have been deluding themselves talking about the many-worlds interpretation of quantum mechanics, or even a universal wave function, but in reality quantum behavior transitions to classicality after a certain complexity threshold is surpassed; the General Relativity regime is no exception, and MOND is there to show it.

              In short, nature’s hierarchical structure demands a pluralism of theories: each is valid only within its own “complexity window.” Arguing otherwise is tantamount to waging a futile war against mathematics itself.

              1. Yes, we get it, you think a theory of everything is impossible. Please stop posting the same screed over and over.

                More generally, I remind everyone to focus comments on the topic of the post. I’m very tolerant of more wide-ranging discussions, but that tolerance is not infinite. If you have lots to say on topics that are only tangentially related, you should start your own blog.

  2. “The LCDM prediction here is that galaxies should be complicated because they are nonlinear. But the observation is that they are simple – so simple that they obey a single effective force law.”
    According to Professor Milgrom, “MOND … has been propounded as an alternative to dark matter in accounting for the acceleration anomalies in the Universe: The gravitational accelerations calculated in standard dynamics from the observed mass distributions in galactic systems (and the Universe at large) fall very short of the measured accelerations of test particles.”
    Milgrom, Mordehai. “MOND as manifestation of modified inertia.” arXiv preprint arXiv:2310.14334 (2023). https://arxiv.org/abs/2310.14334
    From the viewpoint of general relativity, it seems that MOND predicts an excess of gravitational redshift. What is the mathematically simplest way of modifying Einstein’s field equations?
    Let us consider Einstein’s field equations R(μ,ν) – 1/2 g(μ,ν) R = –κ T(μ,ν) (with μ & ν ranging over 1,2,3,4)
    following Einstein’s derivation as presented on pages 91–93 of “The Meaning of Relativity” by Albert Einstein, 1923
    https://archive.org/details/meaningofrelativ00eins_0 .
    Einstein gets the left-hand side by assuming that the geometric tensor satisfies 3 conditions, of which the 3rd condition is: “Its divergence must vanish identically.” From the first two conditions, Einstein concludes that the left-hand side must have the form R(μ,ν) + a g(μ,ν) R and then says that there is a mathematical proof that the 3rd condition implies that the constant a = –1/2 . If Einstein’s equivalence principle is slightly wrong due to the existence of MOND inertia, then it is at least somewhat plausible that one should consider the possibility that constant a = –1/2 + dark-matter-compensation-constant, where dark-matter-compensation-constant has the value (3.9 ± 0.5) × 10^–5 . Actually, one should study the data to get the best fit for dark-matter-compensation-constant at any particular astronomical scale for the data. Am I wrong here? Consider the following:
    https://www.sciencenews.org/article/newton-gravitational-constant-physics
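    [Ed.: The “mathematical proof” Einstein alludes to is just the contracted Bianchi identity, and writing it out shows what shifting a away from –1/2 would cost. A minimal sketch:]

```latex
% Contracted Bianchi identity (an identity of Riemannian geometry):
\nabla^{\mu} R_{\mu\nu} = \tfrac{1}{2}\,\nabla_{\nu} R
% Divergence of the candidate left-hand side R_{\mu\nu} + a\, g_{\mu\nu} R :
\nabla^{\mu}\!\left( R_{\mu\nu} + a\, g_{\mu\nu} R \right)
  = \left( \tfrac{1}{2} + a \right)\nabla_{\nu} R
% This vanishes identically, for arbitrary metrics, only when a = -1/2.
% A shift a = -1/2 + \epsilon leaves a residual \epsilon\,\nabla_{\nu} R,
% so \nabla^{\mu} T_{\mu\nu} = 0 fails unless R happens to be constant.
```

    [Ed.: So a constant offset in a is not a free parameter one can simply fit to data; it also modifies the conservation law on the matter side of the equations.]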

  3. Regarding comments about hybrid theories and also what to do with Einstein’s equations, both concerns are substantially addressed in the Nariai solution. One may consider in this case whether an observer is “Janus-like” at the boundary (of chained observable universes?), or treated as a component in the bulk spacetime. The paradigm would seem to be rich in complementarity. In some sense, the dark matter can be located either beyond the observable universe, or within it.

  4. I know that Milgrom has written extensively about the cosmology connection to MOND, and that subject has been written up in multiple posts here on Triton Station. But it’s late, and I’m too lazy to look it up. So, going by what it says in the Wikipedia page “Modified Newtonian dynamics”, as a quick reference, it states: “There is a potential link between MOND and cosmology. It has been noted that the value of a0 is within an order of magnitude of cH0 [sorry, can’t do subscripts] where c is the speed of light and H0 is the Hubble constant.” Further down it says: “It is also close to the acceleration rate of the universe through square root of Lambda times c squared [had to write it out as can’t do math symbols] where Lambda is the cosmological constant.”

    Thinking about this from a mechanistic perspective, if the rate of accelerated expansion of space is thought of like a ‘gas’, this ‘gas’ is able to penetrate deep within the flat spiral structure of spiral galaxies since their flat sides are exposed to the interstellar medium. But the penetration washes up against the higher acceleration regime ‘gas’ in the inner part of spirals that is Newtonian and stops there. In the case of clusters the outermost galaxies provide shielding from the cosmological acceleration rate ‘gas’ and thus change their dynamics from the MOND acceleration rate seen in galaxies. The problem with this simple idea is to bridge the gap between the acceleration rate of the cosmos and MOND’s a0 value.

    I know this is a pretty wacky idea, but it’s just a thought.
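    [Ed.: The order-of-magnitude coincidences quoted from Wikipedia above are easy to verify with round numbers. A minimal sketch; all values are standard approximate constants, not fits, and a0 = 1.2 × 10⁻¹⁰ m/s² is Milgrom’s canonical value:]

```python
import math

# Round-number constants (SI units); illustrative, not precision values.
c = 2.998e8            # speed of light, m/s
H0 = 70e3 / 3.086e22   # Hubble constant: 70 km/s/Mpc converted to 1/s
Lam = 1.1e-52          # cosmological constant, m^-2
a0 = 1.2e-10           # MOND acceleration scale, m/s^2

cH0 = c * H0                         # ~6.8e-10 m/s^2
c2_sqrtLam = c**2 * math.sqrt(Lam)   # ~9.4e-10 m/s^2

print(f"c*H0          = {cH0:.2e} m/s^2 = {cH0/a0:.1f} * a0")
print(f"c^2*sqrt(Lam) = {c2_sqrtLam:.2e} m/s^2 = {c2_sqrtLam/a0:.1f} * a0")
print(f"c*H0/(2*pi)   = {cH0/(2*math.pi):.2e} m/s^2 (often quoted as ~a0)")
```

    [Ed.: Both quantities land within an order of magnitude of a0, and cH0/(2π) comes out remarkably close to it, which is the form of the coincidence Milgrom usually quotes.]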

  5. “There are lots of ways to estimate the gravitating mass density of the universe.” In order to correctly estimate the gravitating mass density of the universe is it absolutely necessary to fully understand MOND?
    More than 20 years ago, Professor Milgrom stated, “A relativistic extension of MOND, which we still do not have, is needed for conceptual completion of the MOND idea.”
    Milgrom, Mordehai. “MOND—theoretical aspects.” New Astronomy Reviews 46, no. 12 (2002): 741-753. https://www.sciencedirect.com/science/article/abs/pii/S1387647302002439
    According to some members of the Gravity Probe B science team, “Study of the flight data revealed two unanticipated gyroscope behaviors. These two behaviors, a slowly varying readout scale factor and a specific type of Newtonian torque, are now well understood, and have been incorporated into the data analysis model.”
    “Gravity Probe B Experiment Error” by Barry Muhlfelder, G. Mac Keiser, and John Turneaure in APS April Meeting Abstracts, April 2007 Bibcode: 2007APS..APR.L1027M https://ui.adsabs.harvard.edu/abs/2007APS..APR.L1027M/abstract
    Let us assume that gravitational energy is conserved. Consider the following hypothesis: Gravity Probe B’s 4 ultra-precise gyroscopes functioned correctly according to design specifications and confirmed the existence of MOND inertia — furthermore, the results of Gravity Probe B (and similar future experiments) are essential to finding the correct relativistic extension of MOND and how MOND inertia is physically manifested.

    1. It is intriguing that to ‘solve’ the Gravity Probe B anomaly, relatively simple equations were used to ‘buffer out’ the unexpected behavior. The ‘well understood’ is a red herring: there is no way to verify the cause of the anomaly, other than to do what we always should do when the results do not meet expectations: repeat the test.

      1. Hopefully they kept the raw data. Gravity Probe B was an experiment well in the Newtonian regime, nowhere near the MOND regime. The four gyroscopes were the size of ping pong balls. It took a long time to build (in one of the books I’ve read, it’s said they worked very hard to remove the unexpected result they spent 40 years trying to get). PSG has a generalised equation for the geodetic effect – angle per orbit – with very different terms from the GR one, but it gives the same numbers to 16 decimal places. It was derived by assuming the refractive medium slows the upper part of the gyro slightly less than the lower part, due to radial distance, turning it through an angle over time. This shows how a refractive medium can mimic curvature, and effectively extends Eddington’s refractive-medium interpretation of GR (the idea has always had problems, and goes back to Newton) to cover matter as well as light.

  6. I was groggy last night, so to condense what I suggested in the previous ‘gas’ comment: the gist of what I was driving at was that the accelerated expansion of the Universe somehow influences and sets the value of a0, a theme that Milgrom had already addressed in the 80’s, as I recall reading. Assuming this accelerated expansion is uniform throughout the cosmos obviates the need for any instantaneous backreaction over cosmic distances (which would violate causality, as seen in other theories). Then the outermost galaxies in clusters would shield the innermost galaxies, where the greatest deviation (2X) from a0 is seen.

    1. I suppose some screening mechanism could play a role in the cluster problem. It would help to know the 3D structure of clusters so we know which galaxies are at the center and hence most shielded. The 2D structure is fairly clear as projected on the sky, but the depth is usually just assigned by redshift, and that can be fooled by peculiar motions. So we need pretty good direct distance determinations. That’s hard but not impossible in nearby clusters, i.e., Virgo. Virgo is well-enough studied there might be enough direct distance measurements to start to piece something together.

      1. Would inner galaxies be shielded from the excess MOND acceleration typically found in clusters, so that the innermost galaxies might have a0 closer to more isolated galaxies, or are you thinking the other way around?
        If the outer galaxies are subject to an additional MOND acceleration component from the cluster as a whole then the first case seems more probable, but maybe I am misunderstanding the proposed mechanism.

  7. Stacy, I remember that some years ago you reported on some work you had done on the satellite galaxies of Andromeda and how MOND explained their properties. There is a recent (last month) paper on ArXiv https://arxiv.org/abs/2504.08047 Andromeda’s asymmetric satellite system as a challenge to cold dark matter cosmology by Kosuke Jamie Kanehisa, Marcel S. Pawlowski, and Noam Libeskind.

    I was wondering if you had any comments on this. I only picked this up because I watch Nasa Space News channel on YouTube.

    1. The lopsidedness of the dwarf satellites around Andromeda is weird in any paradigm. I mean, really weird. I don’t know what to make of it.

      1. I just this morning had a colleague contact me to say he thought the lopsidedness might be an effect of the quadrupole external field in MOND, similar to how MOND might explain the Planet 9 anomaly in the solar system (a subject I’ve been meaning to write about here for at least a year).

        1. There are so few explanations on the table for the behaviour of dwarf galaxies, it’s good you’ve found another. This preprint (recently updated and in its final form) https://gwwsdk1.wixsite.com/link/newpreprint-pdf addresses the planes of satellite galaxies problem, which is almost certainly related to the other questions.

        2. What are your current thoughts on whether there appears to be a universal external field?

          If there was such a field, it might have different properties from a “real” external field. It may be merely effective between observer and horizon.

          If H0 sets the size of the expanded universe at roughly say 47Gly, and the surface gravity of a “black hole” of this size is roughly a0, then we might experience the unreal external field.

          By measuring rotation curves we should make a transformation so that the galaxy is at rest with respect to the observer, but maybe in doing so we transfer our own a0 to the galaxy.

          Regardless of the galaxy’s inclination or distance from us, we measure the same a0. It is AS IF we stuck an imaginary black hole behind the galaxy, which shifted the center of mass, but affected only the velocity components pointing radially towards the observer.

          Does that make any sense, and can it be modeled?
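          [Ed.: The 47 Gly surface-gravity estimate above can be sanity-checked numerically. A back-of-envelope sketch, assuming (as the commenter does, not as an established result) a “black hole” whose Schwarzschild radius equals that size; its Newtonian surface gravity reduces to c²/2R:]

```python
# Hypothetical check of the commenter's premise: Newtonian surface gravity
# of a mass whose Schwarzschild radius equals the ~47 Gly radius of the
# observable universe, compared to Milgrom's a0. Round numbers throughout.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8            # speed of light, m/s
ly = 9.461e15          # metres per light-year
R = 47e9 * ly          # ~47 Gly, m
a0 = 1.2e-10           # MOND acceleration scale, m/s^2

M = c**2 * R / (2 * G)   # mass whose Schwarzschild radius is R
g = G * M / R**2         # surface gravity; algebraically equals c**2 / (2*R)

print(f"g = {g:.2e} m/s^2, g/a0 = {g/a0:.2f}")
```

          [Ed.: The result does come out within a factor of order unity of a0, which is just the cH0 coincidence in another guise, since R is of order c/H0.]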

  8. Oh my this was wonderful. Thank you. I first want to add another assumption we always make. The universe only had one big bang, and is not cyclic in some way.
    About non-clumpy DM. It could be that there is more than one type of dark matter. Say sterile neutrinos and gravitons have a tiny bit of mass. We are gathering lots of astronomy data. Why are the theorists so far behind? The unfortunate answer is that most of science has been captured by the money and new ideas are not welcome.

  9. Tully’s velocity maps might be the best view of the elephant in its entirety. To fully embrace MOND, you must check Isaac at the door and throw Albert off the bus. These somewhat compatible gravitational theories are only operative in dense environments, and most of the universe is not that. Inertial gravity works there too, but as you have chronicled for three decades, cosmic tension is overwhelming without it. Key observations in modern cosmology include pivotal works by Maxwell, Zwicky, Rubin, Tully Fisher and Milgrom, so MZRTFM? If you really want to wrap your head around Tully’s 3-4d velocity maps, there are new rules: 1) Not all redshift is velocity – This is true in relativity too, but I think Albert underestimates the gravitational gradient, and this leads to the breadcrumb trails that you see in Tully’s velocity paths. 2) Light has an energy budget traveling from place to place, strongly correlating redshift with distance, but at a price. Part of the price is a hundred years of failed BB cosmology. 3) Maxwell is out there in all his glory, bending not only the light trails, but also our lensed view of the cosmos.

  10. According to Fernández-Rañada
    ( https://en.wikipedia.org/wiki/Antonio_Fernández_Rañada ) & Tiemblo-Ramos, astronomical time might be different from atomic time.
    Rañada, Antonio F., and Alfredo Tiemblo. “Time, clocks, parametric invariance and the Pioneer Anomaly.” arXiv preprint gr-qc/0602003 (2006). https://arxiv.org/abs/gr-qc/0602003
    Is it possible that MOND inertia generates a complicated temporal variation requiring a paradigm shift that goes considerably beyond the concept of time in general relativity theory?

    1. I would argue that relativity goes considerably beyond the current paradigm. Why have we not discovered any cosmic equivalence? If E = mc^2 = DE = DMc^2 could we even wrap our heads around that?

  11. Anyone seen the paper on Unified Gravity? A possible candidate for quantum gravity incorporated in the Standard Model of elementary particles. I’m pretty excited! They showed the theory avoids infinities up to first loop order; higher orders are as yet unknown.

    https://iopscience.iop.org/article/10.1088/1361-6633/adc82e
    “Gravity generated by four one-dimensional unitary gauge symmetries and the Standard Model”, Mikko Partanen and Jukka Tulkki.

    From the abstract: “The equivalence principle is formulated by requiring that the renormalized values of the inertial and gravitational masses are equal”. That seems like it is an added requirement, not fundamental to the theory, which would make space for Modified Inertia.

    1. That does sound imposed, which was the case originally – Einstein asserted it as a Principle from which many things followed. The trick is breaking this equivalence without breaking its virtues.

      1. That does seem interesting.
        Just throwing this out there as a half-thought: Maybe equivalence is a temporary condition arising from the inability to distinguish photons arriving from one or the other of the prescribed “equivalents”. So that an additional degree of freedom, in some cases, might break said equivalence? Is there any additional degree of freedom in MOND, such as a recession velocity or scale factor?

  12. “… MOND might explain the Planet 9 anomaly in the solar system …” Yes!!!
    Jones-Smith, Katherine, and Harsh Mathur. “Modified Newtonian Dynamics as an Alternative to the Planet Nine Hypothesis.” arXiv preprint arXiv:2304.00576 (2023). https://arxiv.org/abs/2304.00576
    If pro-MOND researchers push hard on the MOND vis-à-vis Planet 9 issue, then I predict that Milgrom, Tully, and Fisher will become Nobel Prize winners within 3 years.

  13. Is Mordehai Milgrom the world’s greatest living scientist? Are MOND’s empirical successes somehow related to understanding the dark energy phenomenon?
    Is Giuseppe Pipino a neglected genius?
    Pipino, Giuseppe. “Evidences for varying speed of light with time.” Journal of High Energy Physics, Gravitation and Cosmology 5, no. 2 (2019): 395-411.
    https://www.scirp.org/journal/paperinformation?paperid=91057

  14. Should pro-MOND researchers attempt to recruit some of the younger string theorists to work on the relativistic version of MOND? Green-Schwarz-Witten string theory might imply that all gravitons have spin 2, the Heisenberg uncertainty principle needs to be replaced by a stringy uncertainty principle, and there are 3 fundamental types of particles: standard bosons, standard fermions, and dark matter particles that are somehow related to supersymmetry.
    “Superstring Theory: Volume 1, Introduction” by Michael B. Green, John H. Schwarz, Edward Witten, Cambridge U. Press, 1988
    https://books.google.com/books?id=ItVsHqjJo4gC
    Consider 5 hypotheses:
    (1) There are 3 fundamental levels of physics: classical field theory, quantum field theory, and string theory.
    (2) Contemporary string theory (the old string theory) needs to be replaced by a new string theory. The old string theory seems to imply that MOND is wrong, but the new string theory implies MOND is approximately correct.
    (3) Dark matter particles are the downfall of the old string theory.
    (4) The old string theory is too mathematically flexible — it can provide mathematical models of any plausible (or implausible) physics.
    (5) The new string theory needs the contemporary equivalent of Newton or Einstein — this wunderkind should strongly believe that Milgrom is the Kepler of contemporary cosmology.
    https://en.wiktionary.org/wiki/wunderkind

  15. Fact #1. MOND makes many (approximately) successful predictions.
    Fact #2. MOND’s empirical successes require a new paradigm for the foundations of physics.
    Kroupa, Pavel, Marcel Pawlowski, and Mordehai Milgrom. “The failures of the standard model of cosmology require a new paradigm.” International Journal of Modern Physics D 21, no. 14 (2012): 1230003. https://arxiv.org/abs/1301.3907
    Does the new paradigm consist of string theory with MOND inertia?
    Prediction #1. Milgrom, Tully, and Fisher will become Nobel Prize winners within 3 years.
    Prediction #2. Green, Schwarz, and Witten will become Nobel Prize winners within 5 years.

  16. My guess is that string theory depends upon what one might call “Green-Schwarz mathematical physics” or upon conjectural mathematical physics involving the monster group & the 6 pariah groups. (There might be 6 basic quarks because there are 6 pariah groups — google “6 quarks milgrom mond”.)
    Green, Michael B., and John H. Schwarz. “Anomaly cancellations in supersymmetric D = 10 gauge theory and superstring theory.” Physics Letters B 149, no. 1-3 (1984): 117-122. https://www.sciencedirect.com/science/article/abs/pii/037026938491565X
    https://en.wikipedia.org/wiki/Monster_group
    In either case, string theorists might be able to use MOND inertia and some other hypotheses to justify the following prediction: Giuseppe Pipino, Louise Riofrio, & Yves-Henri Sanejouand will become Nobel Prize winners within 10 years.

  17. Newton was highly apologetic for his lack of a mechanism that explains action-at-a-distance. Inertial gravimetric theory invades Newton’s realm, granting his action-at-a-distance a subatomic physical feature tied to the attractive nature of matter. I think MOND exposes a fundamental error in the assumption of infinity in both Isaac’s and Albert’s theories: Gravity is NOT infinite: The mass of a single particle cannot extend to infinity and therefore neither can the mass of a system. MOND exposes this limit of particle extension. Intuitively, this would have the opposite effect of MOND, but only if you assume, as we always have, that the inertial mass of a particle is constant, and not a bulk property of a system. The MOND properties of matter betray this fundamental error in our conception of matter that underlies all Newtonian and Relativistic theories. To realize this you must first acknowledge this gross error in our understanding of matter. Then a constructive model of inertial gravity can be built and tested. (I apologize if this thought extends beyond the scope of Stacy’s extraordinary presentation of cosmic tension, but we are looking for solutions: The problems are defined ad nauseam.)

    1. Indeed – Newton was careful to say “everything happens AS IF…” [the effective force law were inverse square], in deference to those who complained that action at a distance was a form of magic. I have attempted to do the same – MOND is the effective force law in galaxies; that is telling us something important that we do not currently understand. It apparently seems like magic to some, as the inverse square law did to Newton’s critics: that underlines how important it is.

  18. “… MOND is the effective force law in galaxies; that is telling us something important that we do not currently understand.” YES!!! For many years, I stupidly underestimated Pipino. I think that I no longer underestimate his ideas. Think about this possibility: There are 3 fundamentally different forms of inertia: Newton-Einstein inertia, MOND inertia, and inflaton inertia. MOND inertia might explain the dark matter phenomenon, and inflaton inertia might explain the dark energy phenomenon. Form a 3-scientist team consisting of Giuseppe Pipino, Francesco Lelli, and some young, highly-talented Italian string theorist.

Comments are closed.