I took the occasion of the NEIU debate to refresh my knowledge of the status of some of the persistent tensions in cosmology. There wasn’t enough time to discuss those, so I thought I’d go through a few of them here. These issues tend to get downplayed or outright ignored when we hype LCDM’s successes.
When I teach cosmology, I like to have the students do a project in which they each track down a measurement of some cosmic parameter, and then report back on it. The idea, when I started doing this back in 1999, was to combine the different lines of evidence to see if we reach a consistent concordance cosmology. Below is an example from the 2002 graduate course at the University of Maryland. Does it all hang together? I ask the students to debate the pros and cons of the various lines of evidence.

The concordance cosmology is the small portion of this diagram that was not ruled out. This is the way in which LCDM was established. Before we had either the CMB acoustic power spectrum or Type Ia supernovae, LCDM was pretty much a done deal based on a wide array of other astronomical evidence. It was the subsequentα agreement of the Type Ia SN and the CMB that cemented the picture in place.
The implicit assumption in this approach is that we have identified the correct cosmology by process of elimination: whatever is left over must be the right answer. But what if nothing is left over?
I have long worried that we’ve painted ourselves into a corner: maybe the concordance window is merely the least unlikely spot before everything is excluded. Excluding everything would effectively falsify LCDM cosmology, if not the more basic picture of an expanding universe% emerging from a hot big bang. Once one permits oneself to think this way, then it occurs to one that perhaps the reason we have to invoke the twin tooth fairies of dark matter and dark energy is to get FLRW to approximate some deeper, underlying theory.
Most cosmologists do not appear to contemplate this frightening scenario. And indeed, before we believe something so drastic, we have to have thoroughly debunked the standard picture – something rather difficult to do when 95% of it is invisible. It also means believing all the constraints that call the standard picture into question (hence why contradictory results experience considerably more scrutiny* than conforming results). The fact is that some results are more robust than others. The trick is deciding which to trust.^
In the diagram above, the range of Ωm from cluster mass-to-light ratios comes from some particular paper. There are hundreds of papers on this topic, if not thousands. I do not recall which one this particular illustration came from, but most of the estimates I’ve seen from the same method come in somewhat higher. So if we slide those green lines up, the allowed concordance window gets larger.
The practice of modern cosmology has necessarily been an exercise in judgement: which lines of evidence should we most trust? For example, there is a line up there for rotation curves. That was my effort to ask what combination of cosmological parameters led to dark matter halo densities that were tolerable to the rotation curve data of the time. Dense cosmologies give birth to dense dark matter halos, so everything above that line was excluded because those parameters cram too much dark matter into too little space. This was a pretty conservative limit at the time, but it is predicated on the insistence of theorists that dark matter halos had to have the NFW form predicted by dark matter-only simulations. Since that time, simulations including baryons have found any number of ways to alter the initial cusp. This in turn means that the constraint no longer applies, as the halo might have been altered from its cosmology-predicted initial form. Whether the mechanisms that might cause such alterations are themselves viable becomes a separate question.
If we believed all of the available constraints, then there is no window left and FLRW is already ruled out. But not all of those data are correct, and some contradict each other, even absent the assumption of FLRW. So which do we believe? Finding one’s path in this field is like traipsing through an intellectual mine field full of hardened positions occupied by troops dedicated to this or that combination of parameters.

It is in every way an invitation to confirmation bias. The answer we get depends on how we weigh disparate lines of evidence. We are prone to give greater weight to lines of evidence that conform to our pre-established+ beliefs.
So, with that warning, let’s plunge ahead.
The modern Hubble tension
Gone but not yet forgotten are the Hubble wars between camps Sandage (H0 = 50!) and de Vaucouleurs (H0 = 100!). These were largely resolved early this century thanks to the Hubble Space Telescope Key Project on the distance scale. Obtaining this measurement was the major motivation to launch HST in the first place. Finally, this long standing argument was resolved: nearly everyone agreed that H0 = 72 km/s/Mpc.
That agreement was long-lived by the standards of cosmology, but did not last forever. Here is an illustration of the time dependence of H0 measurements this century, from Freedman (2021):

There are many illustrations like this; I choose this one because it looks great and seems to have become the go-to for illustrating the situation. Indeed, it seems to inform the attitude of many scientists close to but not directly involved in the H0 debate. They seem to perceive this as a debate between Adam Riess and Wendy Freedman, who have become associated with the Cepheid and TRGB$ calibrations, respectively. This is a gross oversimplification, as they are not the only actors on a very big stage&. Even in this plot, the first Cepheid point is from Freedman’s HST Key Project. But this apparent dichotomy between calibrators and people seems to be how the subject is perceived by scientists who have neither time nor reason for closer scrutiny. Let’s scrutinize.
Fits to the acoustic power spectrum of the CMB agreed with astronomical measurements of H0 for the first decade of the century. Concordance was confirmed. The current tension appeared with the first CMB data from Planck. Suddenly the grey band of the CMB best-fit no longer overlapped with the blue band of astronomical measurements. This came as a shock. Then a new (red) band appears, distinguishing between the “local” H0 calibrated by the TRGB from that calibrated by Cepheids.
I think I mentioned that cosmology was an invitation to confirmation bias. If you put a lot of weight on CMB fits, as many cosmologists do, then it makes sense from that perspective that the TRGB measurement is the correct one and the Cepheid H0 must be wrong. This is easy to imagine given the history of systematic errors that plagued the subject throughout the twentieth century. This confirmation bias makes one inclined to give more credence to the new# TRGB calibration, which is only in modest tension with the CMB value. The narrative is then simplified to two astronomical methods that are subject to systematic uncertainty: one that agrees with the right answer and one that does not. Ergo, the Cepheid H0 is in systematic error.
This narrative oversimplifies the matter to the point of being actively misleading, and the plot above abets this by focusing on only two of the many local measurements. There is no perfect way to do this, but I had a go at it last year. In the plot below, I cobbled together all the data I could without going ridiculously far back, but chose to show only one point per independent group, the most recent one available from each, the idea being that the same people don’t get new votes every time they tweak their result – that’s basically what is illustrated above. The most recent points from above are labeled Cepheids & TRGB (the date of the TRGB goes to the full Chicago-Carnegie paper, not Freedman’s summary paper where the above plot can be found). See McGaugh (2024) for the references.
When I first made this plot, I discovered that many measurements of the Hubble constant are not all that precise: the plot was an indecipherable forest of error bars. So I chose to make a cut at a statistical uncertainty of 3 km/s/Mpc: worse than that, the data are shown as open symbols sans error bars; better than that, the datum gets explicit illustration of both its statistical and systematic uncertainty. One could make other choices, but the point is that this choice paints a different picture from the choice made above. One of these local measurements is not like the others, inviting a different version of confirmation bias: the TRGB point is the outlier, so perhaps it is the one that is wrong.
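For anyone curious how that presentation cut works mechanically, here is a minimal matplotlib sketch of the selection logic. The numbers are illustrative placeholders only, not the values in the actual compilation plotted below, and showing the statistical and quadrature-summed errors as nested bars is just one possible convention.

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative placeholder entries only -- NOT the actual H0 compilation.
# Each tuple: (label, H0, statistical error, systematic error), all in km/s/Mpc.
measurements = [
    ("method A", 73.0, 1.0, 1.4),
    ("method B", 69.8, 0.8, 1.7),
    ("method C", 71.5, 3.8, 2.0),   # statistical error > 3: open symbol, no error bar
    ("method D", 75.1, 2.3, 3.0),
]

fig, ax = plt.subplots()
for i, (label, H0, stat, sys) in enumerate(measurements):
    if stat > 3.0:
        # Imprecise measurement: open symbol without error bars.
        ax.plot(H0, i, "o", mfc="none", mec="gray")
    else:
        # Precise measurement: thick bar for the statistical error,
        # thin bar for statistical and systematic added in quadrature.
        ax.errorbar(H0, i, xerr=stat, fmt="o", color="k",
                    elinewidth=2, capsize=3)
        ax.errorbar(H0, i, xerr=np.hypot(stat, sys), fmt="none",
                    ecolor="k", elinewidth=0.8, capsize=0)

ax.set_yticks(range(len(measurements)))
ax.set_yticklabels([m[0] for m in measurements])
ax.set_xlabel("H0 [km/s/Mpc]")
plt.show()
```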

I include the measurement our group made not to note that we’ve done this too, but to highlight an underappreciated aspect of the apparent tension between Cepheid and TRGB calibrations. There are 50 galaxies that calibrate the baryonic Tully-Fisher relation, split nearly evenly between galaxies whose distance is known through Cepheids (blue points) and TRGB (red points). They give the same answer. There is no tension between Cepheids and the TRGB here.
Chasing this up, it appears to me that what happened was that Freedman’s group reanalyzed the data that calibrate the TRGB, and wound up with a slightly different answer. This difference does not appear to be in the calibration equation (the absolute magnitude of the tip of the red giant branch didn’t change that much), but in something to do with how the tip magnitude is extracted. Maybe, I guess? I couldn’t follow it all the way, and I got bad vibes reminding me of when I tried to sort through Sandage’s many corrections in the early ’90s. That doesn’t make it wrong, but the point is that the discrepancy is not between Cepheids and TRGB calibrations so much as it is between the TRGB as implemented by Freedman’s group and the TRGB as implemented by others. The depiction of the local Hubble constant debate as being between Cepheid and TRGB calibrations is not just misleading, it is wrong.
Can we get away from Cepheids and the TRGB entirely? Yes. The black points above are for megamasers and gravitational lensing. These are geometric methods that do not require intermediate calibrators like Cepheids at all. It’s straight trigonometry. Both indicate H0 > 70. Which way is our confirmation bias leaning now?
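To see how direct the megamaser geometry is, here is the back-of-the-envelope version (the Megamaser Cosmology Project does the full disk modeling, but the essence is this): the high-velocity masers give the Keplerian rotation speed $v$ at radius $R$ from their Doppler shifts, the systemic masers reveal the centripetal acceleration $a$ through their velocity drift, and the angular offset $\theta$ of the masers on the sky gives $R/D$. Then

$$a = \frac{v^2}{R}, \qquad \theta = \frac{R}{D} \quad\Longrightarrow\quad D = \frac{v^2}{a\,\theta},$$

and $H_0$ follows from this distance together with the recession velocity of the host galaxy. No Cepheids, no supernovae, no intermediate rungs.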
The way these things are presented has an impact on scientific consensus. A fascinating experiment on this has been done in a recent conference report. Sometimes people poll conference attendees in an attempt to gauge consensus; this report surveys conference attendees “to take a snapshot of the attitudes of physicists working on some of the most pressing questions in modern physics.” One of the topics queried is the Hubble tension. Survey says:

First, a shout out to the 1/4 of scientists who expressed no opinion. That’s the proper thing to do when you’re not close enough to a subject to make a well-informed judgement. Whether one knows enough to do this is itself a judgement call, and we often let our arrogance override our reluctance to over-share ill-informed opinions.
Second, a shout out to the folks who did the poll for including a line for systematics in the CMB. That is a logical possibility, even if only 3 of the 72 participants took it seriously. This corroborates the impression I have that most physicists seem to think the CMB is perfect like some kind of holy scripture written in fire on the primordial sky, so must be correct and cannot be questioned, amen. That’s silly; systematics are always a possibility in any observation of the sky. In the case of the CMB, I suspect it is not some instrumental systematic but the underlying assumption of LCDM FLRW that is the issue; once one assumes that, then indeed, the best fit to the Planck data as published is H0 = 67.4, with H0 > 68 being right out. (I’ve checked.)
A red flag that the CMB is where the problem lies is the systematic variation of the best-fit parameters along the trench of minimum χ²:

I’ve shown this plot and variations for other choices of H0 before, yet it never fails to come as a surprise when I show it to people who work closely on the subject. I’m gonna guess that extends to most of the people who participated in the survey above. Some red flags prove to be false alarms, some don’t, but one should at least be aware of them and take them into consideration when making a judgement like this.
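To make the direction of that trench concrete, here is the back-of-the-envelope version of why the best-fit parameters slide together (the full likelihood is more subtle, but this captures the gist): the angular scale of the acoustic peaks is measured to better than a tenth of a percent, and within flat LCDM that pins down a parameter combination close to $\Omega_m h^3$. So along the trench of minimum χ²,

$$\theta_* = \frac{r_s(z_*)}{D_A(z_*)} \approx \mathrm{const} \quad\Longrightarrow\quad \Omega_m h^3 \approx \mathrm{const},$$

which means that pushing $H_0$ from 67.4 up to 73 km/s/Mpc drags $\Omega_m$ down by a factor of $(73/67.4)^3 \approx 1.27$, from roughly 0.32 to roughly 0.25, with corresponding shifts in the other parameters. The fit does not simply get worse at fixed parameters; it slides along this degeneracy, which is why the best-fit values themselves vary systematically with the assumed $H_0$.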
The plurality (35%) of those polled selected “systematic error in supernova data” as the most likely cause of the Hubble tension. It is indeed a common attitude, as I mentioned above, that the Hubble tension is somehow a problem of systematic errors in astronomical data like back in the bad old days** of Sandage & de Vaucouleurs.
Let’s unpack this a bit. First, the framing: systematic error in supernova data is not the issue. There may, of course, be systematic uncertainties in supernova data, but that’s not a contender for what is causing the apparent Hubble tension. The debate over the local value of H0 is in the calibrators of supernovae. This is often expressed as a tension between Cepheid and TRGB calibrators, but as we’ve seen, even that is misleading. So posing the question this way is all kinds of revealing, including of some implicit confirmation bias. It’s like putting what you presume to be the right answer to a multiple choice question first and then making up some random alternatives.
So what do we learn from this poll for consensus? There is no overwhelming consensus, and the most popular choice appears to be ill-informed. This could be a meme. Tell me you’re not an expert on a subject by expressing an opinion as if you were.
The kicker here is that this was a conference on black hole physics. There seems to have been some fundamental gravitational and quantum physics discussed, which is all very interesting, but this is a community that is pretty far removed from the nitty-gritty of astronomical observations. There are many other polls reported in this conference report, many of them about esoteric aspects of black holes that I find interesting but would not myself venture an opinion on: it’s not my field. It appears that a plurality of participants at this particular conference might want to consider adopting that policy for fields beyond their own expertise.
I don’t want to be too harsh, but it seems like we are repeating the same mistakes we made in the 1980s. As I’ve related before, I came to astronomy from physics with the utter assurance that H0 had to be 50. It was Known. Then I met astronomers who were actually involved in measuring H0 and they were like, “Maybe it is ~80?” This hurt my brain. It could not be so! and yet they turned out to be correct within the uncertainties of the time. Today, similar strong opinions are being expressed by the same community (and sometimes by the same people) who were wrong then, so it wouldn’t surprise me if they are wrong now. Putting how they think things should be ahead of how they are is how they roll.
There are other tensions besides the Hubble tension, but I’ll get to them in future posts. This is enough for now.
αAs I’ve related before, I date the genesis of concordance LCDM to the work of Ostriker & Steinhardt (1995), though there were many other contributions leading to it (e.g., Efstathiou et al. 1990). Certainly many of us anticipated that the Type Ia SN experiments would confirm or deny this picture. Since the issue of confirmation bias is ever-present in cosmic considerations, it is important to understand this context: the acceleration of the expansion rate that is often depicted as a novel discovery in 1998 was an expected result. So much so that at a conference in 1997 in Aspen I recall watching Michael Turner badger the SN presenters to Proclaim Lambda already. One of the representatives from the SN teams was Richard Ellis, who wasn’t having it: the SN data weren’t there yet even if the attitude was. Amusingly, I later heard Turner claim to have been completely surprised by the 1998 discovery, as if he hadn’t been pushing for it just the year before. Aspen is a good venue for discussion; I commented at the time that the need to rehabilitate the cosmological constant was a big stop sign in the sky. He glared at me, and I’ve been on his shit list ever since.
%I will not be entertaining assertions that the universe is not expanding in the comments: that’s beyond the scope of this post.
*Every time a paper corroborating a prediction of MOND is published, the usual suspects get on social media to complain that the referee(s) who reviewed the paper must be incompetent. This is a classic case of admitting you don’t understand how the process works by disparaging what happened in a process to which you weren’t privy. Anyone familiar with the practice of refereeing will appreciate that the opposite is true: claims that seem extraordinary are consistently held to a higher standard.
^Note that it is impossible to exclude the act of judgement. There are approaches to minimizing this in particular experiments, e.g., by doing a blind analysis of large scale structure data. But you’ve still assumed a paradigm in which to analyze those data; that’s a judgement call. It is also a judgement call to decide to believe only large scale data and ignore evidence below some scale.
+I felt this hard when MOND first cropped up in my data for low surface brightness galaxies. I remember thinking, “How can this stupid theory get any predictions right when there is so much evidence for dark matter?” It took a while for me to realize that dark matter really meant mass discrepancies. The evidence merely indicates a problem; the misnomer presupposes the solution. I had been working so hard to interpret things in terms of dark matter that it came as a surprise that once I allowed myself to try interpreting things in terms of MOND I no longer had to work so hard: lots of observations suddenly made sense.
$TRGB = Tip of the Red Giant Branch. Low metallicity stars reach a consistent maximum luminosity as they evolve up the red giant branch, providing a convenient standard candle.
&Where the heck is Tully? He seldom seems to get acknowledged despite having played a crucial role in breaking the tyranny of H0 = 50 in the 1970s, having published steadily on the topic, and his group continues to provide accurate measurements to this day. Do physics-trained cosmologists even know who he is?
#The TRGB was a well-established method before it suddenly appears on this graph. That it appears this way shortly after the CMB told us what answer we should get is a more worrisome potential example of confirmation bias, reminiscent of the situation with the primordial deuterium abundance.
**Aside from the tension between the TRGB as implemented by Freedman’s group and the TRGB as implemented by others, I’m not aware of any serious hint of systematics in the calibration of the distance scale. Can it still happen? Sure! But people are well aware of the dangers and watch closely for them. At this juncture, there is ample evidence that we may indeed have gotten past this.
Ha! I knew the Riess reference off the top of my head, but lots of people have worked on this so I typed “hubble calibration not a systematic error” into Google to search for other papers only to have its AI overview confidently assert
“The statement that Hubble calibration is not a systematic error is incorrect.”
– Google AI
That gave me a good laugh. It’s bad enough when overconfident underachievers shout about this from the wrong peak of the Dunning-Kruger curve without AI adding its recycled opinion to the noise, especially since its “opinion” is constructed from the noise.
The best search engine for relevant academic papers is NASA ADS; putting the same text in the abstract box returns many hits that I’m not gonna wade through. (A well-structured ADS search doesn’t read so casually; apparently the same still applies to Google.)
There is a major recent white paper on cosmological tensions, including especially the Hubble tension:
https://arxiv.org/abs/2504.01669
The local value of the Hubble constant (more precisely the redshift gradient, or how quickly redshift rises with distance) can be measured independently of Cepheids and supernovae:
https://arxiv.org/abs/2502.15935
There is little sign of a mismatch between Cepheid and TRGB distances:
https://iopscience.iop.org/article/10.3847/1538-4357/ad8c21
In general, there is compelling evidence that the Hubble tension is real. Supernova systematics cannot possibly explain the Hubble tension as it is evident without including supernova data.
A more frightening scenario—one rarely even considered—is that there simply is no deeper, underlying theory, because complexity itself erects a barrier no formal system can cross without inconsistency. In fact, formal mathematical results (e.g., those of Kolmogorov and Chaitin) already show this to be the case.
Even if a fully consistent theory of quantum gravity were to exist—and that’s a very big “if”—it would inherit the same complexity bounds as quantum mechanics. Such a theory must model quantum behavior, and once a certain complexity threshold is passed that behavior “collapses” into classicality. In the classical limit, it would therefore have to reduce to something like General Relativity (or exhibit MOND‑like corrections), each of which carries its own complexity‑driven limitations.
What naive reductionists fail to acknowledge is that there can be no single, universal theory—precisely for the same reasons Hilbert’s axiomatization program collapsed: you cannot evade incompleteness.
” … it occurs to one that perhaps the reason we have to invoke the twin tooth fairies of dark matter and dark energy is to get FLRW to approximate some deeper, underlying theory. ” Consider “Witten, Milgrom, Brown, and Kroupa on Modified Newtonian Dynamics” https://vixra.org/pdf/1501.0123v1.pdf
If my basic theory is empirically valid, then my basic theory should have a mathematical model, which has a MOND inertial projection into a stringy mathematical model involving MOND inertia. My guess is that some MOND inertia model, involving a continual binary cycle of MOND inertia & dark energy based upon string vibrations will cause Milgrom to win a Nobel Prize within 3 years & will cause Witten to win a Nobel Prize within 5 years.
Excellent overview of the current tension!
Supernova systematics cannot be the cause of the Hubble tension, but they could be the cure: THE cure being to blow the LCDM cosmology out of the water. Riess, Perlmutter and others did everything they could with the supernova data to tuck their H0 value into the low 70s. Unconstrained by the acknowledged goal of coinciding with the CMB H0 value, it is my studied opinion that supernovae tell us a much broader picture of fundamental errors in our cosmic baseline.
It is my experience in presenting on this topic that researchers in-and-out of the field are genuinely seeking answers, but cannot figure out when or where everything first headed south. Unconstrained by confirmation bias, I think supernova events can and will lead us there.
If somebody surveyed me I would tend to agree with the premise of this post, which is that the CMB should be interrogated further as perhaps the prime suspect.
We know the CMB represents the boundary of the observable universe, but of course this is measured at an observer. So am I correct in saying that an observer is at the boundary of the observable universe? Seems obvious, right?
How does one model that?
I wonder about the merits of modeling superluminal observers, and whether there really might be something eye-opening there. Intuitively there seems to be a possible connection between “superluminal observers” and the Nariai horizons, particularly as it relates to H0 and a0.
Looking forward to your discussion on structure development. I am curious what is meant by the term monolithic in this context as opposed to hierarchical.
Is it necessary to contest hierarchical structure, or can it still be accommodated in some underlying theory?
One thing that comes to mind is an analogy to the “Einstein tile”, which is an interesting shape that appears on a 2D hexagonal lattice and makes a non-repeating pattern that creates hierarchical structures.
I guess one question is whether the hierarchical structure is possible simply from the tiling of spacetime itself? The closest analogy I can think of in a higher dimension might be something like the K4 crystal existing along with its stable twin the Diamond. Maybe somebody will discover that hierarchical structures have more than one possible origin, and perhaps resolve the tension that way.
I have discussed structure formation in https://tritonstation.com/2024/12/20/on-the-timescale-for-galaxy-formation/ and the subsequent posts. See also http://astroweb.case.edu/ssm/mond/LSSinMOND.html and the papers cited there.
The late John Huchra (passed away 2010), best known for the CfA redshift survey, had an interesting page about the Hubble constant. I checked the CfA website, and it is still there:
https://lweb.cfa.harvard.edu/~dfabricant/huchra/hubble/index.htm
It included a table of all published values of H0 from 1929 to 2010, here:
https://lweb.cfa.harvard.edu/~dfabricant/huchra/hubble.plot.dat
I took this and loaded it into Excel. If you take the average of all values from 1929, you get H0 = 73.0 km/s/Mpc
Taking only values from 1960 onwards, you get H0 = 67.6 km/s/Mpc
Interesting coincidence.
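For anyone who wants to repeat that exercise without Excel, here is a minimal sketch in Python. It assumes you have exported Huchra’s table to a CSV with columns named year and H0; the filename and column names are placeholders, and mapping the actual columns of hubble.plot.dat is left to the reader.

```python
import pandas as pd

# Assumes a CSV exported from Huchra's compilation with columns "year" and "H0".
# The filename and column names here are placeholders for whatever you export.
df = pd.read_csv("huchra_h0.csv")

print("Average of all values (1929 on):", round(df["H0"].mean(), 1))
print("Average from 1960 onward:       ", round(df[df["year"] >= 1960]["H0"].mean(), 1))
```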
Freedman played a game like this in her review, coming up with an average very close to her current 69.
In the mid-90s the winds were blowing in favor of 65. I made an estimate at the time based on the variation of rotation curve shape with luminosity and obtained 72 +/- 2 +/- ? I didn’t publish it because there weren’t enough calibrators to pretend to quantify the systematic uncertainty. This is what the HST Key Project did at the turn of the century. That has held up well; the sloppy mess of systematics we remember from the 20th century really isn’t in play anymore.
Reading the notes in Huchra’s data file, I see that H0 = 72 is credited to “the HST Key Project on the Extragalactic Distance Scale, which measured Cepheid based relative distances to two dozen nearby spiral galaxies”. This reminds me to point out that there are 50 calibrating galaxies in the BTFR plot above. We can do the same exercise, and simply ask those galaxies with direct distance determinations what the average H0 = V/D is. From that, we obtain H0 = 73 – exactly what Riess et al obtain. This isn’t surprising, since we’re using the same Cepheid calibration, but we’re also using an equal number of galaxies with TRGB distances and those give the same answer – for the available galaxies (see Tully’s Extragalactic Distance Database).

The value we add is to apply this BTFR calibration to another ~100 still more distant galaxies; doing so gives H0 = 75. One would hope that we would reach far enough out to probe the pure Hubble flow, but alas, this is not clearly the case: the biggest systematic uncertainty (that’s known) is in the flow model, which quantifies how galaxies deviate from pure expansion due to the gravitational influence of large masses like the Virgo cluster.
I mention this because it is an interesting detail that I didn’t mention above, and a reminder that the +/- can flop either way. There seems to be a lot of assumption bias that probes to larger scales will tend towards the Planck result, but in this case it went the other way – albeit within the errors, so not meaningfully different from 73.
About Brent Tully, who you say never seems to get acknowledged these days – to what extent do you think he’s the grandfather of MOND, which came five years after the TFR? It seems to me it at least set Milgrom on the path to MOND, and was probably one of the main starting points for cracking into the mathematics, and turning an empirical relation into a gravity theory (the effective proportionality constant between GM and v^4 is a0). So maybe the TFR was a bit like Max Planck’s early work on quantization, which eventually led to QM.
The motivation for MOND, like dark matter, was flat rotation curves, but TF also played a role. Looking back at Milgrom’s original papers, the TF data at the time gave slopes anywhere from 3 to 5. It was a mess, but it excluded a length scale for the modification as that predicts a TF relation with slope 2. There were some arguments that the best TF slope was 4, which leads to an acceleration scale. There was no guarantee that would work out as well as it has, but the thing that really caught my attention was the predicted lack of surface brightness residuals: https://tritonstation.com/2022/04/08/two-hypotheses/
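To spell out why the slope discriminates, here is the standard back-of-the-envelope argument, taking a simple 1/r force form for the length-scale case (one choice among possible modifications): if the change set in beyond some fixed radius $r_0$, with the force going over to $GM/(r_0 r)$, the asymptotic circular speed would satisfy

$$\frac{V^2}{r} = \frac{GM}{r_0\, r} \;\Longrightarrow\; V^2 = \frac{GM}{r_0} \;\Longrightarrow\; M \propto V^2,$$

a Tully-Fisher slope of 2. If instead the modification sets in below a fixed acceleration $a_0$, the deep-MOND force $\sqrt{GM\,a_0}/r$ gives

$$\frac{V^2}{r} = \frac{\sqrt{GM\,a_0}}{r} \;\Longrightarrow\; V^4 = GM\,a_0 \;\Longrightarrow\; M \propto V^4,$$

a slope of 4. A measured slope anywhere near 4 thus points to an acceleration scale rather than a length scale.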
I discussed the history of MOND and the state of the evidence from galaxy rotation curves in the late 1970s and early 1980s with the MOND community, and especially with Milgrom. This is discussed in section 3.1.1 (page 14 of the arXiv version) of my MOND review:
arxiv.org/abs/2110.06936
Briefly, it was clear that the flatline level of the rotation curve scales as the stellar mass to the power of 0.2-0.4. It was not known that the power would be exactly 0.25 or that the relation was actually with the total baryonic mass and the observations until then just happened to have targeted gas-poor galaxies. Moreover, the observations often did not really extend into the asymptotically flat outer region of the rotation curve.
So no one gets confused about the slope: when I say 3 – 5, I mean x in M ~ V^x. When Indranil says 0.2 – 0.4, he means y in V ~ M^y. So the MOND prediction of x = 4 corresponds to y = 0.25.
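In symbols, the two conventions are just inverses of one another:

$$M \propto V^x \;\Longleftrightarrow\; V \propto M^{1/x}, \qquad y = \frac{1}{x},$$

so the MOND value $x = 4$ corresponds to $y = 0.25$.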
There is a paper on the arXiv in which modified gravity resolves these tensions: https://arxiv.org/abs/2504.14609
“The motivation for MOND, like dark matter, was flat rotation curves, but TF also played a role.” Should both Tully & Fisher have Nobel Prizes in astronomy? Let us assume that gravitational energy is conserved, all gravitons have spin 2, and MOND inertia is a physical reality. Consider Einstein’s equivalence principle.
https://en.wikiquote.org/wiki/Equivalence_principle
Do Einstein’s field equations need to be slightly modified, because there is a stringy dark force cycle that involves MOND inertia & dark energy? Consider some physical hypotheses: (1) String theory can be modified to explain MOND inertia (MI) & the Tully-Fisher relation. (2) There are 2 fundamental forms of inertia: mass-energy inertia and MOND inertia. Einstein’s equivalence principle is empirically valid but ignores the physical existence of MOND inertia. MOND inertia inhibits the occurrence of big bangs and inflaton fields associated with big bangs, but there is a stringy uncertainty principle that somehow indicates the probability that the MI inhibition can fail. (3) Dark energy and MOND inertia are universal patterns that organize string vibrations. The observable universe runs on alternating cycles of dark energy surges and MOND inertia surges justified by string theory. During each Planck time interval there is either a dark energy surge or a MOND inertia surge. Each dark energy surge causes a stringy reaction in the form of a MOND inertia surge. Each MOND inertia surge causes a stringy reaction in the form of a dark energy surge. (4) Einstein’s field equations need to be somehow modified to include a MOND inertia term.
You say the drift in the CMB value for H0 away from the LCDM window correlates with the inclusion of higher multipoles, and that as the effect of gravitational lensing is important in those adjustments, the anomalously massive galaxies that have been found recently may be doing it. How viable is the approach you mention to calculate the effect of lensing on the CMB? Do you think it might be done in the future?
Great question. One needs to modify one of the Boltzmann codes that is used to calculate the CMB power spectrum. That can be done, and I’m told there is even a version of such a code out there built to do that sort of thing. To do it right, one would need a proper theory like AeST. Short of that, one could estimate the rate of growth as a function of mass from Sanders’s work and implement it as an excess over the standard calculation. That’s not as straightforward as it sounds, but to first order one would make the lensing perturbations grow as a^2 instead of a(t).
Having talked to a few people and thought about it more, I am currently inclined to suspect the extra lensing (if there is any) is contributed more by early massive clusters at z ~ 2 than galaxies at z ~ 10, but it depends on the underlying theory. All scales matter at some level, but the bigger masses may be more effective at blurring the surface of last scattering, which is the systematic that excess lensing would cause.
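For anyone who wants a feel for the size of such an effect without writing a new Boltzmann code, a crude first pass is to rescale the lensing amplitude in CAMB. To be clear, this is just the phenomenological Alens knob applied to a standard LCDM calculation – not the a^2 growth described above, and certainly not a proper AeST computation – but it shows how excess lensing smooths the high-multipole acoustic peaks.

```python
import camb

def lensed_tt(alens):
    """Lensed TT spectrum for a Planck-like LCDM cosmology with rescaled lensing."""
    pars = camb.CAMBparams()
    pars.set_cosmology(H0=67.4, ombh2=0.0224, omch2=0.120)
    pars.InitPower.set_params(As=2.1e-9, ns=0.965)
    pars.set_for_lmax(2500, lens_potential_accuracy=1)
    pars.Alens = alens  # phenomenological rescaling of the lensing potential power
    results = camb.get_results(pars)
    # Column 0 of the "total" spectrum is TT, in muK^2.
    return results.get_cmb_power_spectra(pars, CMB_unit="muK")["total"][:, 0]

standard = lensed_tt(1.0)   # standard LCDM lensing
excess = lensed_tt(1.2)     # 20% excess lensing power

# Excess lensing smooths the high-l acoustic peaks and troughs.
for ell in (1000, 1500, 2000):
    print(ell, f"{excess[ell] / standard[ell] - 1:+.3%}")
```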
Some researchers have attempted to embed the MOND regime into a fully relativistic framework—akin to how General Relativity extends Newtonian gravity (for example, Aether–Scalar–Tensor theories like AeST)—on the premise that MOND-like corrections might resolve GR’s discrepancies at galactic and larger scales without invoking “the twin tooth fairies of dark matter and dark energy”.
By contrast, practitioners of quantum many-body physics have never sought to modify quantum mechanics itself to “add” emergent phenomena. They recognized that properties like superconductivity or collective excitations arise from the complex interactions within large assemblies of particles, not from amendments to the fundamental theory.
Perhaps MOND should be viewed similarly: since gravity is roughly 10^36 times weaker than electromagnetism, genuinely novel gravitational behaviors would only emerge at the scale of very large systems—galaxies and beyond.
If a “fundamental” theory incorporates corrections at a given hierarchical level (for example, MOND at the scale of galaxies), it will inevitably fail to capture new emergent properties at higher levels.
It is no coincidence that so-called “fundamental” theories are effectively applied only to very simple systems; in many-body modeling, phenomenological models are the absolute norm. MOND merely sets the record straight for GR.
The “in principle” argument advanced by reductionists is a rhetorical deflection that masks the fact that, in complex systems, reduction to first principles is only feasible in very simple, ideal cases—and formal results in mathematics fully support this limitation: irreducibility is the norm, not the exception.
The universality of “fundamental” theories championed by many theoreticians is a myth.