I’ve been busy, and a bit exhausted, since the long series of posts on structure formation in the early universe. The thing I like about MOND is that it helps me understand – and successfully predict – the dynamics of galaxies. Specific galaxies that are real objects: one can observe this particular galaxy and predict that it should have this rotation speed or velocity dispersion. In contrast, LCDM simulations can only make statistical statements about populations of galaxy-like numerical abstractions; they can never be equated to real-universe objects. Worse, they obfuscate rather than illuminate. In MOND, the observed centripetal acceleration follows directly from that predicted by the observed distribution of stars and gas. In simulations, this fundamental observation is left unaddressed, and we are left grasping at straws trying to comprehend how the observed kinematics follow from an invisible, massive dark matter halo that starts with the NFW form but somehow gets redistributed just so by inadequately modeled feedback processes.
Simply put, I do not understand galaxy dynamics in terms of dark matter, and not for want of trying. There are plenty of people who claim to do so, but they appear to be fooling themselves. Nevertheless, what I don’t like about MOND is the same thing that they don’t like about MOND: I don’t understand the basics of cosmology with it.
Specifically, what I don’t understand about cosmology in modified dynamics is the expansion history and the geometry. That’s a lot, but not everything. The early universe is fine: the expanding universe went through an early hot phase that bequeathed to us the relic radiation field and the abundances of the light elements through big bang nucleosynthesis. There’s nothing about MOND that contradicts that, and arguably MOND is in better agreement with BBN than LCDM, there being no tension with the lithium abundance – this tension was not present in the 1990s, and was only imposed by the need to fit the amplitude of the second peak in the CMB.
But we’re still missing some basics that are well understood in the standard cosmology, and which are in good agreement with many (if not all) of the observations that lead us to LCDM. So I understand the reluctance to admit that maybe we don’t know as much about the universe as we think we do. Indeed, it provokes strong emotional reactions.

So, what might the expansion history be in MOND? I don’t know. There are some obvious things to consider, but I don’t find them satisfactory.
The Age of the Universe
Before I address the expansion history, I want to highlight some observations that pertain to the age of the universe. These provide some context that informs my thinking on the subject, and why I think LCDM hits pretty close to the mark in some important respects, like the time-redshift relation. That’s not to say I think we need to slavishly obey every detail of the LCDM expansion history when constructing other theories, but it does get some things right that need to be respected in any such effort.
One big thing I think we should respect is the set of constraints on the age of the universe. The universe can’t be younger than the objects in it. It could of course be older, but it doesn’t appear to be much older, as there are multiple, independent lines of evidence that all point to pretty much the same age.
Expansion Age: The first basic is that if the universe is expanding, it has a finite age. You can imagine running the expansion in reverse, looking back in time to when the universe was progressively smaller, until you reach an incomprehensibly dense initial phase. A very long time, to be sure, but not infinite.
To put an exact number on the age of the universe, we need to know its detailed expansion history. That is something LCDM provides that MOND does not pretend to do. Setting aside theory, a good ball park age is the Hubble time, which is the inverse of the Hubble constant. This is how long it takes for a linearly expanding, “coasting” universe to get where it is today. For the measured H0 = 73 km/s/Mpc, the Hubble time is 13.4 Gyr. Keep that number in mind for later. This expansion age is the metric against which to compare the ages of measured objects, as discussed below.
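As a quick sanity check, the conversion is just unit bookkeeping; here is a minimal sketch in Python (the constants are standard conversions, nothing model-dependent):

```python
# Hubble time t_H = 1/H0: the age of a linearly expanding, "coasting" universe.
KM_PER_MPC = 3.0857e19   # kilometers per megaparsec
SEC_PER_GYR = 3.156e16   # seconds per gigayear

def hubble_time_gyr(H0_km_s_Mpc):
    """Return 1/H0 in Gyr for H0 given in km/s/Mpc."""
    H0_per_sec = H0_km_s_Mpc / KM_PER_MPC   # H0 expressed in 1/s
    return 1.0 / H0_per_sec / SEC_PER_GYR

print(hubble_time_gyr(73.0))   # ~13.4 Gyr for the locally measured H0
print(hubble_time_gyr(67.4))   # ~14.5 Gyr for a Planck-like H0
```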
Globular Clusters: The most famous of age constraints is provided by the ancient stars in globular clusters. One of the great accomplishments of 20th century astrophysics is a masterful understanding of the physics of stars as giant nuclear fusion reactors. This allows us to understand how stars of different mass and composition evolve. That, in turn, allows us to put an age on the stars in clusters. Globulars are the oldest of clusters, with a mean age of 13.5 Gyr (Valcin et al. 2021). Other estimates are similar, though I note that the age determinations depend on the distance scale, so keeping them rigorously separate from Hubble constant determinations has historically been a challenge. The covariance of age and distance renders the meaning of error bars rather suspect, but to give a flavor, the globular cluster M92 is estimated to have an age of 13.80±0.75 Gyr (Jiaqi et al. 2023).
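The logic can be caricatured with the textbook scaling that main-sequence lifetime goes roughly as t ≈ 10 Gyr × (M/M☉)^-2.5. The real age determinations rest on detailed stellar evolution models, not this crude sketch; it is only meant to show why the main-sequence turnoff mass dates a cluster:

```python
# Crude main-sequence lifetime scaling: t_MS ~ 10 Gyr * (M/Msun)**-2.5.
# Illustrative only; detailed isochrone fitting is what the quoted ages use.
def ms_lifetime_gyr(mass_msun):
    return 10.0 * mass_msun**-2.5

for m in (8.0, 1.0, 0.9):
    print(m, round(ms_lifetime_gyr(m), 2))
# ~0.06 Gyr for 8 Msun (the same order as the ~100 Myr quoted below for
# white dwarf progenitors), ~10 Gyr for the Sun, ~13 Gyr for 0.9 Msun:
# stars just below a solar mass are leaving the main sequence now in the
# oldest globular clusters, which is what pins down their age.
```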
Though globular clusters are the most famous in this regard, there are other constraints on the age of the contents of the universe.
White dwarfs: White dwarfs are the remnants of dead stars that were never massive enough to have exploded as supernovae. The over/under line for that is about 8 solar masses; the oldest white dwarfs will be the remnants of the first stars that formed just below this threshold. Such stars don’t take long to evolve, around 100 Myr. That’s small compared to the age of the universe, so the first white dwarfs have just been cooling off ever since their progenitors burned out.
As the remnants of the incredibly hot cores of former stars, white dwarfs start off hot but cool quickly by radiating into space. The timescale to cool off can be crudely estimated from first principles just from the Stefan-Boltzmann law. As with so many situations in astrophysics, some detailed radiative transfer calculations are necessary to get the answer right in detail. But the ballpark of the back-of-the-envelope answer is not much different from the detailed calculation, giving some confidence in the procedure: we have a good idea of how long it takes white dwarfs to cool.
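To give a flavor of that back-of-the-envelope estimate (and only that; the real calculation involves the degenerate envelope and detailed radiative transfer), one can divide a rough ion thermal energy reservoir by the Stefan-Boltzmann luminosity. The mass, radius, and temperatures below are illustrative guesses, not fitted values:

```python
import math

SIGMA_SB = 5.67e-8     # Stefan-Boltzmann constant, W m^-2 K^-4
K_B = 1.381e-23        # Boltzmann constant, J/K
M_P = 1.67e-27         # proton mass, kg
M_SUN = 1.989e30       # kg
SEC_PER_GYR = 3.156e16

def crude_cooling_time_gyr(mass_msun=0.6, radius_m=8.8e6,
                           T_core=5e6, T_eff=4000, A=12):
    """Very rough white dwarf cooling time: ion thermal energy / luminosity.
    Assumes a carbon (A=12) ion gas and ignores the degenerate electrons,
    crystallization, and envelope physics that real models include."""
    n_ions = mass_msun * M_SUN / (A * M_P)
    E_thermal = 1.5 * n_ions * K_B * T_core
    luminosity = 4 * math.pi * radius_m**2 * SIGMA_SB * T_eff**4
    return E_thermal / luminosity / SEC_PER_GYR

print(crude_cooling_time_gyr())   # ~14 Gyr: the right ballpark, of order the age of the universe
```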
Since white dwarfs are not generating new energy but simply radiating into space, their luminosity fades over time as their surface temperature declines. This predicts that there will be a sharp drop in the numbers of white dwarfs corresponding to the oldest such objects: there simply hasn’t been enough time to cool further. The observational challenge then becomes finding the faint edge of the luminosity function for these intrinsically faint sources.
Despite the obvious challenges, people have done it, and after great effort, have found the expected edge. Translating that into an age, we get 12.5+1.4/-3.5 Gyr (Munn et al. 2017). This seems to hold up well now that we have Gaia data, which finds J1312-4728 to be the oldest known white dwarf at 12.41±0.22 Gyr (Torres et al. 2021). To get to the age of the universe, one does have to account for the time it takes to make a white dwarf in the first place, which is of order a Gyr or less, depending on the progenitor and when it formed in the early universe. This is pretty consistent with the ages of globular clusters, but comes from different physics: radiative cooling is the dominant effect rather than the hydrogen fusion budget of main sequence stars.
Radiochronometers: Some elements decay radioactively, so measuring their isotopic abundances provides a clock. Carbon-14 is a famous example: with a half-life of 5,730 years, its decay provides a great way to date the remains of prehistoric camp sites and bones. That’s great over some tens of thousands of years, but we need something with a half-life of order the age of the universe to constrain that. One such isotope is 232Thorium, with a half life of 14.05 Gyr.
Making this measurement requires that we first find stars that are both ancient and metal poor but with detectable Thorium and Europium (the latter providing a stable reference). Then one has to obtain a high quality spectrum with which to do an abundance analysis. This is all hard work, but there are some examples known.
Sneden’s star, CS 22892-052, fits the bill. Long story short, the measured Th/Eu ratio gives an age of 12.8±3 Gyr (Sneden et al. 2003). A similar result of ~13 Gyr (Frebel & Kratz 2009) is obtained from 238U (this “stable” isotope of uranium has a half-life of 4.5 Gyr, as opposed to the kind that can be provoked into exploding, 235U, which has a half-life of 700 Myr). While the search for the first stars and the secrets they may reveal is ongoing, the ages for individual stars estimated from radioactive decay are consistent with the ages of the oldest globular clusters indicated by stellar evolution.
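The arithmetic underneath these ages is just the exponential decay law, t = (t½/ln 2) ln(N₀/N). The hard part, which I am not attempting here, is the initial production ratio from r-process theory; the abundance ratios in the sketch below are purely illustrative:

```python
import math

def decay_age_gyr(half_life_gyr, initial_over_observed):
    """Age implied by radioactive decay: t = t_half / ln(2) * ln(N0/N)."""
    return half_life_gyr / math.log(2) * math.log(initial_over_observed)

# 232Th, half-life 14.05 Gyr: an abundance at ~53% of its initial value
# corresponds to an age of ~13 Gyr (the 53% is an assumed, illustrative number).
print(decay_age_gyr(14.05, 1 / 0.53))   # ~12.9 Gyr
# 238U, half-life ~4.5 Gyr, decays faster, so the same age implies more depletion.
print(decay_age_gyr(4.47, 1 / 0.135))   # ~12.9 Gyr
```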
Interstellar dust grains: The age of the solar system (4.56 Gyr) is well known from the analysis of isotopic abundances in meteorites. In addition to tracing the oldest material in the solar system, sometimes it is possible to identify dust grains of interstellar origin. One can do the same sort of analysis, and do the sum: how long did it take the star that made those elements to evolve, return them to the interstellar medium, get mixed in with the solar nebula, and lurk about in space until plunging to the ground as a meteorite that gets picked up by some scientifically-inclined human. This exercise has been done by Nittler et al. (2008), who estimate a total age of 13.7±1.3 Gyr.
Taken in sum, all these different age indicators point to a similar, consistent age between 13 and 14 billion years. It might be 12, but not lower, nor is there reason to think it would be much higher: 15 is right out. I say that flippantly because I couldn’t resist the Monty Python reference, but the point is serious: you could in principle have a much older universe, but then why are all the oldest things pretty much the same age? Why would the universe sit around doing nothing for billions of years then suddenly decide to make lots of stars all at once? The more obvious interpretation is that the age of the universe is indeed in the ballpark of 13.something Gyr.
Expansion history
The expansion history in the standard FLRW universe is governed by the Friedmann equation, which we can write* as
H²(z) = H0² [Ωm(1+z)³ + Ωk(1+z)² + ΩΛ]
where z is the redshift, H(z) is the Hubble parameter, H0 is its current value, and the various Ω are the mass-energy density of stuff relative to the critical density: the mass density Ωm, the geometry Ωk, and the cosmological constant ΩΛ. I’ve neglected radiation for clarity. One can make up other stuff X and add a term for it as ΩX which will have an associated (1+z) term that depends on the equation of state of X. For our purposes, both normal matter and non-baryonic cold dark matter (CDM) share the same equation of state (cold meaning non-relativistic motions, meaning rest-mass density but negligible pressure), so both contribute to the mass density Ωm = Ωb+ΩCDM.
Note that since H(z=0)=H0, the various Ω’s have to sum to unity. Thus a cosmology is geometrically flat with the curvature term Ωk = 0 if Ωm+ΩΛ = 1. Vanilla LCDM has Ωm = 0.3 and ΩΛ = 0.7. As a community, we’ve become very sure of this, but that the Friedmann equation is sufficient to describe the expansion history of the universe is an assumption based on (1) General Relativity providing a complete description, and (2) the cosmological principle (homogeneity and isotropy) holding. These seem like incredibly reasonable assumptions, but let’s bear in mind that we only know directly about 5% of the sum of Ω’s, the baryons. ΩCDM = 0.25 and ΩΛ = 0.7 are effectively fudge factors we need to make things work out given the stated assumptions. LCDM is viable if and only if cold dark matter actually exists.
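In code, the Friedmann equation above is a one-liner. A minimal sketch, with the curvature term set by the requirement that the Ω’s sum to unity (radiation neglected, as in the text):

```python
import math

def hubble_parameter(z, H0=73.0, Omega_m=0.3, Omega_L=0.7):
    """H(z) in km/s/Mpc from the Friedmann equation, radiation neglected.
    Curvature is whatever is left over: Omega_k = 1 - Omega_m - Omega_L."""
    Omega_k = 1.0 - Omega_m - Omega_L
    return H0 * math.sqrt(Omega_m*(1+z)**3 + Omega_k*(1+z)**2 + Omega_L)

print(hubble_parameter(0.0))   # returns H0 at z = 0 by construction
print(hubble_parameter(1.0))   # ~128 km/s/Mpc for these vanilla LCDM parameters
```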
Gravity is an attractive force, so the mass term Ωm acts to retard the expansion. Early on, we expected this to be the dominant term due to the (1+z)3 dependence. In the long-presumed+ absence of a cosmological constant, cosmology was the search for two numbers: once H0 and Ωm are specified, the entire expansion history is known. Such a universe can only decelerate, so only the region below the straight line in the graph below is accessible; an expansion history like the red one representing LCDM should be impossible. That lots of different data seemed to want this is what led us kicking and screaming to rehabilitate the cosmological constant, which acts as a form of anti-gravity to accelerate an expansion that ought to be decelerating.

The over/under between acceleration/deceleration of the cosmic expansion rate is the coasting universe. This is the conceptually useful limit of a completely empty universe with Ωm = ΩΛ = 0. It expands at a steady rate that neither accelerates nor decelerates. The Hubble time is exactly equal to the age of such a universe, i.e., 13.4 Gyr for H0 = 73.
LCDM has a more complicated expansion history. The mass density dominates early on, so there is an early phase of deceleration – the red curve bends to the right. At late times, the cosmological constant begins to dominate, reversing the deceleration and transforming it into an acceleration. The inflection point when it switches from decelerating to accelerating is not too far in the past, which is a curious coincidence given that the entire future of such a universe will be spent accelerating towards the exponential expansion of the de Sitter limit. Why do we live anywhen close to this special time?
Lots of ink has been spilled on this subject, and the answer seems to boil down to the anthropic principle. I find this lame and won’t entertain it further. I do, however, want to point out a related strange coincidence: the current age of vanilla LCDM (13.5 Gyr) is the same as that of a coasting universe with the locally measured Hubble constant (13.4 Gyr). Why should these very different models be so close in age? LCDM decelerates, then accelerates; there’s only one moment in the expansion history of LCDM when the age is equal to the Hubble time, and we happen to be living just then.
This coincidence problem holds for any viable set of LCDM parameters, as they all have nearly the same age. Planck LCDM has an age of 13.7 Gyr, still basically the same as the Hubble time for the locally measured Hubble constant. The lower Planck Hubble value is balanced by a larger amount of early-time deceleration. The universe reaches its current point after 13.something Gyr in all of these models. That’s in good agreement with the ages of the oldest observed stars, which is encouraging, but it does nothing to help us resolve the Hubble tension, much less constrain alternative cosmologies.
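For geometrically flat LCDM there is a closed-form age, t0 = (2/(3 H0 √ΩΛ)) arcsinh(√(ΩΛ/Ωm)), which makes the coincidence easy to check. The parameter choices below are my own round numbers (roughly “vanilla” and Planck-like), so the results only approximately reproduce the ages quoted above:

```python
import math

KM_PER_MPC, SEC_PER_GYR = 3.0857e19, 3.156e16

def flat_lcdm_age_gyr(H0, Omega_m):
    """Age of a flat LCDM universe (Omega_L = 1 - Omega_m, radiation neglected)."""
    Omega_L = 1.0 - Omega_m
    H0_per_sec = H0 / KM_PER_MPC
    return (2.0 / (3.0 * H0_per_sec * math.sqrt(Omega_L))
            * math.asinh(math.sqrt(Omega_L / Omega_m))) / SEC_PER_GYR

print(flat_lcdm_age_gyr(70.0, 0.30))    # ~13.5 Gyr for roughly 'vanilla' parameters
print(flat_lcdm_age_gyr(67.4, 0.315))   # ~13.8 Gyr for Planck-like parameters
# Compare to the 13.4 Gyr Hubble time for H0 = 73: everything lands at 13.something Gyr.
```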
Cosmic expansion in MOND
There is no equivalent to the Friedmann equation in MOND. This is not satisfactory. As an extension of Newtonian theory, MOND doesn’t claim to encompass cosmic phenomena$ – hence the search for a deeper underlying theory. Lacking this, what can we try?
Felten (1984) tried to derive an equivalent to the Friedmann equation using the same trick that can be used with Newtonian theory to recover the expansion dynamics in the absence of a cosmological constant. This did not work. The result was unsatisfactory& for application to the whole universe because the presence of a0 in the equations makes the result scale-dependent. So how big the universe is matters in a way that it does not in the standard cosmology; there’s no way to generalize it to describe the whole enchilada.
In retrospect, what Felten had really obtained was a solution for the evolution of a top-hat over-density: the dynamics of a spherical region embedded in an expanding universe. This result is the basis for the successful prediction of early structure formation in MOND. But once again it only tells us about the dynamics of an object within the universe, not the universe itself.
In the absence of a complete theory, one makes an ansatz to proceed. If there is a grander theory that encompasses both General Relativity and MOND, then it must approach both in the appropriate limit, so an obvious ansatz to make is that the entire universe obeys the conventional Friedmann equation while the dynamics of smaller regions in the low acceleration regime obey MOND. Both Bob Sanders and I independently adopted this approach, and explicitly showed that it was consistent with the constraints that were known at the time. The first obvious guess for the mass density of such a cosmology is Ωm = Ωb = 0.04. (This was the high end of BBN estimates at the time, so back then we also considered lower values.) The expansion history of this low density, baryon-only universe is shown as the blue line below:

As before, there is not much to choose between these models in terms of age. The small but non-zero mass density does cause some early deceleration before the model approaches the coasting limit, so the current age is a bit lower: 12.6 Gyr. This is on the small side, but not problematically so, or even particularly concerning given the history of the subject. (I’m old enough to remember when we were pretty sure that globular clusters were 18 Gyr old.)
The time-redshift relation for the no-CDM, baryon-only universe is somewhat different from that of LCDM. If we adopt it, then we find that MOND-driven structure forms at somewhat higher redshift than with the LCDM time-redshift relation. The benchmark time of 500 Myr for L* galaxy formation is reached at z = 15 rather than z = 9.5 as in LCDM. This isn’t a huge difference, but it does mean that an L* galaxy could in principle appear even earlier than so far seen. I’ve stuck with LCDM as the more conservative estimate of the time-redshift relation, but the plain fact is we don’t really know what the universe is doing at those early times, or if the ansatz we’ve made holds well enough to do this. Surely it must fail at some point, and it seems likely that we’re past that point.
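To see where the z = 15 versus z = 9.5 numbers come from, one can integrate dt = dz/[(1+z)H(z)] from very high redshift down to the redshift of interest. A sketch, with the same caveats as before (radiation neglected, parameters are my assumed round numbers):

```python
import math

KM_PER_MPC, SEC_PER_GYR = 3.0857e19, 3.156e16

def age_at_z_gyr(z, H0, Omega_m, Omega_L, zmax=1e6, n=100000):
    """Cosmic time from the Big Bang to redshift z for an FLRW model,
    integrating dt = dz / [(1+z) H(z)] numerically (radiation neglected)."""
    Omega_k = 1.0 - Omega_m - Omega_L
    E = lambda zz: math.sqrt(Omega_m*(1+zz)**3 + Omega_k*(1+zz)**2 + Omega_L)
    # trapezoid rule in u = ln(1+z), where dt = du / H
    u_lo, u_hi = math.log(1+z), math.log(1+zmax)
    du = (u_hi - u_lo) / n
    total = sum(0.5*(1/E(math.exp(u_lo+i*du)-1) + 1/E(math.exp(u_lo+(i+1)*du)-1))*du
                for i in range(n))
    return total / (H0 / KM_PER_MPC) / SEC_PER_GYR

print(age_at_z_gyr(15.0, 73.0, 0.04, 0.0))   # ~0.5 Gyr: no-CDM, baryon-only model
print(age_at_z_gyr(9.5, 73.0, 0.30, 0.70))   # ~0.5 Gyr: vanilla LCDM
```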
There is a bigger problem with the no-CDM model above. Even if it is close to the right expansion history, it has a very large negative curvature. The geometry is nowhere close to the flat Robertson-Walker metric indicated by the angular diameter distance to the surface of last scattering (the CMB).
Geometry
Much of cosmology is obsessed with geometry, so I will not attempt to do the subject justice. Each set of FLRW parameters has a specific geometry that comes hand in hand with its expansion history. The most sensitive probe we have of the geometry is the CMB. The a priori prediction of LCDM was that its flat geometry required the first acoustic peak to have a maximum near one degree on the sky. That’s exactly what we observe.

In contrast, no-CDM made the correct prediction for the first-to-second peak amplitude ratio, but it is entirely ambivalent about the geometry. FLRW cosmology and MOND dynamics care about incommensurate things in the CMB data. That said, the naive prediction of the baryon-only model outlined above is that the first peak should occur around where the third peak is observed. That is obviously wrong.
Since the geometry is not a fundamental prediction of MOND, the position of the first peak is easily fit by invoking the same fudge factor used to fit it conventionally: the cosmological constant. We need a larger ΩΛ = 0.96, but so what? This parameter merely encodes our ignorance: we make no pretense to understand it, let alone vesting deep meaning in it. It is one of the things that a deeper theory must explain, and can be considered as a clue in its development.
So instead of a baryon-only universe, our FLRW proxy becomes a Lambda-baryon universe. That fits the geometry, and for an optical depth to the surface of last scattering of τ = 0.17, matches the amplitude of the CMB power spectrum and correctly predicts the cosmic dawn signal that EDGES claimed to detect. Sounds good, right? Well, not entirely. It doesn’t fit the CMB data at multipoles ℓ > 600, but I expected to only get so far with the no-CDM ansatz, so it doesn’t bother me that you need a better underlying theory to fit the entire CMB. Worse, to my mind, is that the Lambda-baryon proxy universe is much, much older than everything in it: 22 Gyr instead of 13.something.

This just don’t seem right. Or even close to right. Like, not even pointing in a direction that might lead to something that had a hope of being right.
Moreover, we have a weird tension between the baryon-only proxy and the Lambda-baryon proxy cosmology. The baryon-only proxy has a plausible expansion history but an unacceptable geometry. The Lambda-baryon proxy has a plausible geometry but an implausible expansion history. Technically, yes, it is OK for the universe to be much older than all of its contents, but it doesn’t make much sense. Why would the universe do nothing for 8 or 9 Gyr, then burst into a sudden frenzy of activity? It’s as if Genesis read “for the first 6 Gyr, God was a complete slacker and did nothing. In the seventh Gyr, he tried to pull an all-nighter only to discover it took a long time to build cosmic structure. Then He said ‘Screw it’ and fudged Creation with MOND.”
In the beginning the Universe was created.
Douglas Adams, The Restaurant at the End of the Universe
This has made a lot of people very angry and been widely regarded as a bad move.
So we can have a plausible geometry or we can have a plausible expansion history with a proxy FLRW model, but not both. That’s unpleasant, but not tragic: we know this approach has to fail somehow. But I had hoped for FLRW to be a more coherent first approximation to the underlying theory, whatever it may be. If there is such a theory, then both General Relativity and MOND are its limits in their respective regimes. As such, FLRW ought to be a good approximation to the underlying entity up to some point. That we have to invoke both non-baryonic dark matter and a cosmological constant is a hint that we’ve crossed that point. But I would have hoped that we crossed it in a more coherent fashion. Instead, we seem to get a little of this for the expansion history and a little of that for the geometry.
I really don’t know what the solution is here, or even if there is one. At least I’m not fooling myself into presuming it must work out.
*There are other ways to write the Friedmann equation, but this is a useful form here. For the mathematically keen, the Hubble parameter is the time derivative of the expansion factor normalized by the expansion factor, which in terms of redshift is
H(z) = -(dz/dt)/(1+z).
This quantity evolves, leading us to expect evolution in Milgrom’s constant if we associate it with the numerical coincidence
2π a0 = cH0
If the Hubble parameter evolves, as it appears to do, it would seem to follow that a0 should as well, with a0(z) ~ H(z) – otherwise the coincidence is just that: a coincidence that applies only now. There is, at present, no persuasive evidence that a0 evolves with redshift.
A similar order-of-magnitude association can be made with the cosmological constant,
2π a0 = c²Λ^(1/2)
so conceivably the MOND acceleration scale appears as the result of vacuum effects. It is a matter of judgement whether these numerical coincidences are mere coincidences or profound clues towards a deeper theory. That the proportionality constant is very nearly 2π is certainly intriguing, but the constancy of any of these parameters (including Newton’s G) depends on how they emerge from the deeper theory.
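For what it’s worth, both coincidences are easy to check numerically; in the sketch below the values of a0 and Λ are the commonly quoted ones, inserted by hand rather than derived from anything:

```python
import math

c = 2.998e8                # m/s
H0 = 73e3 / 3.0857e22      # 73 km/s/Mpc expressed in 1/s
LAMBDA = 1.1e-52           # cosmological constant, 1/m^2 (approximate concordance value)
a0 = 1.2e-10               # Milgrom's constant, m/s^2

print(c * H0 / (2*math.pi))                    # ~1.1e-10 m/s^2
print(c**2 * math.sqrt(LAMBDA) / (2*math.pi))  # ~1.5e-10 m/s^2
# Both land within ~25% of a0 = 1.2e-10 m/s^2, which is the coincidence in question.
```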
+In January 2019, I was attending a workshop at Princeton when I had a chance encounter with Jim Peebles. He was not attending the workshop, but happened to be walking across campus at the same time I was. We got to talking, and he affirmed my recollection of just how incredibly unpopular the cosmological constant used to be. Unprompted, he went on to make the analogy of how similar that seemed to how unpopular MOND is now.
Peebles was awarded a long-overdue Nobel Prize later that year.
$This is one of the things that makes it tricky to compare LCDM and MOND. MOND is a theory of dynamics in the limit of low acceleration. It makes no pretense to be a cosmological theory. LCDM starts as a cosmological theory, but it also makes predictions about the dynamics of systems within it (or at least the dark matter halos in which visible galaxies are presumed to form). So if one starts by putting on a cosmology hat, there is nothing to talk about: LCDM is the only game in town. But from the perspective of dynamics, it’s the other way around, with LCDM repeatedly failing to satisfactorily explain, much less anticipate, phenomena that MOND predicted correctly in advance.
&An intriguing thing about Felten’s MOND universe is that it eventually recollapses irrespective of the mass density. There is no critical value of Ωm, hence no coincidence problem. MOND is strong enough to eventually reverse the expansion of the universe; it just takes a very long time to do so, depending on the density.
I’m surprised this aspect of the issue was overlooked. The coincidence problem (then mostly called the flatness problem) obsessed people at the time, so much so that its solution by Cosmic Inflation led to its widespread acceptance. That only works if Ωm = 1; LCDM makes the coincidence worse. I guess the timing was off, as Inflation had already captured the community’s imagination by that time, likely making it hard to recognize that MOND was a more natural solution. We’d already accepted the craziness that was Inflation and dark matter; MOND craziness was a bridge too far.
I guess. I’m not quite that old; I was still an undergraduate at the time. I did hear about Inflation then, in glowing terms, but not a thing about MOND.
It’s very well known that Newtonian gravitational theory failed to account for the precession of Mercury’s orbit. Scientists of that time considered that perhaps an undiscovered planet could be responsible for that anomaly, but no such planet was found.
They could have postulated an unseen matter distribution or halo (dark matter) that, if carefully fine-tuned, could account for that discrepancy. However, scientists of that time recognized that this was an artificial, ad hoc introduction—in other words, cheating—which is why it was not considered a serious possibility.
But it seems that in modern times, scientific standards have lowered significantly because that’s exactly how modern day scientists use dark matter: when General Relativity fails to account for galaxies’ flat rotational speeds—a very carefully fine-tuned dark matter distribution (halo) is post hoc added to fix General Relativity’s discrepancy with observations.
Once that trick was accepted as a “serious” scientific approach, it has been used over and over in gravitational lensing and galaxy clusters, and when that was not enough to match the Webb space telescope’s recent findings of massive galaxies in the early universe, which MOND predicted decades ago, some exotic properties can be added to that mysterious dark matter to match observations and save General Relativity.
In any other aspect of human activity this kind of behavior would have been discarded long ago, or denounced as fraud. Imagine using that kind of model to guide your investment decisions.
To be fair, if the only problem with Newtonian mechanics was the precession of Mercury, it is unlikely General Relativity would have been invented.
That’s beside the point. The reality is that the flexibility given by carefully fine-tuned unseen mass distributions with exotic properties could account for a lot of things, including systematic modeling shortcomings, and that is far from being a high scientific standard.
But what is obvious is that General Relativity is considered a universally valid theory, and that’s the genesis behind dark matter and dark energy.
“They could have postulated an unseen matter distribution or halo (dark matter) that, if carefully fine-tuned, could account for that discrepancy. ”
Well, that’s what some did. See the so-called Vulcanoids postulated by Le Verrier, in T. Levenson, The Hunt for Vulcan, Random House.
Best & thanks for the blog
Regarding “the search for a deeper underlying theory.” I think that is misguided for several reasons:
– Any theory, be it quantum mechanics or General Relativity, was developed for simple systems, using the available empirical evidence for these simple systems. It should be obvious that the assumptions used to develop them are no longer valid in complex systems.
– In systems with a very large number of discrete components, as protein molecules or galaxies are, reduction to first principles is de facto impossible; there’s no effective difference between reducibility and irreducibility, and the “in principle argument” used many times by reductionist believers is misleading unless you have “God’s computer and God’s knowledge”.
Not by coincidence, protein molecules are modeled using phenomenological models that only work in a narrow range, exactly like the phenomenological MOND model works in the narrow range around galaxies.
Foundational theories like Quantum Mechanics and General Relativity are approximations, as any other theory, that only work in a narrow complexity range.
There’s no underlying, deeper theory because whatever assumptions you make they will be approximations that will be accurate only in a narrow range, exactly as an interpolation is only accurate close to data points.
Very interesting discussion.
Just a couple of initial comments about some principles. I’m not sure we are interpreting the Cosmological Principle properly yet.
We see what happens when the universe is considered both isotropic and homogeneous at large scales. In other words, “This is what happens Larry!”
What if instead we ask, “must the universe appear isotropic and homogeneous simultaneously?”
If not, then we may have a mechanism for which to introduce complementarity, i.e. the universe is perhaps either isotropic, or homogeneous depending on the frame. Incidentally, I think the relative successes of bimetric theories may be connected to this idea.
I also want to say something about the Anthropic Principle. This principle doesn’t seem worth talking much about if left alone in its current form, but maybe it has value if slightly redefined.
What if the Anthropic Principle is really just telling us that there exists a complementary view of the universe that is strongly isotropic? This may be the case for any view that returns our own image, or for any view that we have evolved in just the right way to see.
I guess my point is there should be ways to rotate some of the first principles puzzle pieces, and maybe we are not that far off from getting some of them to fit observations better.
“The result was unsatisfactory& for application to the whole universe …there’s no way to generalize it to describe the whole enchilada.”
This points directly to the foundational assumption of the standard model that undermines its ability to render a coherent description of the Cosmos we actually observe. The idea that the Cosmos constitutes a simultaneously interconnected entity, such that we can speak scientifically of its “current state” or its origin, is belied by standard physics. Put simply there ain’t no such animal as the Universe of the standard model.
The limitation of light speed to a finite maximum means that we do not have and most importantly, cannot have any knowledge of the current state of Andromeda (the nearest galaxy) which lies 2.5 million lightyears distant let alone any of the galaxies that lie beyond Andromeda out to our current observational range limit which is now in excess of 10 billion lightyears.
The argument that we do not and cannot observe, measure, or detect the current state of the Cosmos as we observe it from our unique location also means that the unitary Universe of the standard model does not and cannot exist. The Cosmos we observe is relativistic in nature – we are at the center of our observable Cosmos. Other observers in other galaxies would be at the center of their observable Cosmos which would only partially overlap with our own, assuming they lie within our observed range. There is no universal frame in the relativistic Cosmos, there are only local frames.
The unitary Universe of the standard model is the modern equivalent of Ptolemy’s geocentric assumption – it is simply wrong about the nature of physical reality. As with geocentrism, no further progress in our understanding of the Cosmos will be made until the unitary Universe is retired to the dustbin of history.
Isn’t there a significant difference between whether the CMB or a0 changes with local time or with cosmic time?
In the case of CMB, there is no way to measure its change with cosmic time, right? Can we instead measure its change with respect to local time? That seems realistic.
In the case of a0, is the situation reversed? We can measure change vs cosmic time, but is there any way to measure whether a0 changes with local time? I mean if we keep observing the same galaxies long enough, could we tell if a0 changed with local time?
It seems to me that if a0 doesn’t change with cosmic time, but it does change with local time, then that would be useful info.
Slavish affirmation of standard doctrine of age of Universe. There are objects in this same Universe that are much older than the so-called age of the Universe. JWST has even clearly shown a 100% bright sky literally packed with lit-up galaxies and clusters. In my estimate, around 150000 galaxies can be seen in the background of the image of one of the so-called farthest visible galaxies. And scientists cannot see the visible things. They only can see invisible ghosts like dark matter, or in their mind they can modify the dynamics through modifying the equations.
“The thing I like about MOND is that it helps me understand — and successfully predict — the dynamics of galaxies.” Is MOND essential for understanding the foundations of physics? For explaining MOND, there might be 8 basic possibilities: MONDian 5th force, and/or MOND inertia, and/or non-conservation of gravitational energy. Think about: “MOND as manifestation of modified inertia” by Mordehai Milgrom, 2023
https://arxiv.org/abs/2310.14334 in terms of a data-driven approach to MOND astrophysics.
What is the verdict on the following? If MOND inertia is physically real, then Einstein’s field equations might be modified as follows:
Replace G(μ,ν) + Λ g(μ,ν) = κ T(μ,ν) by
G(μ,ν) + Λ g(μ,ν) = κ T(μ,ν) + M(T(μ,ν)) , where the function M(T(μ,ν)) represents the MOND-data-function as a function of the energy-momentum tensor and G(μ,ν) = R(μ,ν) – 1/2 g(μ,ν) R .
In the MOND regime, put M(T(μ,ν)) = – (3.9±.5 * 10^–5) * g(μ,ν) R .
Claim: in the MOND regime, this is approximately correct in terms of empirical data.
In ‘Surely you’re joking, Mr Feynman’ he wrote (p207):
“As for the physics itself, I worked out quite a good deal, and it was sensible. It was worked out and verified by other people later. I decided, though, that I had so many parameters that I had to adjust – too much ‘phenomenological adjustment of constants’ to make everything fit – that I couldn’t be sure it was very useful. I wanted a rather deeper understanding of the nuclei, and I was never quite convinced it was very significant, so I never did anything with it.”
Feynman could just as well be talking about LCDM!
Thank you for putting effort into these expositions.
I do like Sabine Hossenfelder’s latest video about the crisis in cosmology:
“Cosmology Crises are only Getting Worse” https://www.youtube.com/watch?v=WBfeKz1SG0k&t=1s
We have the Hubble tension (as expected) and early big galaxies with a nod to MOND, but also that the motion of the Milky Way against the CMB is different to its motion against distant quasars, sigma-8 telling us that the universe is less clumpy than it should be and objects like the Great Wall and Big Ring being larger than they should be if the Cosmological Principle holds.
It may be uncomfortable for the theorists, but for the rest of us watching it has the feel of the early 1920s with the advent of Quantum Mechanics and the Curtis-Shapley debate in cosmology.
That’s what happens when you try to apply a theory developed for simple systems and idealized contexts, like General Relativity, to all complexity scales/structures, ignoring all possible emergent properties at each hierarchical level, like MOND at the galaxy level.
Everything is being seen through a distorted lens and we’re getting a distorted picture of the visible universe.
Some readers might find interesting, from the viewpoint of developing a MOND cosmology from first principles, our paper
https://arxiv.org/pdf/2312.08811
Eqns. (50-54) propose relativistic MOND, from which one can try to construct the corresponding Friedmann equations.
Here, MOND is a fifth force, which couples to square root of mass. This attractive force effectively makes gravitation stronger below the critical Milgrom acceleration, thus eliminating the need for dark matter.
This force, dubbed dark electromagnetism, is analogous to electrodynamics and is based on a new U(1) gauge symmetry coming from beyond standard model particle physics. There are two key differences from electrodynamics. Firstly, in order to recover the 1/R falloff of the deep MOND force field, one has to assume that the fifth force falls off as [R R_H]^1/2, where R_H is the Hubble radius. The second key difference is that the coupling constant is not strictly constant; it depends on the Hubble radius.
It might be that investigation of Friedmann equations in this model could help address the pressing question [what is MONDian cosmology?] raised in this blog.
If I may correct slightly regarding Valcin et al 2021, they conclude that the age of the oldest globular clusters is 13.32±0.1 Gyr and the age of the Universe on that basis ~13.5 Gyr.
The dense globularity of the GCs is remarkable. (1) What accounts for the globularity? (2) Why is the age distribution not random? Clearly conditions were favourable near the beginning but not later. (3) At that time the MW’s BH was only a fraction of its present small mass, if it existed at all – how plausible is it that the ~200 GCs all converged on the location of the proto-MW from diverse other locations in space and ended up in the MW’s stellar halo, nearly all with ages close to the oldest stars not in GCs?
Thanks, as always, for a very good post.
When you have a theory that fits extremely well in some areas, but only some, it seems to me it’s easy to make a non-sequitur assumption. The false assumption is simply that you have all the pieces of the jigsaw in front of you. So it’s only a case of rearranging them, to fix a few problems. Almost everyone is doing this in one way or another. We’ve spent decades trying to rearrange the pieces we have, and it doesn’t get us there.
But for centuries we’ve had theories that fit brilliantly in some areas, but not in others. Newton’s work should have taught us that the universe is full of equivalence – the mathematics just happens to be like that. There are tens of ways to throw a mathematical description over something, and find that it fits amazingly well in most places. So you think you’re only a few tweaks away from solving it. But a Rubik’s cube can be got to a point where it looks like you’re only a few moves away – the actual solution requires far more turns of the cube than that.
This aspect is now becoming hard to avoid due to better data. There’s a lot more to fit our theories to, and they inevitably don’t fit everywhere. If the theory is no more than a mathematical description, it’s not going to be enough. To find a UT, going from the present situation, you need what the word ‘underlying’ implies – a conceptual basis. The key point is to realise the jigsaw must have missing conceptual pieces.
Our ‘best theory’ at present in a way would be some combination of MOND and LCDM. It’s hard to remove either completely. Hybrid theories are needed – in a conference survey that came out in March, the most popular explanation for the mass discrepancy was ‘some hybrid’.
A theory in its infancy, like PSG, might seem uninteresting – it doesn’t do this, it doesn’t do that, we can’t immediately apply it to some areas where we want solutions now. But it has (and this is the title of a short new paper on it) ‘direct mathematical evidence supporting an explanation for the connection between dark matter and visible matter’.
This leads to a hybrid theory, with a new explanation for the RAR. It’s the best paper so far, and has in it a lot I’ve learned here – it quotes Stacy in the introduction, also the conference survey. It doesn’t mention the cosmological time rate, or changes to the overall mass (those were published last year in a paper on time). The recent https://gwwsdk1.wixsite.com/link/newpreprint-pdf preprint is five times shorter than the time paper. I hope it’s of interest, and if anyone has any suggestions for publishing it I’d be grateful, I’m hoping to get it into a good journal, or onto the arXiv. More explanation is what’s needed now, and any explanation with mathematical confirmation is a rare place from which one can move forward, in what has become a less certain landscape these days.
Here are some other “coincidences” that are very curious.
If we take the age of the universe as 13.8 Gyr, then the oldest photon path is D = 13.8 Gly = 2*R where R = 6.9 Gly. If we treat this as the radius of a “black hole”, we get an internal mass of about 4.4×10^52 kg and a surface gravity of 6.88×10^-10 m/s^2. This is the value you get when converting H0 to an acceleration!
If you swap units for the above between mass and acceleration (with the product m*a remaining equivalent), then you would find that a “black hole” of mass 6.88×10^-10 kg has a surface gravity of 4.4×10^52 m/s^2, which of course evaporates almost instantly, but not quite. The interesting coincidence here is that the temperature of such a black hole is about 1.8×10^32 K, which is a fine approximation to the temperature estimate for the Big Bang!
The age of a photon crossing this little sphere is just below the Planck time, and its radius is just below the Planck length.
These are just simple estimates using basic black hole calculators.
I earlier pointed out that the surface gravity of a black hole the size of the observable universe is approximately a0, i.e. about 1.0×10^-10 m/s^2 for the radius R=47 Gly.
The interesting thing is that this radius could be mapped back to an observer here, but some 13+ Gyr ago. The turning point in that mapping may be the age about 6.9 Gyr ago, and the place where the angular diameter vs distance is at a minimum. Is there some reflection? Are we balanced between the universe at large and the Planck scale? Is the universe periodic in both time and space?
And here is one more intriguing coincidence: An observable universe with radius 47Gly has a “black hole” mass of about 3×10^53 kg, and surface gravity about a0. The radius 6.9Gly has a surface gravity of about H0, and a “black hole” mass of about 4.4×10^52 kg, which is about 14.7% of the expanded universe mass. This is similar to the predicted ratio of normal matter to the total matter of the universe (including dark matter).
Also of note is what follows:
a0*47Gly=c^2, and H0*6.9Gly=c^2.
It doesn’t take long to arrive at DE = DM*c^2, where we have a Dark Einstein Equivalence equation relating Dark Energy to Dark Matter.
I can never talk about cosmology without thinking of this scene from Buckaroo Banzai:
PENNY PRIDDY:
Oh, oh, I get it! What you’re saying is that oppositely charged
particles collide and blow each other up in a burst of energy. Like
a tiny Big Bang, like a… a… a… b-b-Baby Bang!
The audience laughs indulgently.
PENNY PRIDDY:
Well, I’m, uh, probably just, uh, stating the very obvious.
(angrily, to herself)
Shut UP… shut UP…
BUCKAROO BANZAI:
No, no, it’s not obvious at all. If it was obvious, everybody
would be doing it every day.
https://kumo.swcp.com/synth/text/buckaroo_banzai_script/
I’m pretty sure that Eric Weinstein (“Geometric Unity”) is a fan too. His PhD thesis, which eventually gave rise to Geometric Unity, was called “Extension of Self-Dual Yang-Mills Equations Across the Eighth Dimension”. I spent a few hours once trying to get bimetric MOND from his theory…
Hi Stacy,
You have spilled thousands of words presuming/giving benefit of the doubt to “expanding space” theoretical models, and a paltry few to non-“expanding space” models.
It’s disappointing how you exposit on every ridiculous variation of “expanding space”, and then when earnest commenters ask you to consider photon continuous decay for the cosmic redshift, you retreat to “not worth my time” or “too much change”. Then a month goes by and you write another thousand-word blog post about more farcical minutiae of “expanding space”.
‘Photon continuous decay’ astrophysicists are waiting for MOND astrophysicists to drop that last delusion, so that forces can be joined to knock out the enormous fraud of LCDM, a task which maybe cannot be done by either group alone. Again — a task which cannot be done by either group alone.
If it’s not clear already, most ‘photon continuous decay’ astrophysicists reject “dark matter” models. But most MOND astrophysicists glibly believe the hearsay straw man criticism of ‘photon continuous decay’ and the slanted papers that deceive with “expanding-space”-calibrated luminosities.
Get angry and think about the real plausibility that you’ve got misconceptions about ‘photon continuous decay’.
Respectfully and urgently,
Sahil Gupta
Happy to disappoint. What’s ridiculous is your obvious bias against this well-established aspect of cosmology. Maybe it will ultimately prove to be wrong, but the bar to demonstrate that is much much much much higher than the dark matter/MOND issues that I discuss most frequently here. Pretending they are on par is the mother of false equivalencies.
“expanding space” is as poorly established as “dark matter”. It’s just a little older fraud.
Ask questions about ‘photon continuous decay’ if you have a curious mindset.
https://cosmology.info/redshift/rebut/errorswright.html
The discrepancy between what we see (in luminous mass) and what we get (in dynamics) is well established. The matter of debate is whether the appropriate interpretation of this discrepancy is a modification of dynamical laws or invisible mass (dark matter).
When most people say “dark matter” all they really mean is the well-established discrepancy, and it is all too common to conflate that with literal dark matter. So I, of all people, do appreciate that the interpretation of phenomena can go awry. That doesn’t make an automatic equivalency between every such instance.
@tritonstation well done. I skimmed this and started following you when I read the line that referred to the cosmological constant as "incredibly unpopular."
As of today I'm proudly embracing my research along the same lines, Information Dynamics, as "metaphysics" and I'm curious if you see any alignment with your research? If you search for "cosmological constant" on QNFO.org you'll see that I arrived at some of the same conclusions.
Thanks for this wonderful review of the current state of cosmology! For me the lines of evidence for ~13.5 Gyr were an eye-opener. Also it’s nice to see an honest summary of not only what we indeed can deduce but also the big problems remaining. You’re spot on with the right way of presenting things.
I’ve earlier asked Milgrom by email what his opinion is on the beginning of the universe. His answer was that he has no opinion, because there was little to know for sure on the subject. That lines up well with your post: there are too many problems left to defend any opinion.
Many of my earlier posts were too rash, also on time dilation and my criticism of inflation. It’s good that cosmologists such as you investigate the matter from different perspectives anyhow. But good theories stem from stubborn beliefs of good quality combined with the willingness to (be) judge(d) by experiment and data, and much perseverance. My main belief on cosmology is strongly that there is something wrong with the timescales, although not motivated by the data (other than a flunked Tolman test which you provided, and doubts that we can calculate so many years of history from just a few thousand years of observation and clearly incomplete theories). My motivation is a bit like your rejection of the 22 Gyr: why have the universe around for so long without life on it? While of course in the case of stars/content your argument is way more scientific (no deus ex machina at 8-9 Gyr). I don’t pretend to be scientifically motivated, but asking why is what children do – and from the problems and results you describe I do consider myself about as knowledgeable as a child. In doing science, the “how things happen” is extremely important, but the question why still might give good ideas?
It is surely interesting to ask why. That is perhaps the most interesting question of all, but also the hardest to answer without prejudice.
It is fun to speculate on why life appears now in the universe. We have only ourselves as an example (so far). On Earth I’m puzzled by the long period (~3 Gyr) with only microbial life, with the appearance of a vibrant variety of multicellular organisms appearing only as “recently” as the Cambrian explosion a “mere” 0.6 Gyr ago. Why? Why did life develop so early, remain comparatively boring (from our anthropic perspective) for so long, then suddenly take off?
More generally, we don’t know that the universe waited this long to develop life; we only know our very immediate neighborhood. It could well have developed elsewhere. Indeed, there has been plenty of time for entire biosystems to come and go, some with their own civilizations of modestly intelligent beings like ourselves. Indeed, I suspect this has happened over and over again.
So why haven’t we been visited by aliens? Well, space is big and time is deep. It is really hard to travel to other stars, and I do not expect most civilizations to develop this capacity – if it is even possible. But if one were to do so, and it were to happen to visit the Earth out of the multiple planets orbiting the 100s of billions of stars that make up our Galaxy, what would these aliens find? Boring microbes, most likely, as that is most of the history of the Earth. There is zero reason to expect that that would arrive now any more than at any random time in the past.
Space is big, time periods that connect us to what is out there are very long, and we are looking into the past. Not an easy place to find advanced alien civilizations, even though life seems inevitable there.
But “advanced alien civilizations” are here already – they’re just not of the little green men variety, or of any men variety. They come from the other direction, where space is small and time periods are short.
Fascinating microbiomes like “alien civilizations” with a universe of bacterial and viral diversity. Some are invaders, marauders and lay waste to the realms they overtake. Some are harmonious civilizations lasting countless generations, connected in networks and interwoven cycles. Unfortunately this microcosm just doesn’t make very good science fiction. So I’ll keep dreaming and imagining of advanced civilizations in outer space with mind bending technology and faces we might recognize.
I don’t think the argument that the universe wouldn’t sit around for billions of years before doing anything is particularly conclusive. Sure it wouldn’t, but there are many ways in which it wouldn’t.
Time is relative and malleable. Two observers need not agree on the order of events, nor on the duration of time between them. So IMO that argument is a bit like saying the twin paradox doesn’t allow the man left on Earth to be any older than his twin in the returning rocket, because the Earth twin wouldn’t all of a sudden age 50 years, let’s say. It’s not a very good argument, because it presupposes there is only one preferred age in the universe regardless of how we measure it. This is inconsistent with both general and special relativity. It’s just a minor point. Maybe there are better arguments that would achieve the same result, I just don’t buy that particular one – not yet at least.
It is certainly true that the universe could sit around doing nothing; all I’m arguing is that it seems unlikely. However, the order-of-events from a relativistic perspective does not apply here. We’re talking about the arrow of time for the whole universe, not clocks in different frames of reference.
If we go back to first principles, what we agree on in any frame is the velocity of light. Matter and metric curvature are themselves dependent on the frame.
ok, so what? there is no question of the order of cosmic events being malleable by the choice of frame, especially as the obvious choice of frame is provided by the CMB. we have a strangely large peculiar velocity wrt the CMB frame, about 600 km/s if memory serves. this may itself be a problem for the standard cosmology, but it is only z = v/c = 0.002. This does nothing to affect our perception of the order of events at z > 10.
Yes. I see what you mean in that context. Slim to no chance of any ambiguity arising from one point in the bulk spacetime to another. The boundaries are not so clear cut to me though. Photon paths can get pretty confusing at universe distances of R and R/2.
On a side note about the arrow of time, that is something that could be anthropic. I found it interesting that the “ingredients for life” or amino acids brought back from an asteroid showed no preferred chirality. Had they been of biological origin, they should have shown a homochirality (as far as we know). I remembered back to a signal processing course where it was demonstrated that a choice had to be made when processing time series signals that was akin to an arrow of time with respect to memory and expectation (at least that’s how I interpreted it). So life certainly seems capable of creating an arrow of time.
Intriguing. I seldom wondered about chirality, but what you say makes sense to me. I was never a fan of panspermia anyway. Life is a physical process; I expect it can happen wherever conditions permit it.
As for the arrow of time, another way to look at that is the direction in which entropy increases.
Even if other civilisations are rare, there’s room for a lot. So the puzzles that are universal, like QM, are particularly interesting. The ones that ‘everyone’s looking at’ might tell us a few things it’s harder to find out from long distance travel: about some of the big questions, and if the puzzles look interesting enough to be set ones – and that’s an avenue to look at – then even possibly about the general mentality of others they’d be set for if so.
I should probably say, I think consciousness and the observer is nothing to do with QM, and a red herring – I think it’s interactions, not measurements. But some questions, for instance about the red herring, started to become interesting.
but the point is serious: you could in principle have a much older universe, but then why are all the oldest things pretty much the same age? Why would the universe sit around doing nothing for billions of years then suddenly decide to make lots of stars all at once?
how do you know that light from stars and galaxies with redshifts older than 15 billion years hasn’t been found, or isn’t forever beyond our cosmic horizon?
and how do you date the age of a supermassive black hole, which could be older than the universe?
https://aeon.co/essays/why-the-hunt-for-reality-is-an-impossible-burden-for-physics
It usually takes a few days for me to ingest Stacy’s posts, and even then I am in deep water. The concordance on the universal age has always been a sliding scale, and within the standard context, it has run out of room, as Khuram Rafique pointed out: The maturity of the ‘early’ universe is inconsistent with both the age and geometry.
Supernova researchers have known for decades how thin the ice is that they are walking on. Specifically, to gain consistency with a concordance Hubble value, they had to normalize supernova curve shapes about an arbitrary ‘stretch factor’ while ignoring both dust attenuation and selection effects. When the new Planck values were published, the supernova community had no more wiggle room and they have been standing pat on the tension (really the failure) within the LCDM model for almost a decade. Good on them!
Admitting failure is the first step in advancing scientific theories. At present no one is certain where this failure is. When Gravity Probe B scientists first analyzed their signal, the signature of General Relativity was missing. Years later, they concluded that an unexpected drag caused by imperfections in their spheres hid the signal – they were even able to create a single harmonic that they used to compensate for this ‘drag’ and reduce the data. Likewise, the MONDish accelerations of the Pioneer Probes were ‘modeled out’ by assuming an unpredicted thermal vector: No real science: Just ruler bending.
You can see where I am going here: A theory that is not allowed to fail grows barnacles and tentacles that hide the root cause of the failure. Even with AI, this could take decades to untangle.
Yes, the number of tentacles and barnacles has multiplied over the past few decades. I was already concerned that there were too many before the turn of the century; there are many more now. AI can’t help sort out what people refuse to contemplate.
I have suggested that Gravity Probe B’s four ultra-precise gyroscopes did not malfunction but instead functioned correctly within design specifications and confirmed the hypothesis dark-matter-compensation-constant = (3.9 ± 0.5) × 10^–5. What empirical evidence convincingly demonstrates the hypothesis that dark-matter-compensation-constant = 0 (to 5 decimal places) and that gravitational energy is conserved?
If the MOND paradigm does not lend itself to determination of a discrete age of the universe, maybe it is because it represents an aspect of a static universe (Einstein would breathe a sigh of relief about that 😉).
Coincidence problems might be much easier to understand in a complementary universe that could be described as static or expanding.
Along these lines one can show that the Nariai horizons of an expanding universe and “black hole” universe are equivalent. One can then show that the Dark Energy and Dark Matter share an equivalence described by DE = DM*c^2
I think this is a very promising approach to pursue. One thing that really opened my eyes to the need for a complementary description was encountering “the worst prediction in all of physics”. How does a reasonable idea miss the mark by up to 120 orders of magnitude? It seems we need a drastic shift in thinking, but we also need to incorporate and extend all the accepted standard physics, which continually holds up to scrutiny. As usual, at some point we will leave this crisis and still not understand everything. So I guess there is no rush.
Complementary or dual views on objects are always useful! Although I don’t really understand Nariai horizons, the general direction of your thoughts looks good.
It’s not that MOND is any less good (at determining the age of the universe) than GR. It’s that they are both theories of dynamics rather than of history. That is what makes their evidence so strong (where MOND is the upgrade of GR) compared to LCDM: dynamics show up in every subsystem, in its sub-subsystems, and in whatever accelerates by the right amount, verifying the dynamics over and over again.
LCDM on the other hand leans heavily on a history that happened once everywhere and now we scramble to find remnants of evidence of it. The CDM part is closest to dynamics but relies on invisible forces and actors.
Thus I’m not really interested in viewing DM as dual to DE, since DM doesn’t really seem to be there. But how do you identify the Nariai solution with GR as it (approximately) describes the observable universe?
So the Nariai solution is GR. The basic idea, as I currently understand it, is to ask: what is the largest black hole that could fit in an otherwise de Sitter universe? It is an exact solution of the Einstein field equations, the limiting case of Schwarzschild-de Sitter.
An observer can generally be located at a pode, simultaneously external to the “black hole” horizon and internal to the de Sitter horizon or “Big Bang” horizon. And possibly vice versa as well, which is what I speculate. None of this in my opinion is absolutely real, whatever that means, it is distinct from our classical domain, but it’s perhaps a way of accounting for photon paths of observables.
Since the horizons are the same, I assume there is no local test you can do to distinguish whether you are looking at a Big Bang horizon or a Nariai black hole horizon. That’s just the tip of the iceberg. You can try imagining it on a clear night – go out and see for yourself; it’s no more mathematical than any other phenomenon we experience.
For the Nariai solution, inside the coincident spherical horizons, the average energy density associated with M is twice that associated with Λ – and their sum is equal to the average energy density of the de Sitter solution (M = 0).
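For readers who want to check that arithmetic, here is a minimal sketch in geometric units (G = c = 1), assuming the standard Schwarzschild-de Sitter form; the identification of these densities with dark energy and dark matter is the commenter’s own, not established physics. With

f(r) = 1 - \frac{2M}{r} - \frac{\Lambda r^2}{3},

the degenerate (Nariai) limit f(r_N) = f'(r_N) = 0 gives r_N = 1/\sqrt{\Lambda} and M = r_N/3. Averaging M over the horizon volume,

\rho_M = \frac{M}{\tfrac{4}{3}\pi r_N^3} = \frac{\Lambda}{4\pi} = 2\,\rho_\Lambda, \qquad \rho_\Lambda = \frac{\Lambda}{8\pi},

and their sum, 3\Lambda/(8\pi), equals the density \Lambda'/(8\pi) of a pure de Sitter solution (M = 0) with the same horizon radius, i.e. one with \Lambda' = 3\Lambda.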
What I take away from recent work including the Nariai solution https://arxiv.org/abs/2309.15897
is that the algebra of observables is a little more complex than the equations appear. The observer is assigned an extra degree of freedom, distinct from the gravitationally dressed observables in the bulk spacetime.
Does the solution you are referencing make sense in that context? I have my own interpretations of what that could mean, but would love to hear others’ thoughts on it.
Just to be transparent, here’s my crazy idea: what I like most about a 4+1 dimensional spacetime taken almost to the Nariai limit is that it gives a 3+1 dimensional spacetime with a tiny, Planck-scale fourth spatial dimension. Regions of spacetime where that dimension is smallest could be strongly time-dilated relative to regions where it was larger but still Planck-scale. Time dilation depends on distance to the Schwarzschild radius like a square root near zero, so this could have a large effect.
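For what it’s worth, the square-root behaviour mentioned here is just the near-horizon expansion of the Schwarzschild lapse (a sketch of the standard 3+1 result; the 4+1 construction itself is the commenter’s speculation):

\sqrt{1 - r_s/r} \approx \sqrt{\frac{r - r_s}{r_s}} \quad \text{as } r \to r_s,

so the time-dilation factor vanishes like the square root of the coordinate distance above the horizon.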
“Along these lines one can show that the Nariai horizons of an expanding universe and “black hole” universe are equivalent. One can then show that the Dark Energy and Dark Matter share an equivalence described by DE = DM*c^2”
Math isn’t physics. There is math that demonstrates the Earth is the center of the Universe. That math (Ptolemaic cosmology) was useful for more than a millennium – it got the right answers on the tests. The math was, however, just completely wrong about the physics.
Somehow that cautionary tale was set aside in theoretical physics about 50 years ago and cosmology became a maintenance project for a continuously flailing model based on now century old assumptions that have no basis in physics but are rather simplistic, empirically baseless metaphysical conceits. The crisis in cosmology is caused by the consensus belief that there is only One True Model, the Expanding Universe of the FLRW equations. But math isn’t physics and if a model is based solely on metaphysical beliefs, it isn’t science no matter how many epicycles or undetectable entities you add to “fit” physical reality to the model.
Also, about my 4+1 dimensional almost-Nariai idea: this is precisely the right setting to introduce Kaluza-Klein theory, since the extra dimension in the 4+1 Nariai spacetime has the shape of a circle, just as the extra dimension in KK does.
I am holding out hope that added symmetries arise so that we need no more dimensions than we do for understanding light.
@tritonstation my take (as a non-specialist) on it is that looking at galaxy rotation curves makes you a believer in MOND, yet those galaxy rotation curves weren't the first evidence for "dark matter"; I think it is all up in the air now that JWST has increased the tension between how old cosmology says the universe is and how long it took everything from stars, galaxies and supermassive black holes to develop.
Proposing solutions without first addressing the ‘bent ruler’ problems is not productive.
Several months ago, Stacy referenced this https://arxiv.org/pdf/2406.05050 paper by White et al. as a defining work on cosmic time dilation. The authors find that the light curves of more distant supernova events are longer than those of the local sample prior to correction for time dilation. After the correction, the distant sample overlays the local sample and there is order in Einstein’s universe. Except that there isn’t.
This paper does not allow for selection effects: our propensity to find the largest, brightest events with increasing distance. After correction for time dilation, White et al.’s most distant supernova bin (z = 0.74) has significantly shorter light curves than the putative local sample, as highlighted here: https://arxiv.org/abs/2005.09441 “Redshift evolution of the underlying type Ia supernova stretch distribution”
N. Nicolas et al. plot the ‘stretch factors’ used to normalize the distribution, and note that the most distant events have the largest ‘stretch factors’ (figure 6). The stretch factor has a negative correlation with light-curve width, indicative of more diminutive events. This runs completely counter to known selection effects, and could only be construed as a physical solution if type Ia supernovae are evolving to become brighter events in the later universe at about the same rate that the universe is expanding. Mature galaxies in the infant universe gut this evolutionary solution, which is critical to the White et al. assertion that they have confirmed time dilation as a universal constant. White et al. further state in their paper that they avoid using the ‘stretch factor’ in their calculations, thus ignoring the fact that evolution of the ‘stretch factor’ – indicative of smaller supernova events with increasing distance – is critical to their thesis. Since supernovae are such an important step in the distance ladder, any conclusions the supernova community foist on the rest of us must address this apparent supernova size evolution. This is a bent ruler.
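To make the terminology concrete, in the standard treatment the observed light-curve width factors as (a sketch using the generic stretch parameterisation, not necessarily the exact convention of either paper):

w_{\rm obs} = (1+z)\, s\, w_{\rm template},

so once the cosmological (1+z) factor is divided out, any residual trend of the stretch s with redshift has to be attributed to evolution of the supernova population or to selection effects, which is exactly the disagreement being described above.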