There are many tensions in the era of precision cosmology. The most prominent, at present, is the Hubble tension – the difference between traditional measurements, which consistently obtain H0 = 73 km/s/Mpc, and the best fit* to the acoustic power spectrum of the cosmic microwave background (CMB) observed by Planck, H0 = 67 km/s/Mpc. There are others of varying severity that are less widely discussed. In this post, I want to talk about a persistent tension in the baryon density implied by the measured primordial abundances of deuterium and lithium+. Unlike the tension in H0, this problem is nowhere near as widely discussed as it should be.
Framing
Part of the reason that this problem is not seen as an important tension has to do with the way in which it is commonly framed. In most discussions, it is simply the primordial lithium problem. Deuterium agrees with the CMB, so those must be right and lithium must be wrong. Once framed that way, it becomes a trivial matter specific to one untrustworthy (to cosmologists) observation. It’s a problem for specialists to sort out what went wrong with lithium: the “right” answer is otherwise known, so this tension is not real, making it unworthy of wider discussion. However, as we shall see, this might not be the right way to look at it.
It’s a bit like calling the acceleration discrepancy the dark matter problem. Once we frame it this way, it biases how we see the entire problem. Solving this problem becomes a matter of finding the dark matter. It precludes consideration of the logical possibility that the observed discrepancies occur because the force law changes on the relevant scales. This is the mental block I struggled mightily with when MOND first cropped up in my data; this experience makes it easy to see when other scientists succumb to it sans struggle.
Big Bang Nucleosynthesis (BBN)
I’ve talked about the cosmic baryon density here a lot, but I’ve never given an overview of BBN itself. That’s because it is well-established, and has been for a long time – I assume you, the reader, already know about it or are competent to look it up. There are many good resources for that, so I’ll give only the sketch necessary to set up the subsequent narrative – a sketch that will be too little for the experts, even though the narrative it sets up is one most experts are unaware of.
Primordial nucleosynthesis occurs in the first few minutes after the Big Bang when the universe is the right temperature and density to be one big fusion reactor. The protons and available neutrons fuse to form helium and other isotopes of the light elements. Neutrons are slightly more massive and less numerous than protons to begin with. In addition, free neutrons decay with a half-life of roughly ten minutes, so are outnumbered by protons when nucleosynthesis happens. The vast majority of the available neutrons pair up with protons and wind up in 4He while most of the protons remain on their own as the most common isotope of hydrogen, 1H. The resulting abundance ratio is one alpha particle for every dozen protons, or in terms of mass fractions&, Xp = 3/4 hydrogen and Yp = 1/4 helium. That is the basic composition with which the universe starts; heavy elements are produced subsequently in stars and supernova explosions.
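As a back-of-the-envelope check (standard bookkeeping, not anything specific to this post): if the neutron-to-proton ratio at the time of nucleosynthesis is n/p ≈ 1/7, and essentially every neutron is locked into 4He, the helium mass fraction follows as

\[
Y_p = \frac{2\,(n/p)}{1 + n/p} \approx \frac{2/7}{8/7} = \frac{1}{4}.
\]

Equivalently, of every 16 nucleons (2 neutrons plus 14 protons), the 2 neutrons pair with 2 protons to form a single 4He nucleus, leaving 12 lone protons – one alpha particle per dozen protons, with Xp = 12/16 = 3/4.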
Though 1H and 4He are by far the most common products of BBN, there are traces of other isotopes that emerge from BBN:

After hydrogen and helium, the next most common isotope to emerge from BBN is deuterium, 2H. It is the first thing made (one proton plus one neutron) but most of it gets processed into 4He, so after a brief peak, its abundance declines. How much it declines is very sensitive to Ωb: the higher the baryon density, the more deuterium gets gobbled up by helium before freeze-out. The following figure illustrates how the abundance of each isotope depends on Ωb:

If we can go out and measure the primordial abundances of these various isotopes, we can constrain the baryon density.
The Baryon Density
It works! Each isotope provides an independent estimate of Ωbh2, and they agree pretty well. This was the first and for a long time the only over-constrained quantity in cosmology. So while I am going to quibble about the exact value of Ωbh2, I don’t doubt that the basic picture is correct. There are too many details we have to get right in the complex nuclear reaction chains coupled to the decreasing temperature of a universe expanding at the rate required during radiation domination for this to be an accident. It is an exquisite success of the standard Hot Big Bang cosmology, albeit not one specific to LCDM.
Getting at primordial, rather than current, abundances is an interesting observational challenge that is too involved to go into in much detail here. Suffice it to say that it can be done, albeit to varying degrees of satisfaction. We can then compare the measured abundances to the theoretical BBN abundance predictions to infer the baryon density.

Deuterium is considered the best baryometer because its relic abundance is very sensitive to Ωbh2: a small change in baryon density corresponds to a large change in D/H. In contrast, 4He is a great confirmation of the basic picture – the primordial mass fraction has to come in very close to 1/4 – but the precise value is not very sensitive to Ωbh2. Most of the neutrons end up in helium no matter what, so it is hard to distinguish# a few more from a few less. (Note the huge zoom on the linear scale for 4He. If we plotted it logarithmically with decades of range as we do the other isotopes, it would be a nearly flat line.) Lithium is annoying for being double-valued right around the interesting baryon density, so that the observed lithium abundance can correspond to two values of Ωbh2. This behavior stems from the trade-off with 7Be, which is produced at a higher rate but decays to 7Li after a few months. For this discussion the double-valued ambiguity of lithium doesn’t matter, as the problem is that the deuterium abundance indicates an Ωbh2 even higher than the higher branch of lithium.
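To put a number on “very sensitive” (a commonly quoted approximation from the BBN literature, not a value stated in this post): near the densities of interest, the relic deuterium abundance falls roughly as a power law,

\[
{\rm D/H} \propto (\Omega_b h^2)^{-1.6},
\]

so a 10% uncertainty in the measured D/H maps into only a ~6% uncertainty in the inferred baryon density. The steepness of that curve is what makes deuterium such a good baryometer.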
BBN pre-CMB
The diagrams above and below show the situation in the 1990s, before CMB estimates became available. Consideration of all the available data in the review of Walker et al. led to the value Ωbh2 = 0.0125 ± 0.0025. This value** was so famous that it was Known. It formed the basis of my predictions for the CMB for both LCDM and no-CDM. This prediction hinged on BBN being correct and on our understanding of the experimental bounds on the baryon density. A few years after Walker’s work, Copi et al. provided the estimate++ 0.009 < Ωbh2 < 0.02. Those were the extreme limits of the time, as illustrated by the green box below:

Up until this point, the constraints on BBN had come mostly from helium observations in nearby galaxies and lithium measurements in metal poor stars. It was only just then becoming possible to obtain high quality spectra of sufficiently high redshift quasars to see weak deuterium lines associated with strongly damped primary hydrogen absorption in intergalactic gas along the line of sight. This is great: deuterium is the most sensitive baryometer, the redshifts were high enough to probe times early in the history of the universe, close to primordial, and the gas was in the middle of intergalactic nowhere so shouldn’t have been altered by astrophysical processes. These are ideal conditions, at least in principle.
First results were binary. Craig Hogan obtained a high deuterium abundance, corresponding to a low baryon density. Really low. From my Walker et al.-informed confirmation bias, too low. It was a brand new result, so promising but probably wrong. Then Tytler and his collaborators came up with the opposite result: a low deuterium abundance corresponding to a high baryon density: Ωbh2 = 0.019 ± 0.001. That seemed pretty high at the time, but at least it was within the bound Ωbh2 < 0.02 set by Copi et al. There was a debate between these high/low deuterium camps that ended in a rare act of intellectual honesty by a cosmologist when Hogan&& conceded. We seemed to have settled on the high end of the allowed range, just under Ωbh2 = 0.02.
Enter the CMB
CMB data started to be useful for constraining the baryon density in 2000 and improved rapidly. By that point, LCDM was already well-established, and I had published predictions for both LCDM and no-CDM. In the absence of cold dark matter, one expects a damping spectrum, with each peak lower than the one before it. For the narrow (factor of two) Known range of possible baryon densities, all the no-CDM models run together to essentially the same first-to-second peak ratio.

Adding CDM into the mix adds a driver to the oscillations. This fights the baryonic damping: the CDM is like a parent pushing a swing while the baryons are the kid dragging his feet. This combination makes just about any pattern of peaks possible. Not all free parameters are created equal: the addition of a single free parameter, ΩCDM, makes it possible to fit any plausible pattern of peaks. Without it (no-CDM means ΩCDM = 0), only the damping spectrum is allowed.
For BBN as it was known at the time, the clear difference was in the relative amplitude$$ of the first and second peaks. As can be seen above, the prediction for no-CDM was correct and that for LCDM was not. So we were done, right?
Of course not. To the CMB community, the only thing that mattered was the fit to the CMB power spectrum, not some obscure prediction based on BBN. Whatever the fit said was True; too bad for BBN if it didn’t agree.
The way to fit the unexpectedly small## second peak was to crank up the baryon density. To do that, Tegmark & Zaldarriaga (2000) needed 0.022 < Ωbh2 < 0.040. That’s the first blue point below. This was the first time that I heard it suggested that the baryon density could be so high.

The astute reader will note that the CMB-fit 0.022 < Ωbh2 < 0.040 sits entirely outside the BBN bounds 0.009 < Ωbh2 < 0.02. So we’re done, right? Well, no – the community simply ignored the successful a priori prediction of the no-CDM scenario. That was certainly easier than wrestling with its implications, and no one seems to have paused to contemplate why the observed peak ratio came in exactly at the one unique value that it could obtain in the case of no-CDM.
For a few years, the attitude seemed to be that BBN was close but not quite right. As the CMB data improved, the baryon density came down, ultimately settling on Ωbh2 = 0.0224 ± 0.0001. Part of the reason for this decline from the high initial estimate is covariance. In this case, the tilt plays a role: the baryon density declined as ns = 1 → 0.965 ± 0.004. Getting the second peak amplitude right takes a combination of both.
Now we’re back in the ballpark, almost: Ωbh2 = 0.0224 is not ridiculously far above the BBN limit Ωbh2 < 0.02. Close enough for Spergel et al. (2003) to say “The remarkable agreement between the baryon density inferred from D/H values and our [WMAP] measurements is an important triumph for the basic big bang model.” This was certainly true given the size of the error bars on both deuterium and the CMB at the time. It also elides*** any mention of either helium or lithium or the fact that the new Known was not consistent with the previous Known. Ωbh2 = 0.0224 was always the ally; Ωbh2 = 0.0125 was always the enemy.
Note, however, that deuterium made a leap from below Ωbh2 = 0.02 to above 0.02 exactly when the CMB indicated that it should do so. They iterated to better agreement and pretty much stayed there. Hopefully that is the correct answer, but given the history of the field, I can’t help worrying about confirmation bias. I don’t know if that is what’s going on, but if it were, this convergence over time is what it would look like.
Lithium does not concur
Taking the deuterium results at face value, there really is excellent agreement with the LCDM fit to the CMB, so I have some sympathy for the desire to stop there. Deuterium is the best baryometer, after all. Helium is hard to get right at a precise enough level to provide a comparable constraint, and lithium, well, lithium is measured in stars. Stars are tiny, much smaller than galaxies, and we know those are too puny to simulate.
Spite & Spite (1982) [those are names, pronounced “speet”; we’re not talking about spiteful stars] discovered what is now known as the Spite plateau, a level of constant lithium abundance in metal poor stars, apparently indicative of the primordial lithium abundance. Lithium is a fragile nucleus; it can be destroyed in stellar interiors. It can also be formed as the fragmentation product of cosmic ray collisions with heavier nuclei. Both of these things go on in nature, making some people distrustful of any lithium abundance. However, the Spite plateau is a sort of safe zone where neither effect appears to dominate. The abundance of lithium observed there is indeed very much in the right ballpark to be a primordial abundance, so that’s the most obvious interpretation.
Lithium indicates a lowish baryon density. Modern estimates are in the same range as BBN of old; they have not varied systematically with time. There is no tension between lithium and pre-CMB deuterium, but lithium disagrees with LCDM fits to the CMB and with post-CMB deuterium. This tension is both persistent and statistically significant (Fields 2011 describes it as “4–5σ”).

I’ve seen many models that attempt to fix the lithium abundance, e.g., by invoking enhanced convective mixing via <<mumble mumble>> so that lithium on the surface of stars is subject to destruction deep in the stellar interior in a previously unexpected way. This isn’t exactly satisfactory – it should result in a mess, not a well-defined plateau – and other attempts I’ve seen to explain away the problem do so with at least as much contrivance. All of these models appeared after lithium became a problem; they’re clearly motivated by the assumption that the CMB is correct, so the discrepancy must be specific to lithium, so there must be something weird about stars to explain it.
Another way to illustrate the tension is to use Ωbh2 from the Planck fit to predict what the primordial lithium abundance should be. The Planck-predicted band is clearly higher than and offset from the stars of the Spite plateau. There should be a plateau, sure, but it’s in the wrong place.

An important recent observation is that a similar lithium abundance is obtained in the metal poor interstellar gas of the Small Magellanic Cloud. That would seem to obviate any explanation based on stellar physics.

We can also illustrate the tension on the Schramm diagram. This version adds the best-fit CMB value and the modern deuterium abundance. These are indeed in excellent agreement, but they don’t intersect with lithium. The deuterium-lithium tension appears to be real, and comparable in significance to the H0 tension.
So what’s the answer?
I don’t know. The logical options are
- A systematic error in the primordial lithium abundance
- A systematic error in the primordial deuterium abundance
- Physics beyond standard BBN
I don’t like any of these solutions. The data for both lithium and deuterium are what they are. As astronomical observations, both are subject to the potential for systematic errors and/or physical effects that complicate their interpretation. I am also extremely reluctant to consider modifications to BBN. There are occasional suggestions to this effect, but BBN is a lot easier to break than it is to fix, especially for what is a fairly small disagreement in the absolute value of Ωbh2.
I have left the CMB off the list because it isn’t part of BBN: its constraint on the baryon density is real, but involves completely different physics. It also involves different assumptions, i.e., the LCDM model and all its invisible baggage, while BBN is just what happens to ordinary nucleons during radiation domination in the early universe. CMB fits are corroborative of deuterium only if we assume LCDM, which I am not inclined to accept: deuterium disagreed with the subsequent CMB data before it agreed. Whether that’s just progress or a sign of confirmation bias, I also don’t know. But I do know confirmation bias has bedeviled the history of cosmology, and as the H0 debate shows, we clearly have not outgrown it.
The appearance of confirmation bias is augmented by the response time of each measured elemental abundance. Deuterium is measured using high redshift quasars; the community that does that work is necessarily tightly coupled to cosmology. Its response was practically instantaneous: as soon as the CMB suggested that the baryon density needed to be higher, conforming D/H measurements appeared. Indeed, I recall when that first high red triangle appeared in the literature, a colleague snarked to me “we can do that too!” In those days, those of us who had been paying attention were all shocked at how quickly Ωbh2 = 0.0125 ± 0.0025 was abandoned for literally double that value, Ωbh2 = 0.025 ± 0.001. That’s 4.6 sigma for those keeping score.
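For those keeping score at home, the arithmetic is simple enough to check. A minimal sketch (mine, not anyone’s published code):

```python
from math import sqrt

# Two estimates of the baryon density Omega_b h^2 quoted above:
old_val, old_err = 0.0125, 0.0025   # Walker et al. (pre-CMB BBN consensus)
new_val, new_err = 0.025, 0.001     # post-CMB deuterium value

# Difference in units of the combined (quadrature) uncertainty:
tension = abs(new_val - old_val) / sqrt(old_err**2 + new_err**2)
print(f"{tension:.1f} sigma")   # -> 4.6 sigma
```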
The primordial helium abundance is measured in nearby dwarf galaxies. That community is aware of cosmology, but not as strongly coupled to it. Estimates of the primordial helium abundance have drifted upwards over time, corresponding to higher implied baryon densities. It’s as if confirmation bias is driving things towards the same result, but on a timescale that depends on the sociological pressure of the CMB imperative.

I am not accusing anyone of trying to obtain a particular result. Confirmation bias can be a lot more subtle than that. There is an entire field of study of it in psychology. We “humans actively sample evidence to support prior beliefs” – none of us are immune to it.
In this case, how we sample evidence depends on the field we’re active in. Lithium is measured in stars. One can have a productive career in stellar physics while entirely ignoring cosmology; that community is the least likely to be perturbed by edicts from the CMB community. The inferred primordial lithium abundance has not budged over time.
What’s your confirmation bias?
I try not to succumb to confirmation bias, but I know that’s impossible. The best I can do is change my mind when confronted with new evidence. This is why I went from being sure that non-baryonic dark matter had to exist to taking seriously MOND as the theory that predicted what I observed.
I do try to look at things from all perspectives. Here, the CMB has been a roller coaster. Putting on an LCDM hat, the location of the first peak came in exactly where it was predicted: this was strong corroboration of a flat FLRW geometry. What does it mean in MOND? No idea – MOND doesn’t make a prediction about that. The amplitude of the second peak came in precisely as predicted for the case of no-CDM. This was corroboration of the ansatz inspired by MOND, and the strongest possible CMB-based hint that we might be barking up the wrong tree with LCDM.
As an exercise, I went back and maxed out the baryon density as it was known before the second peak was observed. We already thought we knew the LCDM parameters well enough to predict that peak. We couldn’t. The amplitude of the second peak came as a huge surprise to LCDM; everyone acknowledged that at the time (if pressed; many simply ignored it). Nowadays this is forgotten, or people have gaslit themselves into believing this was expected all along. It was not.

From the perspective of no-CDM, we don’t really care whether deuterium or lithium hits closer to the right baryon density. All plausible baryon densities predict essentially the same A1:2 amplitude ratio. Once we admit CDM as a possibility, then the second peak amplitude becomes very sensitive to the mix of CDM and baryons. From this perspective, the lithium-indicated baryon density is unacceptable. That’s why it is important to have a test that is independent of the CMB. Both deuterium and lithium provide that, but they disagree about the answer.
Once we broke BBN to fit the second peak in LCDM, we were admitting (if not to ourselves) that the a priori prediction of LCDM had failed. Everything after that is a fitting exercise. There are enough free parameters in LCDM to fit any plausible power spectrum. Cosmologists are fond of saying there are thousands of independent multipoles, but that overstates the case: it doesn’t matter how finely we sample the wave pattern, it matters what the wave pattern is. That is not as over-constrained as it is made to sound. LCDM is, nevertheless, an excellent fit to the CMB data; the test then is whether the parameters of this fit are consistent with independent measurements. It was until it wasn’t; that’s why we face all these tensions now.
Despite the success of the prediction of the second peak, no-CDM gets the third peak wrong. It does so in a way that is impossible to fix short of invoking new physics. We knew that had to happen at some level; empirically that level occurs at L = 600. After that, it becomes a fitting exercise, just as it is in LCDM – only now, one has to invent a new theory of gravity in which to make the fit. That seems like a lot to ask, so while it remained a logical possibility, LCDM seemed the more plausible explanation for the CMB, if not for the dynamical data. From this perspective, that A1:2 came out bang on the value predicted by no-CDM must just be one heck of a cosmic fluke. That’s easy to accept if you were unaware of the prediction or scornful of its motivation; less so if you were the one who made it.
Either way, the CMB is now beyond our ability to predict. It has become a fitting exercise, the chief issue being which paradigm to fit it in. In LCDM, the fit follows easily enough; the question is whether the result agrees with other data: are these tensions mere hiccups in the great tradition of observational cosmology? Or are they real, demanding some new physics?
The widespread attitude among cosmologists is that it will be impossible to fit the CMB in any way other than LCDM. That is a comforting thought (it has to be CDM!) and for a long time seemed reasonable. However, it has been contradicted by the success of Skordis & Zlosnik (2021) using AeST, which can fit the CMB as well as LCDM.

AeST is a very important demonstration that one does not need dark matter to fit the CMB. One does need other fields+++, so now the reality of those fields has to be examined. Where this show stops, nobody knows.
I’ll close by noting that the uniqueness claimed by the LCDM fit to the CMB is a property more correctly attributed to MOND in galaxies. It is less obvious that this is true because it is always possible to fit a dark matter model to data once presented with the data. That’s not science, that’s fitting French curves. To succeed, a dark matter model must “look like” MOND. It obviously shouldn’t do that, so modelers refuse to go there, and we continue to spin our wheels and dig the rut of our field deeper.
Note added in proof, as it were: I’ve been meaning to write about this subject for a long time, but hadn’t, in part because I knew it would be long and arduous. Being deeply interested in the subject, I had to slap myself repeatedly to refrain from spending even more time updating the plots with publication date as an axis: nothing has changed, so that would serve only to feed my OCD. Even so, it has taken a long time to write, which I mention because I had completed the vast majority of this post before the IAU announced on May 15 that Cooke & Pettini have been awarded the Gruber prize for their precision deuterium abundance. This is excellent work (it is one of the deuterium points in the relevant plot above), and I’m glad to see this kind of hard, real-astronomy work recognized.
The award of a prize is a recognition of meritorious work but is not a guarantee that it is correct. So this does not alter any of the concerns that I express here, concerns that I’ve expressed for a long time. It does make my OCD feel obliged to comment at least a little on the relevant observations, which is itself considerably involved, but I will tack on some brief discussion below, after the footnotes.
*These methods were in agreement before they were in tension, e.g., Spergel et al. (2003) state: “The agreement between the HST Key Project value and our [WMAP CMB] value, h = 0.72 ± 0.05, is striking, given that the two methods rely on different observables, different underlying physics, and different model assumptions.”
+Here I mean the abundance of the primary isotope of lithium, 7Li. There is a different problem involving the apparent overabundance of 6Li. I’m not talking about that here; I’m talking about the different baryon densities inferred separately from the abundances of D/H and 7Li/H.
&By convention, X, Y, and Z are the mass fractions of hydrogen, helium, and everything else. Since the universe starts from a primordial abundance of Xp = 3/4 and Yp = 1/4, and stars are seen to have approximately that composition plus a small sprinkling of everything else (for the sun, Z ≈ 0.02), and since iron lines are commonly measured in stars to trace Z, astronomers fell into the habit of calling Z the metallicity even though oxygen is the third most common element in the universe today (by both number and mass). Since everything in the periodic table that isn’t hydrogen and helium is a small fraction of the mass, all the heavier elements are often referred to collectively as metals despite the unintentional offense to chemistry.
$The factor of h2 appears because of the definition of the critical density ρc = 3H0^2/(8πG): Ωb = ρb/ρc. The physics cares about the actual density ρb, but Ωbh2 = 0.02 is a lot more convenient to write than ρb,now = 3.75 × 10^(-31) g/cm^3.
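As a numeric sanity check on that last number (a minimal sketch of mine, using standard CGS constants):

```python
from math import pi

G   = 6.674e-8        # gravitational constant [cm^3 g^-1 s^-2]
Mpc = 3.086e24        # one megaparsec [cm]

H0 = 100 * 1e5 / Mpc  # 100 km/s/Mpc in s^-1, i.e., h = 1
rho_crit = 3 * H0**2 / (8 * pi * G)   # critical density for h = 1

print(f"rho_crit = {rho_crit:.3e} g/cm^3")         # ~1.878e-29
print(f"rho_b    = {0.02 * rho_crit:.3e} g/cm^3")  # ~3.76e-31, matching the value quoted above
```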
#I’ve worked on helium myself, but was never able to do better than Yp = 0.25 ± 0.01. This corroborates the basic BBN picture, but does not suffice as a precise measure of the baryon density. To do that, one must obtain a result accurate to the third place of decimals, as discussed in the exquisite works of Kris Davidson, Bernie Pagel, Evan Skillman, and their collaborators. It’s hard to do for both observational reasons and because a wealth of subtle atomic physics effects come into play at that level of precision – helium has multiple lines; their parent population levels depend on the ionization mechanism, the plasma temperature, its density, and fluorescence effects as well as abundance.
**The value reported by Walker et al. was phrased as Ωbh50^2 = 0.05 ± 0.01, where h50 = H0/(50 km/s/Mpc); translating this to the more conventional h = H0/(100 km/s/Mpc) decreases these numbers by a factor of four and leads to the impression of more significant digits than were claimed. It is interesting to consider the psychological effect of this numerology. For example, the modern CMB best-fit value in this phrasing is Ωbh50^2 = 0.09, four sigma higher than the value Known from the combined assessment of the light isotope abundances. That seems like a tension – not just involving lithium, but the CMB vs. all of BBN. Amusingly, the higher baryon density needed to obtain a CMB fit assuming LCDM is close to the threshold where we might have gotten away without the dynamical need (Ωm > Ωb) that motivated non-baryonic dark matter in the first place. (For further perspective at a critical juncture in the development of the field, see Peebles 1999).
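Spelled out, the conversion is a one-liner:

\[
h_{50} = \frac{H_0}{50} = 2h \;\;\Rightarrow\;\; \Omega_b h^2 = \frac{\Omega_b h_{50}^2}{4} = \frac{0.05 \pm 0.01}{4} = 0.0125 \pm 0.0025.
\]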
The use of h50 itself is an example of the confirmation bias I’ve mentioned before as prevalent at the time, that Ωm = 1 and H0 = 50 km/s/Mpc. I would love to be able to do the experiment of sending the older cosmologists who are now certain of LCDM back in time to share the news with their younger selves who were then equally certain of SCDM. I suspect their younger selves would ask their older selves at what age they went insane, if they didn’t simply beat themselves up.
++Craig Copi is a colleague here at CWRU, so I’ve asked him about the history of this. He seemed almost apologetic, since the current “right” baryon density from the CMB now is higher than his upper limit, but that’s what the data said at the time. The CMB gives a more accurate value only once you assume LCDM, so perhaps BBN was correct in the first place.
&&Or succumbed to peer pressure, as that does happen. I didn’t witness it myself, so don’t know.
$$The absolute amplitude of the no-CDM model is too high in a transparent universe. Part of the prediction of MOND is that reionization happens early, causing the universe to be a tiny bit opaque. This combination came out just right for τ = 0.17, which was the original WMAP measurement. It also happens to be consistent with the EDGES cosmic dawn signal and the growing body of evidence from JWST.
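For scale (standard reionization physics, not a number from the text): scattering suppresses the sub-horizon anisotropy power by a factor

\[
e^{-2\tau} = e^{-0.34} \approx 0.71,
\]

so τ = 0.17 corresponds to roughly a 30% reduction in power – a tiny bit opaque indeed.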
##The second peak was unexpectedly small from the perspective of CDM; it was both natural and expected in no-CDM. At the time, it was computationally expensive to calculate power spectra, so people had pre-computed coarse grids within which to hunt for best fits. The range covered by the grids was informed by extant knowledge, of which BBN was only one element. From a dynamical perspective, Ωm > 0.2 was adopted as a hard limit that imposed an edge in the grids of the time. There was no possibility of finding no-CDM as the best fit because it had been excluded as a possibility from the start.
***Spergel et al. (2003) also say “the best-fit Ωbh2 value for our fits is relatively insensitive to cosmological model and dataset combination as it depends primarily on the ratio of the first to second peak heights (Page et al. 2003b)” which is of course the basis of the prediction I made using the baryon density as it was Known at the time. They make no attempt to test that prediction, nor do they cite it.
+++I’ve heard some people assert that this is dark matter by a different name, so is a success of the traditional dark matter picture rather than of modified gravity. That’s not at all correct. It’s just stage three in the list of reactions to surprising results identified by Louis Agassiz.
All of the figures below are from Cooke & Pettini (2018), which I employ here to briefly illustrate how D/H is measured. This is the level of detail I didn’t want to get into for either deuterium or helium or lithium, which are comparably involved.
First, here is a spectrum of the quasar they observe, Q1243+307. The quasar itself is not the object of interest here, though quasars are certainly interesting! Instead, we’re looking at the absorption lines along the line of sight; the quasar is being used as a spotlight to illuminate the gas between it and us.

The big hump around 4330 Å is Lyman α emission from the quasar itself. Lyα is the n = 2 to 1 transition of hydrogen, Lyβ is the n = 3 to 1 transition, and so on. The rest frame wavelength of Lyα is far into the ultraviolet at 1216 Å; we see it redshifted to z = 2.558. The rest of the spectrum is continuum and emission lines from the quasar with absorption lines from stuff along the line of sight. Note that the red end of the spectrum at wavelengths longer than 4400 Å is mostly smooth with only the occasional absorption line. Blueward of 4300 Å, there is a huge jumble. This is not noise, this is the Lyα forest. Each of those lines is absorption from hydrogen in clouds at different distances, hence different redshifts, along the line of sight.
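The location of that hump is just the redshift at work:

\[
\lambda_{\rm obs} = (1+z)\,\lambda_{\rm rest} = 3.558 \times 1216\,\text{Å} \approx 4327\,\text{Å}.
\]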
Most of the clouds in the Lyα forest are ephemeral. The cross section for Lyα is huge, so it takes very little hydrogen to gobble it up. Most of these lines represent very low column densities of neutral hydrogen gas. Once in a while, though, one encounters a higher column density cloud that has enough hydrogen to be completely opaque to Lyα. These are damped Lyα systems. In damped systems, one can often spot the higher order Lyman lines (these are marked in red in the figure). It also means that there is enough hydrogen present to have a shot at detecting the slightly shifted Lyα line of deuterium. This is where the abundance ratio D/H is measured.
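How slight is “slightly shifted”? A minimal back-of-the-envelope sketch (mine, not from Cooke & Pettini; the masses are standard values):

```python
# The Rydberg "constant" depends on the reduced mass of the electron-nucleus
# system, so the Lyman-alpha line of deuterium lands at a slightly different
# wavelength than that of ordinary hydrogen.
m_e = 9.109e-28    # electron mass [g]
m_p = 1.6726e-24   # proton mass [g]
m_d = 3.3436e-24   # deuteron mass [g]

lam_H = 1215.67    # hydrogen Lyman-alpha rest wavelength [Angstrom]
# wavelength scales as 1/R_M, with R_M = R_inf / (1 + m_e/M):
lam_D = lam_H * (1 + m_e / m_d) / (1 + m_e / m_p)

c = 2.998e5        # speed of light [km/s]
dv = c * (lam_D - lam_H) / lam_H
print(f"{lam_D - lam_H:+.2f} Angstrom, {dv:+.1f} km/s")
# -> about -0.33 Angstrom (-82 km/s), blueward of the hydrogen line
```

An offset of −82 km/s parks the weak deuterium line on the blue wing of the saturated hydrogen line, which is why only high column density systems offer a shot at detecting it.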
To measure D/H, one has not only to detect the lines, but also to model and subtract the continuum. This is a tricky business in the best of times, but here its importance is magnified by the huge difference between the primary Lyα line, which is so strong that it is completely black, and the deuterium Lyα line, which is incredibly weak. A small error in the continuum placement will not matter to the measurement of the absorption by the primary line, but it could make a huge difference to that of the weak line. I won’t even venture to discuss the nonlinear difference between these limits due to the curve of growth.

The above examples look pretty good. The authors make the necessary correction for the varying spectral sensitivity of the instrument, and take great care to simultaneously fit the emission of the quasar and the absorption. I don’t think they’ve done anything wrong; indeed, it looks like they did everything right – just as the people measuring lithium in stars have.
Still, as an experienced spectroscopist, I see some subtle details that make me queasy. There are two independent observations, which is awesome, and the data look almost exactly the same, a triumph of repeatability. The fitted models are nearly identical, but if you look closely, you can see the model cuts slightly differently along the left edge of the damped absorption around 4278 Å in the two versions of the spectrum, and again along the continuum towards the right edge.
These differences are small, so hopefully don’t matter. But what is the continuum, really? The model line goes through the data, because what else could one possibly do? But there is so much Lyα absorption, is that really continuum? Should the continuum perhaps trace the upper envelope of the data? A physical effect that I worry about is that weak Lyα absorption is so ubiquitous that we never see the true continuum, but rather the continuum minus a tiny bit of extraordinarily weak (Gunn-Peterson) absorption. If the true continuum from the quasar is just a little higher, then the primary hydrogen absorption is unaffected but the weak deuterium absorption would go up a little. That means slightly higher D/H, which means lower Ωbh2, which is the direction in which the measurement would need to move to come into closer agreement with lithium.
Is the D/H measurement in error? I don’t know. I certainly hope not, and I see no reason to think it is. I do worry that it could be. The continuum level is one thing that could go wrong; there are others. My point is merely that we shouldn’t assume it has to be lithium that is in error.
An important check is whether the measured D/H ratio depends on metallicity or column density. It does not. There is no variation with metallicity as measured by the logarithmic oxygen abundance relative to solar (left panel below). Nor does it appear to depend on the amount of hydrogen in the absorbing cloud (right panel). In the early days of this kind of work there appeared to be a correlation, raising the specter of a systematic. That is not indicated here.

I’ll close by noting that Ωbh2 from this D/H measurement is indeed in very good agreement with the best-fit Planck CMB value. The question remains whether the physics assumed by that fit, baryons+non-baryonic cold dark matter+dark energy in a strictly FLRW cosmology, is the correct assumption to make.
This post is the last in a string of amazing, in-depth analyses of the controversial issues related to the LCDM/MOND controversy. They are closer to chapters in an authoritative textbook than a blog post. (I assume you are probably considering gathering them into a textbook, which should be a best seller). However, I have been disappointed by the comments generated by these posts. I had hoped that your status as a highly regarded expert in the field, combined with the effort, care and thoughtfulness of your posts, would have stimulated an interesting discussion by other professional astrophysicists working on these subjects. Certainly, there must be important contentious issues associated with the various physical “facts”, theories, etc. that you discuss that could be carefully debated by others involved in research in these areas. In other fields, I have found that this kind of comment discussion provides valuable checks on the viewpoint of the original post. Unfortunately, that has not been the case for your previous posts, in which the comment section has been filled primarily by amateurs (like me) who are not capable of adding anything cogent to the discussion. I am writing this comment in the hope that it might stimulate other working astrophysicists to provide serious alternative viewpoints in this discussion.
Do you think the lack of such contributions in the comments may be an indication of the mentality of the LCDM scientific community? Just from my amateur viewpoint, they seem to be primarily occupied in burying their heads in the sand and trying their hardest not to engage with the issues you discuss. If that is the case, it is a sad comment on the state of science.
Thanks. I appreciate it.
A couple of years ago, I considered giving up writing these posts. They’re time consuming, and feel like howling at the moon – which is why I refrained from starting a blog 20+ years ago. Then I attended a scientific conference, one of the first it was possible to attend in person post-covid. I was encouraged to keep doing it by the number of (mostly junior) scientists who thanked me in person, saying how helpful it was.
I think the fact that they felt obliged to say this sotto voce – privately, in person – answers your question.
The consensus is the crisis in theoretical physics.
If we’re living in a local void (as suggested at the link below, shared before on this blog iirc) would that be affecting e.g. lithium concentrations in local stars? Because it implies an under-density in the primordial plasma?
More generally, how does large-scale structure affect the primordial density calculation, if at all?
https://academic.oup.com/mnras/article/536/4/3232/7924235
Good question. The horizon for causal contact at the time of BBN is about the size of a planet, so there is no opportunity for the baryon density to vary on larger scales.
Thanks!
Failing a definitive list of posts, is there a post which marks in your own mind a time when this project began?
This blog I started in 2016. The project started in the mid-90s with the MOND pages – http://astroweb.case.edu/ssm/mond/
Just the other day, I was trying to get WordPress to build a table of contents for the posts. It does not seem inclined to do this. There are plugins to build TOCs for individual posts, but not for the collection of posts themselves so far as I can tell. If anyone knows of such a thing, please let me know.
A somewhat more organized collection is provided by Rogue Scholar – https://rogue-scholar.org/communities/tritonstation/records?q=&l=list&p=1&s=10&sort=newest
Hear, hear! I entered this field as a cosmic gadfly, questioning the ‘result driven’ data pipelines. I was challenged in turn to come up with anything better. I tried; but I gave up when the attack on good science in general (mostly within the US) became so damaging. (OK, I also lost funding…). I missed the 2017 Planck data release. It absolutely thrills me that tension is now a feature that has the various analytical teams digging in and young astrophysicists looking in new directions. Researchers like Stacy have made everyone aware of confirmation bias. I am optimistic that as new directions are explored, errors will be uncovered and the cosmos will become knowable on a grand scale.
I know what you mean, but you shouldn’t say ‘professional physicists’, you should say ‘serious physicists’. Einstein wasn’t a professional physicist when he published SR, but he was a serious one. SR was an amateur theory, and during his annus mirabilis, he published some very good amateur papers. Outsiders have done great things in physics (Galileo ‘dropped out of college’, changed subject, and taught himself – Leibniz, Faraday, and many others took unconventional routes). In Einstein’s early years there was a crisis in physics, and an outsider came in and solved it – we’ve heard of him because in those days it was easier to publish in a respectable journal without an academic affiliation. Nowadays, even though we don’t like discrimination in the 21st century – and have made progress removing other forms of it, such as against women physicists, black physicists and others – outsiders, although their predecessors have shaped physics in the past, are often forced to publish in lowly journals, and then excluded because they did, even if they find something that can be shown to be true.
You work in the field of astronomy and astrophysics, and hopefully you can answer a question I’ve had for a long time.
My own expertise is programming, that is, automating processes. I’m wondering how much sense it would make to create a sort of “community” analysis pipeline, where the instrumentalist provide their expertise of the raw data, you provide your astronomy expertise, someone in nuclear physics does the same and so on.
I may be wrong, but it seems to me you rely on the result of papers to choose your input (what’s the uncertainty on raw data, what’s the consensus value on such and such constants, and so on) and then do some custom analysis where you choose what to neglect. A form of art that kind of defines the work of the physicist.
If there was something already established by the community of scientists, one could hope that new data could be processed overnight to provide new values for Ωbh2. Obviously everyone should be able to tune the system according to their own appreciation of what the correct analysis is. However, you could profit from the latest advances in fields where you lack expertise, and maybe get a result you trust more because you can see all the steps. Like, a step that assumes the existence of Dark Matter buried deep in the calibration of spectra (I’m just making this up; you get the idea).
In the old days, we had to do everything ourselves. That’s not really true of course – there was an infrastructure of telescopes and instruments, the software to do basic data manipulation, and the experts who built all that. But individual astronomers like myself would take it from there, observing, writing custom analysis software for the problem at hand, reducing and analyzing the data, applying the appropriate theory to interpret it, etc. We originated from a culture of self-reliance that runs counter to what you suggest.
More recently, the community has moved in the direction you suggest, with it being more widely adopted and formalized in some subfields and less in others. I don’t know of anything quite as comprehensive as what you suggest; in the past the equivalent has usually been accomplished through collaboration.
It’s a good idea, and very much in the spirit of open science. People tend to want credit for what they do, so are unlikely to contribute pieces to such a system if the rewards are unclear. It would also tend to militate against people learning outside their chosen lane, which would preclude the development of people like me who have worked across enough of these necessary elements to be able to intelligently question the black boxes built by others.
It makes sense. Thank you!
So, big bang nucleosynthesis predicted no cold dark matter?
Somehow I have a feeling that if the CMB data was around in the early 1980s, cosmologists would have taken a look at the CMB data and said that cold dark matter was absolutely falsified by big bang nucleosynthesis, and with other observations later in the decade showing the hot dark matter paradigm unviable for our universe, cosmology would have moved on from dark matter entirely to a different paradigm for structure formation, and would have simply ignored the acceleration discrepancy in galaxies and clusters like they ignore the deuterium-lithium discrepancy in our timeline.
I’m not old enough to have lived through this, but one strand of cosmology in the ’70s was a very low density (baryon only) open universe, e.g., https://ui.adsabs.harvard.edu/abs/1974ApJ...194..543G/abstract
Though we didn’t yet have detections of fluctuations in the CMB, we did have upper limits that ruled out the baryon-only prediction, assuming structure grew gravitationally from those as-yet undetected perturbations. That is now frequently portrayed as the reason to have cold dark matter, and it is, but that only became accepted in the ’80s with the wealth of evidence for acceleration discrepancies, which we mislabelled dark matter. That linguistic mistake gave credence to the CDM interpretation where perhaps it should have done the opposite, as you say.
In addition to the paper by Gott, Gunn, Schramm, & Tinsley, in 1975 Sandage & Tammann published: “Steps toward the Hubble constant. VI. The Hubble constant determined from redshifts and magnitudes of remote Sc I galaxies: the value of q0” – in which they stated q0 could be as low as 0.03 (which would mean Ω = 0.06, which is the same value that Gott et al. considered as most likely).
https://articles.adsabs.harvard.edu/pdf/1975ApJ...197..265S
Indeed. When I did start learning about cosmology as an undergraduate in the early ’80s, an open universe seemed quite plausible, even the most likely option. Then Inflation, with its mandate that Omega = 1.00000, captured the imaginations of cosmologists, and we had a[nother] theoretical motivation for why there had to be dark matter. That’s why Omega = 1 appears as one of the roots of the problem on the dark matter tree.
Thank you for these last several posts on tensions. I certainly can add nothing to David’s acclamation.
I do struggle with the material. But, that is fine. I received my first subscription to Scientific American in 1972 — when all, or nearly all, of the articles had been written by working scientists. I would rather struggle than read “canned” science promotion funneled to the public by decent, well-meaning science communicators. So, I certainly appreciate your efforts!
(I eventually abandoned Scientific American for Science Magazine over this change.)
Even though some of the effort to do this may seem senseless at times, I hope that the exercise of writing it out for a blog helps to organize thoughts. It may not help for published papers. Perhaps it helps for more informal contexts.
People who know me ask me why I work on mathematics no one will read. The answer is simple: it helps me to sort out the issues which none of the professionals will address. Sometimes insight only comes from revisiting issues in many different ways.
Your blog is greatly appreciated. I hope you are receiving some sort of intangible benefit for your efforts.
Hi Stacy. Thank you for this post on the primordial abundances of Deuterium and Lithium.
With regard to Confirmation Bias, I would prefer to plot the Abundances, (D/H) & (Li/H), rather than the Baryon Density. A conversion factor is involved in getting to the baryon density and this varies between papers. It would be interesting to see how the measured abundances have changed with time and whether they show the same confirmation bias. Getting the abundances means measuring absorption lines in spectra and one has to place the continuum somewhere, which is often done by eye. So, some subjective judgement (bias) is involved. In many cases the raw observations are available for anyone to go back and repeat the data analysis, perhaps with more advanced data reduction tools.
With regard to Big Bang Nucleosynthesis (BBN), Cooke & Pettini (2018) note a small discrepancy between a calculated and measured reaction rate that feeds into the BBN calculations. So, changes in BBN work are still possible, which would lead to changes in the baryon density.
For numerologists I notice a coincidence in quasar Q1243+307 used by Cooke & Pettini (2018). The redshift is 2.526 and the (D/H) abundance is 2.527 x 10^(-5).
Yes, it is hard to choose what to plot. The plots with the long time baseline are from a version I published a long time ago with everything on it; to do that, the baryon density is useful for placing everything on the same scale. Here I’ve made two versions to isolate D and Li. I think I tabulated the measured values as well as the inferred baryon density, but probably not in all cases. This risks feeding the OCD in a way I’ve explicitly tried to avoid, because it doesn’t matter: yes, the conversions change, but that only matters to the third decimal; it isn’t going to reconcile the two.
Your concern about the continuum level is the same as mine. Reading C&P, I am somewhat mollified on this point; they’re fitting everything, not just setting the continuum by eye. But is there any continuum to fit? So much has been taken out by intervening absorption, how do we know where it started? Is any automated procedure, which inevitably has a set of human-made assumptions built-in, actually better than the eye-brain?
“… an exquisite success of the standard Hot Big Bang cosmology, albeit not one specific to LCDM …” Conjecture: LCDM needs to be replaced by LMI (“MI” for “MOND inertia”). All of LCDM’s successes can be explained in terms of LMI — but many (perhaps all) of LCDM’s problems can be resolved in terms of LMI. LMI needs string theory with the hypothesis that string vibrations have an approximate lattice structure involving MOND inertia & inflaton inertia.
“… it is always possible to fit a dark matter model to data once presented with the data …” Inertial forces from alternate universes might create incredibly numerous and incredibly complicated splashes of dark matter halos with primary halos, secondary halos, tertiary halos, and so on. With 7 or 8 levels of dark matter halos, computer simulations might approximate MOND’s successes in the MOND regime fairly well. MOND inertia and inflaton inertia might provide only 2 opportunities for fudge factors instead of infinitely many opportunities for fudge factors. I am amazed that Professor Milgrom has not yet won the Gruber Prize in Cosmology.
Prizes are intended to award achievement, but in practice that achievement must not only be meritorious, it also has to be widely recognized as such. A lot of meritorious work goes unrecognized; scientific prizes are markers of those that are most popular.
Very informative article, thank you. (There is a recurring typo: the lower bound is Ωbh2 = 0.009, not 0.09.)
Thanks. Too many zeros to keep track of: the baryon density is very low!
Regarding “because the force law changes on the relevant scales.”
MOND’s success at the galaxy scale shows two critical points:
1. That dark matter is superfluous at the galaxy scale because baryonic mass is enough to account for galaxies’ rotational speed. But if there’s no dark matter in galaxies, then any dark matter in larger structures containing galaxies should be well outside galaxies’ outer boundaries, which is obviously meaningless, as baryonic mass is supposed to cluster around dark matter. So, no dark matter in galaxies implies no dark matter in larger structures containing galaxies.
2. MOND also shows that the effective gravitational potential is scale-dependent, and any “missing mass” anomalies are the result of using a scale-unaware effective gravitational potential. The persistent mass discrepancies at the cluster scale are the result of not having the right description for the effective potential at that scale. Hoping that MOND may work at the cluster scale too is assuming that the effective gravitational potential at the galaxy scale is the same at the cluster scale. At the cluster scale, a different effective gravitational potential is needed.
Note that scale-aware effective potentials are the norm in quantum many-body systems modeling, but nobody has postulated “dark electrons” when the used framework fails to match empirical data. When that happens, the effective potential description is changed accordingly.
Perhaps, but we shouldn’t go around making up effective force laws willy-nilly. The [MOND] residual cluster discrepancy looks like MOND with a shift, so it really isn’t obvious if that shift is some new scale (e.g., potential well depth as envisioned in eMOND) or simply unaccounted [baryonic] mass. We’ve failed to account for all the baryons before; it could happen again.
A suggestion of this missing baryonic mass is given here: https://continentalhotspot.com/2024/08/20/27-what-is-the-mond-cluster-conundrum/
Good point – also that some of the discrepancy may be due to non-equilibrium/thermal effects.
Although it may be missing baryons, if it was indeed MOND with a shift in clusters – whatever the underlying cause – could that remove the need for DM in clusters? (putting to one side the issues relating to cluster collision data, of which there are different views).
Sure. The issue here is what causes the shift: traditional dark matter, unseen baryons, or some new aspect of the dynamics (e.g., eMOND or AeST). Any of those are logical possibilities.
“… I don’t know. … * Physics beyond standard BBN …” Is MOND inertia a concept that can be pitched to string theorists as part of supersymmetric inertia? Witten raises the questions, “What caused cosmic inflation? And what, exactly, are dark matter and dark energy?” Witten, Edward. “Universe on a String.” Astronomy 30, (2002): pages 42–47 https://www.ias.edu/sites/default/files/sns/string(3).pdf
Many of the string theorists are deeply committed to supersymmetry.
Replace LCDM by LMI. Explain the dark matter phenomenon by using MOND inertia related to stringy vibrations on an approximate lattice supporting the standard particles & their superpartners. Explain the dark energy phenomenon by using inflaton inertia. Explain away any remaining discrepancies by using inflatino inertia as a fudge factor. How plausible is the preceding scenario?
“The Inflatino Problem in Supergravity Inflationary Models” by Hans Peter Nilles, Keith A. Olive, & Marco Peloso, 2001 https://arxiv.org/abs/hep-ph/0107212
I stopped paying attention to string theory in the ’80s. It is clear that I haven’t missed much.
On the other hand, serious theorists, string or otherwise, are obliged to pay attention to data. They don’t seem to be – and when they do, they typically reprocess it into something that is tenable in their own paradigm before engaging with it.
For example, a couple of recent papers from simulators appeared attempting to explain the diversity of rotation curves. That’s fine – as far as it goes. But they don’t even address the crux of the problem, which is that the distribution of the baryons is predictive of the total gravitational potential. In rephrasing the issue in terms they can grapple with, they discard essential parts of the phenomena, and succeed in completely explaining an irrelevant subset of it.
“I’ll close by noting that Ωbh2 from this D/H measurement is indeed in very good agreement with the best-fit Planck CMB value. The question remains whether the physics assumed by that fit, baryons+non-baryonic cold dark mater+dark energy in a strictly FLRW cosmology, is the correct assumption to make.”
That this question keeps coming up in your analysis is very interesting, and I would think is motivation for a “hybrid theory”. I assume what is meant by hybrid theory is MOND with LCDM.
In your last post you commented that you were not yet comfortable with any hybrid theory approach because it seemed like mixing geocentric and Copernican models. In other words, does the universe have a center?
When talking about the observable universe, it seems obvious that an observer is at the center, but also obvious that the observer must be at the boundary. Both can be true in a complementary relationship.
The discomfort can be alleviated somewhat once we understand that we are actually just accounting for perceptions of light. It may be difficult to distinguish multiple physical models that project the same image. In other words, LCDM and MOND may coexist as models describing a subjective experience of the universe, or as Wheeler would put it “The Participatory Universe”.
I think we should be more uncomfortable with the absence of complementarity in the universe. That should make us cringe.
I referred to Tycho’s model as an historical example, not to suggest a modern hybrid would have some sort of spatial center. That’d be silly.
By hybrid, I mean a theory that gives MOND-like behavior via a property of the dark matter. E.g., the dipolar dark matter of Blanchet or the superfluid dark matter of Khoury. The hypothesized answer is “dark matter” but not just non-baryonic dark matter, some new entity that reproduces all of the phenomenology by construction. While I am not a big fan of this approach philosophically, it does at least address the observational prompt in a way that simulations with traditional cold dark matter do not (feedback+magical thinking).
There are other kinds of hybrids, including literal combinations of dark matter and MOND. E.g., the sterile neutrinos+MOND Angus suggested, as pursued by Banik and Kroupa and others (nuHDM). I have mixed reactions to these mixes. On the one hand, it is a logical possibility that one could have new[ish] particles with mass in addition to MOND. But that seems doubly unlikely. Worse, one can’t just graft MOND onto LCDM; cosmology can’t be exactly the same with MOND happening here and there willy-nilly.
Of course, other theories are possible, including some one might describe as hybrids.
If you have a medium that is by definition collisionless (frictionless) and undetectable directly, it’s interesting if it can also closely mimic curvature, just by being emitted by a mass and dissipating (so that the transmission speed matches the expression that gives the gravitational redshift). If there’s also mathematical evidence for its existence, that helps. And to boost gravity as in MOND, all it has to do is start dissipating faster at the MOND radius for some reason. That makes for a hybrid theory with some of the mathematics in place – a set of gravity equations. At larger scales it makes DM. (I’ve rewritten the paper to put it in context – Eddington’s refractive medium gravity extended to matter as well as light, by a few lateral assumptions about matter at a small scale; it’ll be up at that link today.) It’s a radical approach, but some aspects of the puzzle we’re looking at suggest there are missing conceptual pieces, which means we might need a few more radical approaches. Incidentally, all I meant in the comment about outsiders was that a physicist should be judged by their work alone, and many people think that nowadays.
Sorry, that was written hurriedly while packing to travel. What I meant was that because it’s a very all-purpose medium, which can potentially explain a lot – making for a hybrid theory that arises naturally (unlike the geocentric/heliocentric compromise you mentioned) – the simple mathematics supporting it could be of interest. The new paper, which arose from the discussion here (GP-B etc.), is still about ‘evidence supporting an explanation for the connection between DM and visible matter’, but I’ve rewritten it to put it in context. The introduction has the PSG equation for the geodetic effect, angle per orbit, approximated in GR by 360GM/rc^2, so you have two equations that agree to 14 decimal places, and it’s pointed out how this narrows down the possibilities for one of them. https://gwwsdk1.wixsite.com/link/newpreprint-pdf . In the other bit of mathematics, you take two random points on any orbit around any spherical mass and find they’re connected via the law of refraction, to 16 decimal places (very small-scale physics, so it can be very accurate). This directly supports the existence of a medium that potentially explains several things, in a way that’s economical in assumptions.
A well-motivated, natural hybrid would be welcome.
Thanks, I’d say it’s a natural hybrid. I guess by well-motivated you mean that the theory is realistic (not as in realism, just as in ‘living in the real world’), and that the concepts fit with what we have already – observations, existing theory. Well, I hope so. In my view the theory is good in places, incomplete in others. But the concepts seem to me to work well; that was the starting point, and I’ve believed since the ’90s that it needs to be. I think with the right picture, good mathematics will follow sooner or later – some has, but it’s incomplete. I’ve been doing my best with it for ages, and frequently reaching out for help. The picture itself is also incomplete in places; it went into a lot of different areas, I couldn’t follow all of them, and in some it only fits loosely so far.
No one has said the main bit of mathematics is wrong – a few have said it works; most say nothing. Several journals have assured me they’ll assess it, as it’s what that paper is about, but they end up saying nothing instead. I think it’s a bit like your experience, trying to tell people something they don’t want to know, even if you can show it to be true. You may occasionally feel it’s a bit like howling at the moon – well, I have that in spades…
I’m not sure I understood it – what did you mean by well-motivated? I know a lot of the theory is, in quite a few areas.
When I found the basic idea, I thought everyone would be very nice to me. I imagined some schoolmasterly old fellow banging me on the back and saying ‘well done!’ So I worked very hard on it, and a well-known physicist said to me: there’s only one thing that talks in physics, nothing else cuts any ice at all – mathematics. Show them some mathematics and then they’ll listen. So I looked for years for a way to prove it. Since then everyone has been just the opposite. Btw, I didn’t know what you meant – I still don’t – but the other thing they do when you show it to them, as well as saying nothing, is make excuses for not looking into it.
We should be working together, with different people in different roles. Conceptual thinkers like me do one side, mathematics people do another, then there are observers, experimenters, and others – all working together. We could get somewhere, instead of going round in circles.
You’ve used up the depth of replies, so I can’t reply directly to you, but apparently I can still reply to myself.
By well-motivated, I mean some compelling reason, like the inverse square law – diminution with distance by that amount being dictated by geometry.
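To spell out the geometric part with the standard textbook sketch (nothing specific to the theory under discussion): the field lines emanating from a point mass M spread over a sphere of surface area 4πr², so the field strength has to dilute as

g(r) = GM/r^2

– the exponent of 2 is dictated by the geometry of three-dimensional space, not put in by hand to fit data. That is the kind of compulsion I mean.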
Well, I’m sorry – I guess I initially understood the kind of thing you meant after all. To me there are a number of compelling reasons; the inverse square law is one of them. It might not be called geometry – it’s more just a known mathematical structure, a rate of change. The strength of the force of gravity (also the acceleration) is set by a rate of change in PSG, which relates to the ‘density’ of the medium: it’s the rate of change with radius of the transmission speed of space. In a graded refractive medium that speed changes locally in the field. And like a lot of rates of change with radius, it’s an expression with r^2 underneath it, and it turns out to be closely proportional to 1/r^2. That’s why I thought a change to the dissipation pattern when the waves get weakened to a certain point (probably from self-interaction, I think) changes the derivative, and so the effective force law.
I did not imply that your idea wasn’t well-motivated. I made no value judgement at all. It takes some time for other people to wrap their heads around things that are new to them even if they are familiar to you.
Yes, I understand that. It might be thought of as geometry, in the sense that it’s about the shape of the cloud – the way the cloud thins out.
I’d be interested in whether you believe the lithium problem can be resolved purely by stellar depletion, which seems to have gained some observational support (in part because the claimed detections of Li-6 in stellar atmospheres are now on much shakier ground) – e.g., https://www.epj-conferences.org/articles/epjconf/abs/2024/07/epjconf_isna2023_01007/epjconf_isna2023_01007.html and https://ui.adsabs.harvard.edu/abs/2022JCAP...10..078F/abstract.
Whatever the answer to this question turns out to be, it feels like far too few people are working on it.
Maybe. The observation of the same lithium abundance in the ISM of the SMC seems to argue against stellar-physics solutions. The metallicity of the SMC, though low, is higher than that of the stars of the Spite plateau, so maybe? It seems like threading a needle.
Hi Stacy, apologies for posting off-topic here — I couldn’t find another open comment thread on Triton Station.
I’ve been independently developing a geometric model that predicts galaxy rotation curves with no dark matter or tuning – just a breathing curvature field in GR. It replicates the SPARC curves (100% pass rate), as well as the Milky Way and M31. The model also has predictive power – it even predicts the high-z “Little Red Dots” velocity structure seen by JWST.
While working through the SPARC dataset we noticed a growing trend in the residuals. We didn’t see them as errors, more as a chance of understanding each galaxy’s environmental interference from companion galaxies or membership in clusters. We think we have been reading harmonic waves disrupting spacetime in galaxies with warped rotation curves, like UGC 06787. Understanding this, and seeing the individual waves from its companions, enabled us to build a model that reads spacetime tension like a cosmic seismograph and passes the most testing fits in the database.
I’d love to share these graphs with you if you’d be open to checking this out for me!
That sounds interesting, but first a word of caution: some data in SPARC are more equal than other data. Which is to say, some of the galaxies are rather dodgy and are probably wrong at some level, so you shouldn’t have a 100% pass rate.
Thanks for the swift and honest reply, Stacy — really appreciated.
Just to clarify what we do at DCT: we don’t aim to predict rotation curves in the traditional sense. Our approach is to reverse-engineer what spacetime curvature must be doing in order to produce the observed anomalous stellar velocities — no dark matter or parameter tuning required.
We build on GR by adding a geometric tension model:
gravity = curvature = tension.
From this, we derive breathing field equations that replicate velocity boosts naturally.
We did notice potential duplicate entries in the SPARC dataset — perhaps the same galaxies catalogued by different teams — and a few galaxies with erratic early rise points (up-down-up-down behavior). These stood out from the harmonic frequency patterns we’ve been seeing in the residuals. Some of those erratic segments may simply reflect observational noise or structures we haven’t yet tuned the model to detect. Once those curves stabilize into their harmonic arcs, they fit remarkably well.
We started with two core models: a 4-parameter and a 7-parameter version. These alone gave us a 91% pass rate – no tuning, no particles – and helped us uncover deeper structure in the data. It led to our latest development: a geometric seismograph model that seems to track spacetime tension and even environmental harmonic influence from companion galaxies – and it was this model that ‘completed’ the rest.
If you’re open to it, I’d be stoked to share a few graphs and fits with you. I genuinely think we may be onto something that deserves scrutiny — and it would be an honour to hear your thoughts on it.
The up-down-up behavior is often real – it is the signature of the transition between a compact central bulge and the stellar disk. NGC 6946 and UGC 2885 are two galaxies that I can think of offhand that do this. There are others; you can see the importance of the bulge component in Vbulge (zero for most but not all SPARC galaxies).
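For concreteness, and schematically (this is the standard convention of the SPARC mass models, not anything specific to your model): the baryonic components are combined in quadrature,

V_tot^2 = V_gas^2 + Υ_disk V_disk^2 + Υ_bul V_bul^2,

where the Υ are the stellar mass-to-light ratios of the disk and bulge. A nonzero V_bulge is what imprints the up-down-up shape on the inner rotation curve, so it should be treated as a real feature rather than noise.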
Would you mind having a look at a few of our fits if I sent them to your email?
I think around 50–60% were all passed with our 4-parameter model (mainly all the ones with perfect or near-perfect arcs), which is our Gaussian shell model. Interestingly, though, when we checked, the overwhelming majority were isolated galaxies. All the warped ones were in clusters or had a dominant companion – the statistics jumped out at us. We think someone with intimate knowledge of SPARC needs to check this out if possible. Even if you just put us onto someone, I’d be massively grateful.