My colleague Jim Schombert pointed out a nifty new result published in Nature Astronomy, which you probably can’t access, so here is a link to what looks to be the preliminary version. The authors use the Deep Synoptic Array (DSA) to discover some new Fast Radio Bursts (FRBs), many of which are apparently in galaxies at large enough distances to provide an interesting probe of the intervening intergalactic medium (IGM).
There is lots that’s new and cool here. The DSA-110 is able to localize FRBs well enough to figure out where they are, which is an interesting challenge and impressive technological accomplishment. FRBs themselves remain something of a mystery. They are observed as short (typically millisecond), high-intensity pulses of very low frequency radio emission, typically 1,400 MHz or less. What causes these pulses isn’t entirely clear, but they might be produced in the absurdly intense magnetic fields around some neutron stars.
FRBs are intrinsically luminous – lots of energy packed into a short burst – so they can be detected from cosmological distances. The trick is to find them (blink and you miss it!) and also to localize them on the sky. At these frequencies it is challenging to pin down positions well enough to uniquely associate them with optical sources like candidate host galaxies. To quote from their website, “DSA-110 is a radio interferometer purpose-built for fast radio burst (FRB) detection and direct localization.” It was literally made to do this.
Connor et al. analyze dozens of known FRBs and report nine new ones, covering enough of the sky to probe an interesting cosmological volume. Host galaxies with known redshifts define a web of pencil-beam probes – the paths that the radio waves have to traverse to get here. Low frequency radio waves are incredibly useful as a probe of the intervening space because they are sensitive to the density of intervening electrons, providing a measure of how many there are between us and each FRB.
Most of intergalactic space is so empty that the average density of matter is orders of magnitude lower than the best vacuum we can achieve in the laboratory. But there is some matter there, and of course intergalactic space is huge, so even low densities might add up to a lot. This provides a good way to find out how much.
The speed of light is the ultimate speed limit – in a vacuum. When propagating through a medium like glass or water, the effective speed of light is reduced by the index of refraction. For low frequency radio waves, even the exceedingly sparse free electrons of the IGM suffice to slow them down a bit. This effect, quantified by the dispersion measure, is frequency dependent: lower frequencies arrive later. It usually comes up in the context of pulsars, whose pulses are spread out by the effect, but it works for any radio source observable at appropriate frequencies, like FRBs. The dispersion measure tells us the product of the density and the distance traversed along the line of sight to the source, so it is usually expressed in typical obscure astronomical fashion as pc cm^-3. This is really a column density, the number per square cm, but with host galaxies of known redshift the distance is known independently, so we get a measure of the average electron volume density along the line of sight.
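For the numerically inclined, the scaling is easy to play with. Here is a sketch of my own, using the standard cold-plasma dispersion constant of roughly 4.15 ms GHz^2 per pc cm^-3; the DM value is made up purely for illustration:

```python
# Back-of-the-envelope dispersion arithmetic (a sketch, not the paper's code).
# Standard cold-plasma delay: dt = K_DM * DM / nu^2.

K_DM = 4.149  # ms GHz^2 per (pc cm^-3), the standard dispersion constant

def dispersion_delay_ms(dm_pc_cm3, freq_ghz):
    """Arrival-time delay (ms) relative to an infinite-frequency signal."""
    return K_DM * dm_pc_cm3 / freq_ghz**2

def mean_electron_density(dm_pc_cm3, distance_mpc):
    """Average n_e (cm^-3) along the line of sight: DM divided by path length."""
    distance_pc = distance_mpc * 1e6
    return dm_pc_cm3 / distance_pc

# A hypothetical FRB with DM = 500 pc cm^-3 observed at 1.4 GHz:
dm = 500.0
print(dispersion_delay_ms(dm, 1.4))       # ~1 second of delay
print(mean_electron_density(dm, 1000.0))  # ~5e-7 cm^-3 averaged over a Gpc
```

The 1/nu^2 dependence is why the effect is so prominent at radio frequencies and negligible in the optical.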
That’s it. That by itself provides a good measure of the density of intergalactic matter. The IGM is highly ionized, with a neutral fraction < 10^-4, so counting electrons is the same as counting atoms. (Not every nucleus is hydrogen, so they adopt 0.875 electrons per baryon to account for the neutrons in helium and heavier elements. We know the neutral fraction is low in the IGM because hydrogen is incredibly opaque to ultraviolet radiation: absorption would easily be seen, yet there is no Gunn-Peterson trough until z > 6.) This leads to a baryon density of Ω_b h^2 = 0.025 ± 0.003, which is 5% of the critical density for a reasonable Hubble parameter of h = 0.7.
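The arithmetic behind that last sentence is simple enough to check in a few lines. This is my own back-of-the-envelope, using standard constants: Ω_b = 0.025/h^2, and the critical density is ρ_crit = 3H0^2/(8πG).

```python
import math

# Checking the quoted numbers (my arithmetic, not the paper's):
G = 6.674e-8                    # cm^3 g^-1 s^-2
h = 0.7
H0 = h * 100 * 1e5 / 3.086e24   # convert 70 km/s/Mpc to s^-1

rho_crit = 3 * H0**2 / (8 * math.pi * G)  # critical density, g cm^-3
omega_b = 0.025 / h**2                    # baryon density parameter

print(omega_b)             # ~0.051, i.e. ~5% of critical
print(omega_b * rho_crit)  # mean baryon density, ~5e-31 g cm^-3
```

That last number is the "few × 10^-31 g cm^-3" quoted below for the IGM.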
This solves the cosmic missing baryon problem. There had been an order of magnitude discrepancy when most of the baryons we knew about were in stars. It gradually became clear that many of the baryons were in various forms of tenuous plasma in the space between galaxies, for example in the Lyman alpha forest, but these didn’t account for everything, so a decade ago a third of the baryons expected from BBN were still unaccounted for in the overall baryon budget. Now that checksum is complete. Indeed, if anything, we now have a small (if not statistically significant) baryon surplus+.
Here is a graphic representing the distribution of baryons among the various reservoirs. Connor et al. find that the fraction in the intergalactic medium is f_IGM = 0.76 +0.10/-0.11. Three quarters of the baryons are Out There, spread incredibly thin throughout the vastness of cosmic space, with an absolute density of a few × 10^-31 g cm^-3, which is about one atom per cubic meter. Most of the atoms are hydrogen, so “normal” for most of the universe is one proton and one electron in a box a meter across rather than the 10^-10 m occupied by a bound hydrogen atom. That’s a whole lot of empty.
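As an order-of-magnitude sanity check (mine, assuming pure hydrogen; the true mean includes a bit of helium), converting that mass density into atoms per cubic meter:

```python
# Order-of-magnitude check of "one atom per cubic meter" (my conversion).
m_H = 1.67e-24      # g, mass of a hydrogen atom
rho_igm = 4.7e-31   # g cm^-3, roughly the mean baryon density quoted above

atoms_per_cm3 = rho_igm / m_H       # ~3e-7 atoms per cm^3
atoms_per_m3 = atoms_per_cm3 * 1e6  # a few tenths of an atom per m^3
print(atoms_per_m3)
```

The answer comes out to a few tenths of an atom per cubic meter, i.e., of order one: a whole lot of empty indeed.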

The other reservoirs of baryons pale in comparison to the IGM. Most are still in some form of diffuse space plasma, like the intracluster media of clusters and groups of galaxies, or associated with but not in individual galaxies (the circumgalactic medium). These distinctions are a bit fuzzy, as are the uncertainties on each component, especially the CGM (f_CGM = 0.08 +0.07/-0.06). This leaves some room for a lower overall baryon density, but not much.
Connor et al. get some constraint on the CGM by looking at the increase in the dispersion measure for FRBs with sight-lines that pass close to intervening galaxies vs. those that don’t. This shows that there does seem to be some extra gas associated with such galaxies, but not enough to account for all the baryons that should be associated with their dark matter halos. So the object-by-object checksum of how the baryons are partitioned remains problematic, and I hope to have more to say about it in the near future. Connor et al. argue that some of the baryons have to have been blown entirely out of their original dark matter halos by feedback; they can’t all be lurking there or there would be less dispersion measure from the general IGM between us and relatively nearby galaxies where there is no intervening CGM*.
The baryonic content of visible galaxies – the building blocks of the universe that most readily meet the eye – is less than 10% of the total baryon density. Most of that is in stars and their remnants, which contain about 5% of the baryons, give or take a few percent stemming from the uncertainty in the stellar initial mass function. The cold gas – both the neutral atomic gas and the denser molecular gas from which stars form – adds up to only about 1% of all baryons. What we see most readily is only a fraction of what’s out there, even when restricting our consideration to normal matter: mostly the baryons are in the IGM. Mostly.
The new baryon inventory is now in good agreement with big bang nucleosynthesis: Ω_b h^2 = 0.025 ± 0.003 is consistent with Ω_b h^2 = 0.0224 ± 0.0001 from Planck CMB fits. It is more consistent with this and the higher baryon density favored by deuterium than with lithium, but isn’t accurate enough to exclude the latter. Irrespective of this important detail, I feel better that the third of the baryons that used to be missing (or perhaps not there at all) are now accounted for. The agreement of the baryon inventory checksum with the baryon density required by BBN is an encouraging success of this deeply fundamental aspect of hot big bang cosmology.
+Looking at their equation 2, there is some degeneracy between the baryon density Ω_b and the fraction of ionized baryons Out There. A lower Ω_b would mean a higher baryon fraction in the diffuse ionized state. That fraction is already large, so there is only a little room to trade off between the two.
*What counts as CGM is a bit dicey. Putting on a cosmology hat, the definition Connor et al. adopt involving a range of masses of dark matter halos appropriate for individual galaxies is a reasonable one, and it makes sense to talk about the baryon fraction of those objects relative to the cosmic value, of which they fall short (f_gas = 0.35 +0.30/-0.25 in individual galaxies where f_* < 0.35: these don’t add up to unity). Switching to MOND, the notional association of the CGM with the virial radii of host dark matter halos is meaningless, so it doesn’t matter if the gas in the vicinity of galaxies was once part of them and got blown out or simply never accreted in the first place. In LCDM we require at least some blow out to explain the sub-cosmic baryon fractions, while in MOND I’m inclined to suspect that the dominant process is non-accretion due to inefficient galaxy formation. Of course, the universe may indulge in a mix of both physical effects, in either paradigm!
%Unlike in FLRW cosmology, in MOND there is no special scale defined by the critical density; a universe experiencing the MOND force law will ultimately recollapse whatever its density, at least in the absence of something that acts like anti-gravity (i.e., dark energy). In retrospect, this is a more satisfactory solution of the flatness problem than inflation, as there is nothing surprising about the observed density being what it is. There is no worry about it being close to but not quite equal to the critical density, since the critical density is no longer a special scale.
Really exciting times — and kind of wild to think the missing baryons might not be missing after all.
But with most of the baryons now potentially accounted for, does that change how we interpret the remaining anomalies? Things like the RAR curve, the emergence (or absence) of bars in different galaxy types, or the persistent diversity in rotation profiles — they still beg for an explanation that doesn’t seem to track directly with baryonic mass alone.
I’ve been working on a more geometric perspective — not replacing baryons, but looking at how curvature tension or stored spacetime structure might influence dynamics. Especially in cases where galaxies with nearly identical baryonic content behave dramatically differently.
Just curious — do you think finally pinning down the baryon census opens the door for explanations rooted more in geometry than in matter content? We’ve been exploring something recently that seems to connect RAR, bar structure, and diversity under one framework… and I wonder if these “found” baryons might actually help make sense of the leftover gravitational weirdness?
Accounting for all the baryons has no bearing on the other anomalies you mention.
Dear Stacy,
Thanks again for an excellent post. The result you are reviewing is remarkably ingenious and goes a long way towards clearing up one of the nagging loose ends in cosmology. The solution in this case turned out to be fairly garden variety! I am sure that some of the other remaining loose ends will require substantially more radical solutions, like modified gravity.
I just want to comment on a common misconception which appears in the last of your asterisks. It turns out that in a MOND cosmology the ultimate fate of the universe is not a re-collapse, as one would naively expect, but actually accelerated expansion. Although at galactic scales MOND appears as extra attractive gravity, and cosmological expansion is often described as requiring some sort of anti-gravity or repulsive force, within a covariant framing of the problem you actually get accelerated expansion for a MOND universe. The reason is that the symmetry of the problem in an FLRW scenario is exactly the reverse of what you have at galactic scales.
At galactic scales, the metric is to first approximation spherically symmetric and static: there is no explicit time dependence of the potential, time does not appear in the metric, and the main factor is the distance, r, to the clearly defined centre of the problem. Extra gravity, MOND, implies more attraction towards the centre. In an FLRW scenario, on the other hand, the problem is homogeneous and isotropic, all places are identical, and hence distance, r, plays no role. The solution is inherently dynamic, and time becomes the dominant factor, through the a(t) factor. Extra gravity, from the point of view of any observer, implies more pull outwards by the infinite amount of matter beyond any given distance, and hence an accelerated expansion.
You can check the details in e.g. Hernandez et al. (2019), MNRAS, 483, 147 (https://academic.oup.com/mnras/article/483/1/147/5173107?login=true), where we wrote MOND at the galactic level in a fully covariant formulation in terms of the curvature scalars of the problem. Then, introducing the same covariant geometric restriction into an FLRW metric, one obtains for free an asymptotically de Sitter solution which is consistent with LCDM fits to the observed expansion rate. In a way, this is a more detailed statement of the equivalence between a0 and cH0. In other extensions to GR, e.g. AeST, you see the same consistency: a model which under spherically symmetric and static metrics yields MOND is, in FLRW, overall consistent with LCDM cosmology. Results in GR schemes are often not obvious from a purely Newtonian perspective.
Best as always,
X.
Intriguing. It would be most satisfactory if the excess acceleration in bound objects and that of cosmic expansion were two sides of the same coin.
I can’t entirely follow the argument that attraction in a homogeneously distributed expanding density leads to acceleration of the expansion. Is it because at a position x at radius r between a radius r- and r+ the amount of mass near x in the direction of r+ is slightly larger than in the direction of r-?
Thanks for this post. An answer to an old question! Are there serious reasons for questioning other accepted answers?
The impact of early massive galaxy formation on the cosmic microwave background — Gjergo, Kroupa https://ui.adsabs.harvard.edu/abs/2025NuPhB101716931G/abstract
https://arxiv.org/abs/2505.04687
Yes and no. I myself see no reason to doubt the cosmic origin of the CMB. I do see reason to doubt its interpretation in LCDM. It is conceivable that the dust in early galaxies discussed in the paper you cite is enough to muddy the interpretation by being an unaccounted foreground source. I don’t think it is energetically reasonable to create the entirety of the CMB that way.
Another systematic of concern to me is the lensing of the CMB. The best-fit LCDM parameters depend on getting that right, but the lensing potentials assume the expected linear growth, which is repeatedly not what we see. If large masses like clusters of galaxies are in place earlier than they should be, then they’ll be a source of unaccounted smearing that will systematically alter the fit.
I also worry about the large angular scale optical depth: extra optical depth is expected if structure forms early, and this will also perturb the best fit. There is already a challenge in reconciling the number of UV photons allowed by the Planck best fit with the number directly observed by JWST at z > 10.
Those are physical effects that happen after the CMB that may have an impact on its interpretation. It can also be that the assumptions that go into the CMB fits (GR+CDM+DE) are inadequate, and the add-on features like dark matter are just there to approximate some deeper underlying theory.
Does this discovery have any implications for the ability of undetected baryons to help with MOND’s problems in galaxy clusters?
In principle, if we have identified all of the baryons, then there are none left over to be the missing baryons in clusters. However, each of the baryon reservoirs has some uncertainty in its size. There is enough slop in there to still allow for enough extra baryons to be in clusters. They can’t be in the hot plasma, but we already knew that.
“It would be most satisfactory if the excess acceleration in bound objects and that of cosmic expansion were two sides of the same coin.” What do string theorists think about the following? There might now exist overwhelming empirical evidence supporting string theory (with 3 new hypotheses). Milgrom’s Modified Newtonian Dynamics (MOND) makes many predictions that are approximately correct. There might be a form of inertia (MOND inertia) which is empirically significant in terms of inertial influences, but empirically insignificant in terms of mass-energy. The dark matter phenomenon might be partially, or wholly, explained by MOND inertia.
Assume that gravitational energy is conserved, all gravitons have spin 2, & supersymmetry (SUSY) occurs in nature. Hypothesis 1. There is an equation that relates virtual inflatons to virtual inflatinos in the quantum vacuum. Hypothesis 2. There are 2 fundamental forms of inertia: Newton-Einstein inertia & SUSY inertia. SUSY inertia has 2 components: inflaton inertia & inflatino inertia. Inflaton inertia explains the dark energy phenomenon. Inflatino inertia explains MOND inertia & the relativistic version of MOND inertia — MOND inertia explains the dark matter phenomenon & there are no dark matter particles.
Hypothesis 3. String vibrations are confined to an approximate lattice structure. Virtual inflatons destabilize the approximate lattice structure & allow new big bangs to occur. Virtual inflatinos stabilize the approximate lattice structure & inhibit the formation of new big bangs.
Is the preceding speculation wrong? Please google “milgrom mond inertia” & “pavel kroupa dark matter”.
Is there any sign of higher temperature of particles in the IGM depending on redshift like assumed in https://academic.oup.com/mnras/article/478/1/283/4975800 ? I found that study earlier in my life to be just what I wanted, having started an ideological war with the BB (don’t worry, I’ve recently found peace with it), so I wonder if Vavrycuk’s analysis stands up to the new facts. It crossed my mind after seeing that paper from Kroupa above.
By the way, would his idea of opacity of the universe increasing with redshift change anything for the Tolman test?
The gas in the IGM was reheated at some early time and stayed hot, but I don’t think that’s what Vavrycuk is talking about; he’s talking about dust. Dust may behave that way, but I don’t think there is anywhere near as much as he’s talking about. If, as he says, the universe “becomes considerably opaque at redshifts z > 2–3” then we wouldn’t see galaxies at z = 10.
If there is a lot of dust in the IGM, then it will act as a screen and make everything lower surface brightness than expected from geometric effects. That’d certainly mess with the Tolman test. The problem we have with that, if anything, is that sources at high redshift appear compact (assuming the LCDM metric) so their surface brightnesses are, if anything, too high, not too low.
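For reference, the geometric effect at issue is easy to quantify. In any expanding FLRW universe, bolometric surface brightness dims as (1+z)^4 – the Tolman test – which in magnitudes is 10 log10(1+z). Intergalactic dust would add extinction on top of this. A quick sketch of my own (not from Vavrycuk’s paper):

```python
import math

# Tolman surface-brightness dimming: SB falls as (1+z)^-4,
# i.e. 2.5 * log10((1+z)^4) = 10 * log10(1+z) magnitudes.

def tolman_dimming_mag(z):
    """Geometric surface-brightness dimming in magnitudes at redshift z."""
    return 10.0 * math.log10(1.0 + z)

for z in (1, 3, 10):
    print(z, round(tolman_dimming_mag(z), 2))  # 3.01, 6.02, 10.41 mag
```

Six magnitudes of purely geometric dimming by z = 3 is already a factor of ~250, which is why compact, high surface brightness sources at high redshift are the opposite of what dust obscuration would predict.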
Is there enough IGM in clusters such that the baryons within it would help explain the discrepancy for clusters? And the larger scatter compared to galaxies (if, say, the baryon density in the IGM varies by proximity of galaxies within a cluster and that proximity differs widely across clusters)?
I wonder if this FRB technique could be used to measure density in between galaxies in clusters… possibly that could address these questions.
So first, let me make an annoying distinction: the IGM is the material outside of both galaxies and clusters; the plasma between galaxies in clusters is the ICM. People have measured the ICM in clusters, and it is pretty well constrained. Whatever the residual discrepancy is for clusters in MOND, I don’t think it can be the ICM itself; it needs to be some extra baryonic component that is yet to be identified. There is enough uncertainty in each of the known baryonic reservoirs to admit this possibility, but it would have to be a new reservoir, like a whole boatload of brown dwarfs. I don’t much like that idea, but we do not know all the slices of the baryonic pie well enough to exclude it.
As for the density between galaxies in clusters, yes, in principle FRBs could be used to do that, but you’d need to know which galaxies were on the near and far side of each cluster, which is tricky. For the hot ICM in clusters, X-ray observations already measure the same thing, so I doubt FRBs can help in this particular way.
Ah, got it — thanks for that clarification. That makes sense, and it certainly adds to the mystery of the cluster discrepancy.
Appreciate the helpful reply!
Hi Stacy. Thank you for this post on “The baryons are mostly in the intergalactic medium”.
Connor et al have done an excellent job in working out that 75% of the baryons lie in the intergalactic medium (IGM). We now know where the baryons lie. Do we know where dark matter lies, or is there now a dark matter problem? Connor et al comment that the baryons do not trace dark matter; if dark matter is not attracting all the baryons, then does this constitute a cosmological problem? (Confusingly, in their paper DM refers to dispersion measure and not dark matter!)
Yes, DM here is different from the DM we usually discuss. Welcome to the nightmare that is astronomical terminology.
I refrained from commenting on where the dark matter is because we don’t really know that independently from some fairly strong assumptions. Connor et al. do not hesitate to adopt the obvious LCDM assumption, and base their statement on integrating the notional dark matter halo mass function. Indeed, this is how they operationally distinguish the ICM from the IGroupM from the CGM, by assigning each a paradigm-appropriate range of halo masses. When they do this, they find that bound DM halos, from those containing individual galaxies up through clusters, contain 50% of the total dark matter. So in their analysis, the dark matter is split 50/50 between bound halos and the IGM.
In contrast, the baryons are split 24/76 between galactic objects and the IGM, hence their conclusion that baryons don’t trace dark matter. They would have started out that way, so the implication is that something acted to segregate them. Enter the usual suspect, feedback, to drive baryons out of galaxies.
This is where the term DM gets sloppy, even when referring exclusively to unseen mass. There have been at least three missing mass problems in cosmology. The first is what we usually mean: the need for non-baryonic cold dark matter or other new physics. The second is the global missing baryon problem: that the sum of baryons was less than expected from BBN – the problem Connor et al have now laid to rest. The third is the halo-by-halo missing baryon problem.
Each dark matter halo should start as a microcosm of the whole, with its fair share of the cosmic baryon fraction. Yet if one does the checksum on an object by object basis, only the most massive clusters have approximately the right amount of baryons. As one looks to smaller objects, one comes up short. So in an ordinary galaxy like the Milky Way, there are two distinct missing mass problems: the usual cosmological one, plus a missing baryon problem. This is not a subtle problem: the detected baryons in the MW are maybe a third of what they should be, heavily dependent on how big our DM halo actually is, which is hard to really know. The problem gets progressively more severe for smaller galaxies, so while we might talk our way out of it for bright galaxies like the MW, there’s no way to do so for dwarfs.
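To make the checksum concrete, here is a rough worked example with round numbers of my own choosing (a MW-like halo of 10^12 solar masses, a cosmic baryon fraction of 0.16, and typical stellar and cold gas masses), not figures from the post:

```python
# Halo-by-halo baryon checksum for a Milky Way-like galaxy
# (illustrative round numbers only).
f_b = 0.16        # cosmic baryon fraction, roughly Omega_b / Omega_m
M_halo = 1.0e12   # Msun, a typical MW halo mass estimate
M_stars = 5.0e10  # Msun, stars plus remnants
M_gas = 1.0e10    # Msun, cold atomic + molecular gas

expected = f_b * M_halo       # baryons the halo "should" have
detected = M_stars + M_gas    # baryons we actually count
print(detected / expected)    # ~0.4: the halo comes up well short
```

With these numbers the detected fraction is around a third to 40%, and the shortfall is acutely sensitive to the adopted halo mass, which is exactly the hard-to-know quantity.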
Again, this is one of the many “features” feedback is invoked to fix. There are many flavors of feedback; some merely reheat the baryons so they don’t collapse into a readily detected cold form, and they remain mixed in with the DM halo. Others invoke galaxy-scale winds driven by supernovae that entirely expel some of the baryons. Connor et al interpret the mismatch between 50/50 DM and 24/76 baryons to favor the latter interpretation. The baryons started out evenly split, but some got blown out into the IGM.
Nowhere does anyone seem to notice, much less address, that the reason for the halo-by-halo missing baryon problem is that galaxies follow the mass-rotation speed relation predicted by MOND rather than that of CDM. These have different power-law dependencies (M ~ V^4 for MOND but M ~ V^3 for CDM), so our inference of halo mass departs from expectations in LCDM. The two lines have to cross somewhere, and that happens to be around the scale of massive clusters. So those look “right” in LCDM for having their fair share of baryons, but nothing else does.
I believe there is a typo in the sentence “very low frequency radio emission, typically 1,400 hertz or less”. That should be MHz and not Hz, since DSA-110 operates between 1280 – 1530 MHz. Coincidentally, this is near the 21 cm line frequency. Even considering CHIME at 400 – 800 MHz, this is still within the ultra high frequency (UHF) band.
As they say, I’m a “long-time listener, first-time caller”. Keep up your excellent research, and speaking of the 21cm line, my students reference your work for our project where we’ve built a small radio telescope and detect the Galactic rotation curve: https://x.com/dr_grube/status/1770559190223757385
Cool! That’s a great project.
The oral history I heard was that when van de Hulst first built such a thing, he had no idea what to expect for Galactic velocities. He was doing background subtraction by on-off frequency shifts that were too small, so was subtracting signal from signal. After talking to Oort (if I recall correctly) he made a larger shift and immediately detected the expected signal.
Thanks for pointing out the typo; the optical depth of plasma to radio waves got me to thinking about the ionospheric limit (was it 30 MHz or 30 Hz?). It is of course 30 MHz, but I guess that put Hz in my head.
Off topic. At the news conference for Vera Rubin telescope, it was stated that “The observatory’s treasure trove of data will allow astronomers to investigate dark energy, a force pushing the universe to expand ever faster, as well as dark matter…” Is there some definite way that this telescope will provide new information about the dark matter/Mond question?
Hopefully lots of ways. One thing it should do is enable the discovery of lots of low mass, low surface brightness dwarf galaxies in the field, far from the confounding tidal effects of giant galaxies like those of the Local Group. The prediction of MOND and LCDM for such objects should be easy to distinguish.
As for dark energy, the combined depth and sky/temporal coverage should enable the discovery of tens of thousands of supernovae all over the accessible part of the sky. That should provide a good test of the isotropy assumed in the Cosmological Principle: is the expansion of the universe uniform? Or might there be a dipole term such that H0 in one direction differs from that in another? The latter would be inconsistent with standard cosmology and perhaps an indication of the very large scale distortions anticipated by Felten (1984).
Just to clarify an answer I gave when you mentioned the inverse square law, in the Newtonian regime the derivative with radius for the transmission speed is closely approximated by GM/r^2c. At the MOND radius it becomes (GMa0)^(1/2)/rc, which is the change from 1/r^2 to 1/r behaviour. Both expressions are simply multiplied by c for the acceleration – in the helical refraction mechanism, matter detects that rate of change of the background space, and converts it to an acceleration. I’ve added two sections to the paper, same link, one on the inverse square law and the MOND interpretation, the other listing the gravity equations.
I should have said, the second expression is the first by r/rM (rM being the MOND radius (GM/a0)^1/2), just as accelerations in MOND are boosted by r/rM, and the Newtonian field is ‘compressed’ at any point beyond a0 by r/rM, boosting radial rates of change by the same factor.
Do theorists think that an FRB can result from the merger of a dwarf star with an inactive magnetar?
Many ideas have been suggested for what FRBs are. I haven’t heard that particular one, but it comes close to versions I have heard. I don’t think we really know. What we do know is that the source is compact, energetic, and very quick. Those properties imply higher energy densities than can be found in most environments, magnetars being an obvious source candidate.
Unrelated, but increasingly nagging away at me – how many people have you encountered who behave as though the veracity of the facts underlying their opinions don’t matter? I have encountered one individual who explicitly said so, though not in these words; and quite a few who behave so.
And if these are a non-negligible number, how does one deal with them?
That is disturbing. I couldn’t offer a number, but it does seem to be an increasingly common dysfunction.
I suppose that belief in the absence (or contradiction) of evidence is a hallmark of religion.
I recall worrying that belief in dark matter had nearly achieved the status of an unquestionable element of faith already in the ’90s. Certainly it took a profound shock to shake my faith in it. I suppose that for many it has now achieved that status.
I’m not assuming that relates to PSG, but I’ll just point out that without the very strong mathematical support – arguably a near-proof – I certainly wouldn’t mention the way I see the mass discrepancy. That bit of evidence is so simple that it would have been ripped to pieces in 45 seconds if it was flawed. People love mathematics, and they’d sure do that if they could, and quite rightly. But instead it points out a possible way forward (there’s always a little bit of cognitive dissonance, just ignore it), and I hope people will look down that avenue, at a time when a lot of avenues have led nowhere.
“… common dysfunction …” Is rejection of MOND a prevalent dysfunction?
Fact #1. MOND makes many predictions that are (approximately) correct according to the evidence.
Fact #2. General relativity theory (without a MONDian 5th force) implies that every (new, i.e. where Milgrom & Newton disagree) prediction made by MOND is wrong.
Is the alleged Fact #2 merely a dysfunction in my perception?
Yes, rejection of the evidence is a common failure mode. It falls in the category of lying by omission: when a man lies, he murders some part of the world. In this case, omitting MOND from discussion or consideration – which is certainly the common mode in the scientific community at present – murders a part of the world we should be seeking to understand.
https://astroweb.case.edu/ssm/mond/inthon.html
Crazy question here – is there any way this could be evidence that neutrinos decay to protons and electrons?
Neutrinos don’t have enough mass-energy to decay into protons and electrons, but free neutrons do exactly that, plus an antineutrino to boot.
We go by what data or mathematics tells us, but sometimes only if it’s what we want to hear. If you can pick two points on the path of an apple falling from a tree, and show them to be connected via the law of refraction (in exactly the way that all points on the path of a light beam in a layered medium are known to be connected), then mathematics is telling people what they might not want to hear. But it would have been more so in the last century – these days many see GR as a large-scale approximation.
Wow. That’s worth a Nobel prize, surely?
A while ago I suggested that intergalactic space could be full of free neutrons. The experts said, what nonsense, they would decay in 20 minutes. But in fact we don’t know what the decay rate of neutrons is, even on the surface of the Earth, where beam and bottle experiments are inconsistent at the 1% level. So we have no idea, even to the order of magnitude, of neutron lifetimes in intergalactic space. This result says it doesn’t matter if they do decay, they are actually there.
In order for BBN to work, free neutrons in space have to decay at the rate expected from experiment, which is a half-life around ten minutes. It is true that there is a small but significant discrepancy between the experimental measurements you cite, and that matters. But that’s just ten seconds out of ten minutes, and there is a rather larger difference between 10 minutes and a Hubble time. Are you suggesting that free neutrons persist indefinitely in space but decay rapidly on the surface of the Earth? That’d be very important if true, but how could it be? Why would a neutron care if there were a few nitrogen molecules in the vicinity or not?
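To put numbers on that difference (mine, assuming the standard ~611 s free-neutron half-life and a Hubble time of 13.8 Gyr), the surviving fraction is so small it has to be computed in logarithms:

```python
import math

# How many half-lives fit in a Hubble time? The surviving fraction
# 0.5**n underflows any float, so work in log10 instead.

t_half = 611.0               # s, free-neutron half-life (~10 minutes)
t_hubble = 13.8e9 * 3.156e7  # s, ~13.8 Gyr in seconds

n_halvings = t_hubble / t_half
log10_surviving = -n_halvings * math.log10(2.0)

print(n_halvings)       # ~7e14 half-lives
print(log10_surviving)  # ~ -2e14: effectively zero survivors
```

A surviving fraction of 10^(-2 × 10^14) is not small, it is zero for any practical purpose: there is no population of primordial free neutrons left to find.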