OK, basic review is over. Shit’s gonna get real. Here I give a short recounting of the primary reason I came to doubt the dark matter paradigm. This is entirely conventional – my concern about the viability of dark matter is a contradiction within its own context. It had nothing to do with MOND, which I was blissfully ignorant of when I ran head-long into this problem in 1994. Most of the community chooses to remain blissfully ignorant, which I understand: it’s way more comfortable. It is also why the field has remained mired in the ’90s, with all the apparent progress since then being nothing more than the perpetual reinvention of the same square wheel.
To make a completely generic point that does not depend on the specifics of dark matter halo profiles or the details of baryonic assembly, I discuss two basic hypotheses for the distribution of disk galaxy size at a given mass. These broad categories I label SH (Same Halo) and DD (Density begets Density) following McGaugh and de Blok (1998a). In both cases, galaxies of a given baryonic mass are assumed to reside in dark matter halos of a corresponding total mass. Hence, at a given halo mass, the baryonic mass is the same, and variations in galaxy size follow from one of two basic effects:
- SH: variations in size follow from variations in the spin of the parent dark matter halo.
- DD: variations in surface brightness follow from variations in the density of the dark matter halo.
Recall that at a given luminosity, size and surface brightness are not independent, so variation in one corresponds to variation in the other. Consequently, we have two distinct ideas for why galaxies of the same mass vary in size. In SH, the halo may have the same density profile ρ(r), and it is only variations in angular momentum that dictate variations in the disk size. In DD, variations in the surface brightness of the luminous disk are reflections of variations in the density profile ρ(r) of the dark matter halo. In principle, one could have a combination of both effects, but we will keep them separate for this discussion, and note that mixing them defeats the virtues of each without curing their ills.
The SH hypothesis traces back to at least Fall and Efstathiou (1980). The notion is simple: variations in the size of disks correspond to variations in the angular momentum of their host dark matter halos. The mass destined to become a dark matter halo initially expands with the rest of the universe, reaching some maximum radius before collapsing to form a gravitationally bound object. At the point of maximum expansion, the nascent dark matter halos torque one another, inducing a small but non-zero net spin in each, quantified by the dimensionless spin parameter λ (Peebles, 1969). One then imagines that as a disk forms within a dark matter halo, it collapses until it is centrifugally supported: λ → 1 from some initially small value (typically λ ≈ 0.05, Barnes & Efstathiou, 1987, with some modest distribution about this median value). The spin parameter thus determines the collapse factor and the extent of the disk: low spin halos harbor compact, high surface brightness disks while high spin halos produce extended, low surface brightness disks.
The distribution of primordial spins is fairly narrow, and does not correlate with environment (Barnes & Efstathiou, 1987). The narrow distribution was invoked as an explanation for Freeman’s Law: the small variation in spins from halo to halo resulted in a narrow distribution of disk central surface brightness (van der Kruit, 1987). This association, while apparently natural, proved to be incorrect: when one goes through the mathematics to transform spin into scale length, even a narrow distribution of initial spins predicts a broad distribution in surface brightness (Dalcanton, Spergel, & Summers, 1997; McGaugh and de Blok, 1998a). Indeed, it predicts too broad a distribution: to prevent the formation of galaxies much higher in surface brightness than observed, one must invoke a stability criterion (Dalcanton, Spergel, & Summers, 1997; McGaugh and de Blok, 1998a) that precludes the existence of very high surface brightness disks. While it is physically quite reasonable that such a criterion should exist (Ostriker and Peebles, 1973), the observed surface density threshold does not emerge naturally, and must be inserted by hand. It is an auxiliary hypothesis invoked to preserve SH. Once done, size variations and the trend of average size with mass work out in reasonable quantitative detail (e.g., Mo et al., 1998).
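To make the spin-to-surface-brightness mapping concrete, here is a minimal sketch rather than the full calculation of Dalcanton, Spergel, & Summers (1997) or Mo et al. (1998). It assumes the common approximation R_d ≈ (λ/√2) R_vir for a disk of fixed baryonic mass in a halo of fixed virial radius, together with a lognormal spin distribution of median λ ≈ 0.05; the particular numbers (R_vir = 200 kpc, M_b = 5 × 10^10 M_⊙, σ_lnλ = 0.5) are illustrative placeholders, not fits to anything:

```python
import numpy as np

# A minimal sketch of the SH (spin) hypothesis under stated assumptions:
# R_d ~ (lambda / sqrt(2)) * R_vir (the Mo, Mao & White 1998 scaling with
# their correction factors set to unity) and a lognormal spin distribution.
rng = np.random.default_rng(42)

R_vir = 200.0        # kpc, assumed virial radius (fixed halo mass)
M_b   = 5e10         # Msun, assumed (fixed) baryonic disk mass
lam   = rng.lognormal(mean=np.log(0.05), sigma=0.5, size=100_000)

R_d     = (lam / np.sqrt(2.0)) * R_vir        # disk scale length [kpc]
Sigma_0 = M_b / (2.0 * np.pi * R_d**2)        # central surface density of an
                                              # exponential disk [Msun/kpc^2]

# Sigma_0 ~ lambda^-2 at fixed mass, so the log-scatter doubles:
print(f"scatter in log10(lambda) : {np.std(np.log10(lam)):.2f} dex")
print(f"scatter in log10(Sigma_0): {np.std(np.log10(Sigma_0)):.2f} dex")
```

Because Σ_0 ∝ λ^−2 at fixed mass, the ~0.2 dex spread in spin becomes a ~0.4 dex (roughly one magnitude) spread in central surface density, with a bright-end tail that has to be amputated by hand via the stability criterion mentioned above.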
Angular momentum conservation must hold for an isolated galaxy, but the assumption made in SH is stronger: baryons conserve their share of the angular momentum independently of the dark matter. It is considered a virtue that this simple assumption leads to disk sizes that are about right. However, this assumption is not well justified. Baryons and dark matter are free to exchange angular momentum with each other, and are seen to do so in simulations that track both components (e.g., Book et al., 2011; Combes, 2013; Klypin et al., 2002). There is no guarantee that this exchange is equitable, and in general it is not: as baryons collapse to form a small galaxy within a large dark matter halo, they tend to lose angular momentum to the dark matter. This is a one-way street that runs in the wrong direction, with the final destination uncomfortably invisible with most of the angular momentum sequestered in the unobservable dark matter. Worse still, if we impose rigorous angular momentum conservation among the baryons, the result is a disk with a completely unrealistic surface density profile (van den Bosch, 2001a). It then becomes necessary to pick and choose which baryons manage to assemble into the disk and which are expelled or otherwise excluded, thereby solving one problem by creating another.
Early work on LSB disk galaxies led to a rather different picture. Compared to the previously known population of HSB galaxies around which our theories had been built, the LSB galaxy population has a younger mean stellar age (de Blok & van der Hulst, 1998; McGaugh and Bothun, 1994), a lower content of heavy elements (McGaugh, 1994), and a systematically higher gas fraction (McGaugh and de Blok, 1997; Schombert et al., 1997). These properties suggested that LSB galaxies evolve more gradually than their higher surface brightness brethren: they convert their gas into stars over a much longer timescale (McGaugh et al., 2017). The obvious culprit for this difference is surface density: lower surface brightness galaxies have less gravity, hence less ability to gather their diffuse interstellar medium into dense clumps that could form stars (Gerritsen and de Blok, 1999; Mihos et al., 1999). It seemed reasonable to ascribe the low surface density of the baryons to a correspondingly low density of their parent dark matter halos.
One way to think about a region in the early universe that will eventually collapse to form a galaxy is as a so-called top-hat over-density. The mass density Ωm → 1 at early times, irrespective of its current value, so a spherical region (the top-hat) that is somewhat over-dense early on may locally exceed the critical density. We may then consider this finite region as its own little closed universe, and follow its evolution with the Friedmann equations with Ω > 1. The top-hat will initially expand along with the rest of the universe, but will eventually reach a maximum radius and recollapse. When that happens depends on the density. The greater the over-density, the sooner the top-hat will recollapse. Conversely, a lesser over-density will take longer to reach maximum expansion before recollapsing.
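As a concrete illustration of that timing argument, here is a minimal sketch using the standard top-hat result in an Einstein-de Sitter background: the linearly extrapolated overdensity of a top-hat reaches δ_c ≈ 1.686 at the moment of collapse, and the growing mode scales as t^(2/3), so the collapse time scales as δ_i^(−3/2). The initial overdensities and the reference time below are placeholders chosen only to display the scaling:

```python
import numpy as np

# A minimal sketch of top-hat spherical collapse in an Einstein-de Sitter
# (Omega_m = 1) background. The standard linear-theory result is that the
# top-hat collapses when its linearly extrapolated overdensity reaches
# delta_c = (3/20) * (12*pi)**(2/3) ~ 1.686. Initial values are illustrative.
delta_c = (3.0 / 20.0) * (12.0 * np.pi) ** (2.0 / 3.0)

t_i = 1.0e6  # yr, an assumed early reference time (delta_i << 1, growing mode)
for delta_i in (2e-3, 1e-3, 5e-4):
    # growing mode: delta grows as t^(2/3), so collapse occurs when
    # delta_i * (t / t_i)^(2/3) = delta_c
    t_coll = t_i * (delta_c / delta_i) ** 1.5
    print(f"delta_i = {delta_i:.1e}  ->  t_coll ~ {t_coll:.2e} yr")
```

Halving the initial over-density delays collapse by a factor of 2^(3/2) ≈ 2.8, so modest differences in initial density translate into large differences in formation time.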
Everything about LSB galaxies suggested that they were lower density, late-forming systems. It therefore seemed quite natural to imagine a distribution of over-densities and corresponding collapse times for top-hats of similar mass, and to associate LSB galaxies with the lesser over-densities (Dekel and Silk, 1986; McGaugh, 1992). More recently, some essential aspects of this idea have been revived under the moniker of “assembly bias” (e.g. Zehavi et al., 2018).
The work that informed the DD hypothesis was based largely on photometric and spectroscopic observations of LSB galaxies: their size and surface brightness, color, chemical abundance, and gas content. DD made two obvious predictions that had not yet been tested at that juncture. First, late-forming halos should reside preferentially in low density environments. This is a generic consequence of Gaussian initial conditions: big peaks defined on small (e.g., galaxy) scales are more likely to be found in big peaks defined on large (e.g., cluster) scales, and vice-versa. Second, the density of the dark matter halo of an LSB galaxy should be lower than that of an equal mass halo containing an HSB galaxy. This predicts a clear signature in their rotation speeds, which should be lower for lower density.
The prediction for the spatial distribution of LSB galaxies was tested by Bothun et al. (1993) and Mo et al. (1994). The test showed the expected effect: LSB galaxies were less strongly clustered than HSB galaxies. They are clustered: both galaxy populations follow the same large scale structure, but HSB galaxies adhere more strongly to it. In terms of the correlation function, the LSB sample available at the time had about half the amplitude r0 of comparison HSB samples (Mo et al., 1994). The effect was even more pronounced on the smallest scales (<2 Mpc: Bothun et al., 1993), leading Mo et al. (1994) to construct a model that successfully explained both small and large scale aspects of the spatial distribution of LSB galaxies simply by associating them with dark matter halos that lacked close interactions with other halos. This was strong corroboration of the DD hypothesis.
One way to test the prediction of DD that LSB galaxies should rotate more slowly than HSB galaxies was to use the Tully-Fisher relation (Tully and Fisher, 1977) as a point of reference. Originally identified as an empirical relation between optical luminosity and the observed line-width of single-dish 21 cm observations, it turns out to be, more fundamentally, a relation between the baryonic mass of a galaxy (stars plus gas) and its flat rotation speed: the Baryonic Tully-Fisher relation (BTFR: McGaugh et al., 2000). This relation is a simple power law of the form
M_b = A V_f^4 (equation 1), with A ≈ 50 M_⊙ km^−4 s^4 (McGaugh, 2005).
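As a quick sanity check on the normalization (an illustrative calculation, taking equation 1 at face value), a galaxy with a flat rotation speed of 200 km/s should have

M_b ≈ 50 M_⊙ km^−4 s^4 × (200 km/s)^4 = 8 × 10^10 M_⊙,

while a 50 km/s dwarf should come in around 3 × 10^8 M_⊙: a span of more than two decades in baryonic mass between two perfectly ordinary rotation speeds.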
Aaronson et al. (1979) provided a straightforward interpretation for a relation of this form. A test particle orbiting a mass M at a distance R will have a circular speed V
V^2 = GM/R (equation 2), where G is Newton's constant. If we square this, a relation like the Tully-Fisher relation follows:
V^4 = (GM/R)^2 ∝ MΣ (equation 3), where we have introduced the surface mass density Σ = M/R^2. The Tully-Fisher relation M ∝ V^4 is recovered if Σ is constant, exactly as expected from Freeman's Law (Freeman, 1970).
LSB galaxies, by definition, have central surface brightnesses (and corresponding stellar surface densities Σ_0) that are less than the Freeman value. Consequently, DD predicts, through equation (3), that LSB galaxies should shift systematically off the Tully-Fisher relation: lower Σ means lower velocity. The predicted effect is not subtle (Fig. 4). For the range of surface brightness that had become available, the predicted shift should have stood out like the proverbial sore thumb. It did not (Hoffman et al., 1996; McGaugh and de Blok, 1998a; Sprayberry et al., 1995; Zwaan et al., 1995). This had an immediate impact on galaxy formation theory: compare Dalcanton et al. (1995, who predict a shift in Tully-Fisher with surface brightness) with Dalcanton et al. (1997b, who do not).
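To put a number on “not subtle” (again using only the naive spherical scaling of equation 3, at fixed baryonic mass): V ∝ Σ^(1/4), so a disk a factor of ten lower in surface density than the Freeman value should rotate more slowly by a factor of

10^(1/4) ≈ 1.8,

or equivalently, at fixed rotation speed, sit a full factor of ten (about 2.5 magnitudes at fixed mass-to-light ratio) off the mean relation. Offsets of that size would be trivially detectable in any Tully-Fisher sample.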

Instead of the systematic variation of velocity with surface brightness expected at fixed mass, there was none. Indeed, there is no hint of a second parameter dependence. The relation is incredibly tight by the standards of extragalactic astronomy (Lelli et al., 2016b): baryonic mass and the flat rotation speed are practically interchangeable.
The above derivation is overly simplistic. The radius at which we should make a measurement is ill-defined, and the surface density is dynamical: it includes both stars and dark matter. Moreover, galaxies are not spherical cows: one needs to solve the Poisson equation for the observed disk geometry of LTGs, and account for the varying radial contributions of luminous and dark matter. While this can be made to sound intimidating, the numerical computations are straightforward and rigorous (e.g., Begeman et al., 1991; Casertano & Shostak, 1980; Lelli et al., 2016a). It still boils down to the same sort of relation (modulo geometrical factors of order unity), but with two mass distributions: one for the baryons M_b(R), and one for the dark matter M_DM(R). Though the dark matter is more massive, it is also more extended. Consequently, both components can contribute non-negligibly to the rotation over the observed range of radii:
V^2(R) = GM/R = G(M_b/R + M_DM/R) (equation 4), where for clarity we have omitted* geometrical factors. The only absolute requirement is that the baryonic contribution should begin to decline once the majority of baryonic mass is encompassed. It is when rotation curves persist in remaining flat past this point that we infer the need for dark matter.
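To see the balancing act in equation (4) explicitly, here is a minimal sketch in the spherical approximation (so the geometrical factors noted in the footnote are ignored), with an exponential disk and a toy pseudo-isothermal halo. All parameter values are illustrative placeholders, not a fit to any real galaxy:

```python
import numpy as np

G = 4.301e-6  # Newton's constant in kpc (km/s)^2 / Msun

def M_disk(R, M_b=5e10, R_d=3.0):
    """Baryonic mass enclosed within R for an exponential disk of scale
    length R_d (treated spherically here, as per the footnote)."""
    x = R / R_d
    return M_b * (1.0 - (1.0 + x) * np.exp(-x))

def M_halo(R, rho0=1.0e7, r_c=5.0):
    """Dark matter mass enclosed within R for a toy pseudo-isothermal halo,
    rho(r) = rho0 / (1 + (r/r_c)^2)."""
    return 4.0 * np.pi * rho0 * r_c**3 * (R / r_c - np.arctan(R / r_c))

R = np.linspace(0.5, 30.0, 60)          # kpc
V2_bar = G * M_disk(R) / R              # baryonic term of equation (4)
V2_dm  = G * M_halo(R) / R              # dark matter term of equation (4)
V_tot  = np.sqrt(V2_bar + V2_dm)

# The baryonic term peaks within a couple of scale lengths and then falls
# roughly as 1/sqrt(R); a flat V_tot requires the halo term to rise to
# compensate: the "disk-halo conspiracy" discussed below.
for i in range(0, len(R), 12):
    print(f"R = {R[i]:5.1f} kpc  V_bar = {np.sqrt(V2_bar[i]):5.1f}"
          f"  V_DM = {np.sqrt(V2_dm[i]):5.1f}  V_tot = {V_tot[i]:5.1f} km/s")
```

With generic parameters the total velocity is not automatically flat; the declining baryonic term and the rising halo term only sum to a constant for particular combinations, which is the crux of the fine-tuning discussed below.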
A recurrent problem in testing galaxy formation theories is that they seldom make ironclad predictions; I attempt a brief summary in Table 1. SH represents a broad class of theories with many variants. By construction, the dark matter halos of galaxies of similar stellar mass are similar. If we associate the flat rotation velocity with halo mass, then galaxies of the same mass have the same circular velocity, and the problem posed by Tully-Fisher is automatically satisfied.
Table 1. Predictions of DD and SH for LSB galaxies.
| Observation | DD | SH |
|---|---|---|
| Evolutionary rate | + | + |
| Size distribution | + | + |
| Clustering | + | X |
| Tully-Fisher relation | X | ? |
| Central density relation | + | X |
While it is common to associate the flat rotation speed with the dark matter halo, this is a half-truth: the observed velocity is a combination of baryonic and dark components (eq. (4)). It is thus a rather curious coincidence that rotation curves are as flat as they are: the Keplerian decline of the baryonic contribution must be precisely balanced by an increasing contribution from the dark matter halo. This fine-tuning problem was dubbed the “disk-halo conspiracy” (Bahcall & Casertano, 1985; van Albada & Sancisi, 1986). The solution offered for the disk-halo conspiracy was that the formation of the baryonic disk has an effect on the distribution of the dark matter. As the disk settles, the dark matter halo responds through a process commonly referred to as adiabatic compression, which brings the peak velocities of the disk and dark components into alignment (Blumenthal et al., 1986). Some rearrangement of the dark matter halo in response to the change of the gravitational potential caused by the settling of the disk is inevitable, so this seemed a plausible explanation.
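For concreteness, here is a minimal sketch of the adiabatic compression prescription in the spirit of Blumenthal et al. (1986), which assumes circular orbits and conservation of r M(r): each dark matter shell initially at r_i settles to the radius r_f satisfying r_f [M_b(r_f) + (1 − f_b) M_i(r_i)] = r_i M_i(r_i). The initial NFW halo, disk parameters, and baryon fraction below are assumed for illustration only:

```python
import numpy as np
from scipy.optimize import brentq

# A minimal sketch of adiabatic compression in the spirit of Blumenthal et
# al. (1986): circular orbits and conservation of r*M(r). All parameter
# values (disk, baryon fraction, initial NFW halo) are illustrative.
f_b = 0.16                     # assumed baryon fraction
M_b = 5e10                     # Msun, final disk mass (= f_b * M_vir below)
R_d = 3.0                      # kpc, disk scale length

def M_disk(r):
    x = r / R_d
    return M_b * (1.0 - (1.0 + x) * np.exp(-x))

def M_init(r, M_vir=3.125e11, r_s=20.0, c=10.0):
    """Initial (pre-disk) halo mass profile; an NFW form is assumed."""
    mu = lambda x: np.log(1.0 + x) - x / (1.0 + x)
    return M_vir * mu(r / r_s) / mu(c)

def r_final(r_i):
    """Solve r_f * [M_disk(r_f) + (1 - f_b)*M_init(r_i)] = r_i * M_init(r_i)."""
    target = r_i * M_init(r_i)
    f = lambda r_f: r_f * (M_disk(r_f) + (1.0 - f_b) * M_init(r_i)) - target
    return brentq(f, 1e-3, r_i)   # the shell contracts: r_f <= r_i

for r_i in (2.0, 5.0, 10.0, 30.0):
    print(f"r_i = {r_i:5.1f} kpc  ->  r_f = {r_final(r_i):5.2f} kpc")
```

Each shell's final radius depends on how much baryonic mass ends up interior to it, so the contraction is stronger for more concentrated disks.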
The observation that LSB galaxies obey the Tully-Fisher relation greatly compounds the fine-tuning (McGaugh and de Blok, 1998a; Zwaan et al., 1995). The amount of adiabatic compression depends on the surface density of stars (Sellwood and McGaugh, 2005b): HSB galaxies experience greater compression than LSB galaxies. This should enhance the predicted shift between the two in Tully-Fisher. Instead, the amplitude of the flat rotation speed remains unperturbed.
The generic failings of dark matter models were discussed at length by McGaugh and de Blok (1998a). The same problems have been encountered by others. For example, Fig. 5 shows model galaxies formed in dark matter halos with identical total mass and density profile but different spin parameters (van den Bosch, 2001b). Variations in the assembly and cooling history were also considered, but these make little difference and are not relevant here. The point is that smaller (larger) spin parameters lead to more (less) compact disks that contribute more (less) to the total rotation, exactly as anticipated from variations in the term M_b/R in equation (4). The nominal variation is readily detectable, and stands out prominently in the Tully-Fisher diagram (Fig. 5). This is exactly the same fine-tuning problem that was pointed out by Zwaan et al. (1995) and McGaugh and de Blok (1998a).
What I describe as a fine-tuning problem is not portrayed as such by van den Bosch (2000) and van den Bosch and Dalcanton (2000), who argued that the data could be readily accommodated in the dark matter picture. The difference is between accommodating the data once known, and predicting it a priori. The dark matter picture is extraordinarily flexible: one is free to distribute the dark matter as needed to fit any data that evinces a non-negative mass discrepancy, even data that are wrong (de Blok & McGaugh, 1998). It is another matter entirely to construct a realistic model a priori; in my experience it is quite easy to construct models with plausible-seeming parameters that bear little resemblance to real galaxies (e.g., the low-spin case in Fig. 5). A similar conundrum is encountered when constructing models that can explain the long tidal tails observed in merging and interacting galaxies: models with realistic rotation curves do not produce realistic tidal tails, and vice-versa (Dubinski et al., 1999). The data occupy a very narrow sliver of the enormous volume of parameter space available to dark matter models, a situation that seems rather contrived.

Both DD and SH predict residuals from Tully-Fisher that are not observed. I consider this to be an unrecoverable failure for DD, which was my hypothesis (McGaugh, 1992), so I worked hard to salvage it. I could not. For SH, Tully-Fisher might be recovered in the limit of dark matter domination, which requires further consideration.
I will save the further consideration for a future post, as that can take infinite words (there are literally thousands of ApJ papers on the subject). The real problem that rotation curve data pose generically for the dark matter interpretation is the fine-tuning required between baryonic and dark matter components – the balancing act explicit in the equations above. This, by itself, constitutes a practical falsification of the dark matter paradigm.
Without going into interesting but ultimately meaningless details (maybe next time), the only way to avoid this conclusion is to choose to be unconcerned with fine-tuning. If you choose to say fine-tuning isn’t a problem, then it isn’t a problem. Worse, many scientists don’t seem to understand that they’ve even made this choice: it is baked into their assumptions. There is no risk of questioning those assumptions if one never stops to think about them, much less worry that there might be something wrong with them.
Much of the field seems to have sunk into a form of scientific nihilism. The attitude I frequently encounter when I raise this issue boils down to “Don’t care! Everything will magically work out! LA LA LA!”

*Strictly speaking, eq. (4) only holds for spherical mass distributions. I make this simplification here to emphasize the fact that both mass and radius matter. This essential scaling persists for any geometry: the argument holds in complete generality.
I am in no position to judge your work or the views you express about the work of others. But I empathize with your situation.
I read your blog because it had been mentioned quite often on Sabine Hossenfelder’s blog a while back. And I started reading Dr. Hossenfelder’s blog because my personal research as an autodidact in the foundations of mathematics has similarities with the kinds of “symmetry” found in work on “quantum foundations.”
I know the difference between physics and metamathematics. What they share is a basis in cognitive experience. Physics tries to provide explanations for this experience validated by measurements and predictions of measurement. Metamathematics seeks some sort of “concrete” basis for certain mathematical constructions. So, the cognitive ground lies with the sensible impression of symbol shapes used to “talk” about mathematics and the ontology of mathematics.
My work differs from the received views of academia because, as an autodidact, I had never been exposed to indoctrination with the “folklore of arithmetization.” So, when I learn of how every Boolean lattice is an orthomodular lattice (an early attempt at understanding how quantum logic works) I recognize that the logical constants decorating the (16-element) free Boolean lattice on two generators are related to reflection groups of interest to physicists. Indeed, that classical propositional logic is not categorical had been shown in 1999,
https://arxiv.org/abs/quant-ph/9906101
I learned of this in 2003. I have a 100% failure rate of finding any professional or amateur internet presence who will discuss this result.
Because standard number theory courses in mathematics emphasize divisibility, I have written tentative axiom sets which allow numbers described with respect to divisibility to be compared with numbers described with respect to counting (and, a divisibility perspective relates directly to abstract group theory).
To make these axiom systems work, I had to write alternate inference rules for quantification from those of first-order logic. And to demonstrate the effective use of these inference rules, I formulated Fitch-style proofs of Thoralf Skolem’s idea of counting using definite descriptions relative to my counting arithmetic axioms.
As with the published material mentioned above, I have a 100% failure rate at finding an interested party. It is as if our universities are now in the business of producing experts rather than critical thinkers.
Recently, I found this translated material from the algebraic geometer, Alexander Grothendieck:
https://sniadecki.wordpress.com/2021/10/24/grothendieck-church/
He wrote this criticism of the modern scientific paradigm in 1971.
As this comment has nothing to do with physics and as I have no professional standing, I do not expect you to publish this to your blog.
Nevertheless, I hope my story resonates with you, and, I hope you find the Grothendieck link to be insightful.
Mls, you sparked my interest in infinitesimal fields and thus in Lie algebras. It was nice to read your article: kinda abstract, but cool to connect with possible mathematics and computer logic. The problem in finding interested parties lies mostly in the decoherence problem: to make any such logic work, a quantum computer must be extremely (impossibly?) well shielded from its environment. But the math is still intriguing and useful; computers aren't that great anyway. It's simply cool to observe nature and anything to do with infinity!!
Mr. Havinga,
I am glad you found the paper by Pavicic and Megill interesting. And, I have certainly wondered about how this algebraic approach to a quantum logic (orthomodular logic) introduced by Birkhoff and von Neumann might directly relate to the capabilities of modern technology. I take that to be equivalent to your mention of decoherence.
When I spoke of failure, however, I had been speaking of the philosophers, logicians, and foundational mathematicians. The notion of “truth tables” had been discerned from work by Russell and Whitehead. This recognition is variously attributed to Wittgenstein or Post who published about a century ago.
Pavicic and Megill identified the fact that truth table semantics is not unique. Nor is it faithful to the standard axioms of propositional logic. Having noticed their work, the mathematician Eric Schecter wrote a syntactic “hexagon interpretation” for propositional logic in 2006. This syntactic work, then, is directly comparable to the syntactic nature of truth tables.
The reason I chose to write these comments is that I see an entire academic community refusing to look at a result that challenges their “received paradigm.” It is not as if I have not tried to get rational discussion of this matter on sites dedicated to these topics (and elsewhere). Perhaps incorrectly, I see this as analogous to what Dr. McGaugh seems to be experiencing.
I apologize for any confusion. It is, of course, my fault because this is off-topic for a physics blog. I had simply been trying to give a specific context to go with the general criticism of the Grothendieck passage.
The confusion is due to my focus on the subjects I liked best at the moment (infinities); my apologies for that. I've spent some extra time now on free Boolean algebras. I see how they are related to reflection groups: you can map p, q, or both to their negation as a reflection, switch p and q, and so on.
I now see the point of the paper: that quantum logic as it is usually approached can lead to contradictions, in that two valid computations (of a function) with the same starting data can give contradictory results. This is due to quantum logic being mapped to an orthomodular lattice as a hidden assumption.
Is your work a reformulation of counting theorems (divisibility) that uses the weakly distributive model of classical logic? I’d like to have a look!
You explained the relation with MOND well in terms of it being unorthodox.
My work is extremely technical. It is motivated by Cantor’s continuum question. I strongly doubt that it will be anything like what you might expect.
You can ping me at the disposable wfbg-at-xemaps-dot-com address. That will get this off of Dr. McGaugh’s blog.
The fact of the matter is that I have more questions than answers. Much of the work is in fragments. I have convinced myself that progress would be further if I had found an interlocutor over the years to provide feedback. But, that is probably the fantasy of a serious amateur with respect to a state of mind professionals know well. You just do not know if your ideas can be made to work until they do. They are your ideas and no one can think for you.
Thank you, Dr. McGaugh, for your tolerance. Unless there is a problem pinging me, I will not post again.
This is why I am skeptical that the cosmology community would abandon dark matter in the near future. Even if cosmology abandoned the Lambda CDM model completely due to the current Hubble tension, S8 tension, CMB dipole, et cetera becoming significant at much greater than 5 sigma, the vast majority of the problems with the Lambda CDM model being discussed right now by the community have very little to do with dark matter, and whatever conservative or radical new model that comes to replace the Lambda CDM model is still likely to have dark matter in the model. The big bang itself might even be replaced, but even then the replacement of the big bang would most likely have dark matter as part of the model.
Yep. Dark matter has become baked into how people think. Most scientists can imagine doing without it no more than they can imagine not breathing. That was true already in the ’90s; it took a Herculean effort of fact-checking and intellectual honesty to admit to myself that maybe I had been wrong to be so sure the answer had to be dark matter. Too few people are making any such effort.
Do you have any comment on the announcement two days ago of a 7 sigma anomaly in the Fermilab measurement of the W/Z boson mass ratio? I ask because, to me, this is a very strong signal that the standard model of particle physics does not understand what mass is. And, if so, then the very concept of dark matter becomes meaningless, since the entire concept of “mass” splits into (at least) two distinct concepts.
I do not have much to say about it. The signal looks anomalous because the stated error bar is tiny. It is otherwise in line with other measurements and the Standard Model. So the first issue is whether to believe the claimed uncertainty.
If we do believe it, the claimed deviation is statistically significant, but it is also tiny in an absolute sense. So it isn’t clear yet what it means. Sometimes a window on new physics opens through a tiny crack, but it could also mean a more mundane adjustment to theory.
More often than not it turns out that the uncertainties are underestimated – either experimental or theoretical. Details of the Standard Model often require higher order numerical calculations that are a pain in the ol’ patutsky, so don’t get done until strongly motivated by something like this. So it wouldn’t surprise me if the prediction isn’t quite right even if the measurement is.
Those are experiential observations. I have no particular expertise or insight into this particular issue.
I guess you are probably right, but I don’t have any particular expertise on this issue either. It’s just that the model I have been developing over several years seems to imply that this particular mass ratio cannot be measured consistently on the basis of the standard model to a greater accuracy than +/-.14%. This figure isn’t an estimate, but a hard limit. Since this experiment claims an accuracy of .015% (if I understand correctly) and an anomaly of .1%, it is (on the face of it) consistent with my model but not with the standard model. I do not have the expertise to investigate in much more detail than this, but if any physicists are interested in discussing it, they are welcome to my blog. When you don’t have a clue what is wrong, the tiniest crack is worth investigating.
We know that the Standard Model is incomplete because it doesn’t predict neutrino rest masses, and we know that neutrinos have to have rest masses because if they were massless they would not oscillate between the different types (electron, muon and tau). While we cannot measure the masses, we know they must be non-zero. Similarly, the gyromagnetic ratio (g) of the muon is not as predicted by the Standard Model (which does predict the corresponding property of the electron). So, regardless of the W boson we have to go beyond the Standard Model.
You use the word “know” a lot. The only thing I know is that I don’t know anything.
I know what the experimental results tell us, and while I don't know the theory that predicts (or, more accurately, postdicts) these experimental results, theory cannot change experimental results, only their interpretation. Reading Stacy's blog should be enough to convince you of that. You claim to have a model, yet you admit that you know nothing; if your model does not make predictions that can be tested against experiment, it is worthless.
I’ve told you what my prediction was. A prediction is not knowledge. Nor is it a postdiction, since it was made back in 2015. The experiment has produced a result which is consistent with my prediction, and is not consistent with the standard model prediction. What more do you want?
Well, to be fair, from the experiments, we know the neutrinos change flavor while traveling. This is what we know.
That they have a rest mass is only a possible interpretation derived from the experimental results. So we cannot say that “we know” that neutrinos have rest mass. We just assume it (by reading Robert's blog, I think this is why he said he knows nothing).
It is the same story as with DM.
I am sorry if I have somewhat hijacked this discussion. My intention was to direct traffic to my blog, rather than have the discussion here. It is not completely off-topic, however, since if my model and analysis are valid (two big ifs) then what this experiment has detected is a quantum gravity signal from the Moon (or more accurately, from the Earth-Moon-Sun system as a whole). My model also accurately postdicts the muon g-2 anomaly and kaon oscillations, among other things, and predicts that a similar mass anomaly for kaons, with a slightly smaller amplitude of .05%, can be found by a suitable experiment. As far as I am aware, no other quantum theory of gravity actually makes testable predictions of this kind. The supposed mechanism is that changes in the quantum gravitational field are correlated with changes in neutrino energy (and flavour), so can be detected in precision measurements of weak force processes.
I fully agree to that!
Dear Stacy,

The historian Harari writes that we humans always believe in myths of some kind, and that these myths enable us to do great things. The pyramids are one example. The LHC at CERN is another.

In https://tritonstation.com/author/tritonstation/page/5/ you are not very happy about the arrogance of the physicists in Princeton, and of the particle physicists in particular. Well, whenever Sabine says she is a particle physicist, I have to smile, because there are no particles. Particles, mass points, are excellent models for planets, where the objects are small and the distances are large. A single atom is demonstrably some 100 nm in size, and that includes nucleus and electrons. The textbook idea of atoms of 10^-10 m and nuclei of 10^-15 m is a myth, and it leads to contradictions in the double-slit experiment that are well known and even better ignored. Robert Betts Laughlin is the only known physicist who has seen through to this (Nobel Prize 1998) and says it out loud.

I have never understood what string theory is all about. They can't explain the double-slit experiment; in fact, they don't even try. I am not impressed by their marketing department. Even less do I understand those who chase after it. Any high school graduate should be able to see that this is going nowhere. There are so many obvious unresolved issues, and it is not apparent that string theory solves, or even tries to solve, any of them. Why is an electron stable? And why does it annihilate with a positron into 2 or 3 photons? In this process all properties of the objects change. All! Yet the mathematics used was invented to determine the positions of planets in the planetary system, whose only property, the mass, does not change. In this way we will never understand it.

MOND would win everything if one could show that the geometry at the edge of galaxies is two-dimensional. Then you have won.

Cosmology is not very truthful with its calculations. It would be more honest to say: hey, we can only calculate a homogeneous cosmos, so let's do it and see what comes out. Instead it is said that on sufficiently large length scales the cosmos is homogeneous. I don't know of any such scale.

Stefan
I’ve always thought there was something ‘ugly’ about exotic matter (DM), so I’ve watched in irritation as people have wasted years searching for it. Meanwhile, MOND seems to be more like a constraint that the final theory has to satisfy, not an end-all answer in itself. So I wonder: gravitationally bound structures do not expand, and so the expansion of the Universe can only occur in volumes that are not gravitationally bound. Therefore the expansion must be confined to the voids. Having concluded this, we can then ask whether there exists a metric junction between the spacetime of the gravitationally bound matter structures and the spacetime of the voids. Interestingly, MOND’s a0 seems to be related to the cosmological constant. In this recent paper (https://arxiv.org/abs/1908.08735) the authors state that gravitational singularities on the thin shell connecting two disparate spacetimes sometimes lead to energy problems and can be unphysical. Maybe that’s not such a problem after all; maybe it’s an answer. BTW, if the Bootes Void is taken as a typical-size void, then a back-of-the-envelope computation tells us there could be about 500,000 of them contained within the volume of the known Universe, all of them expanding, which calls into question exactly how the FLRW metric should be utilized.
Yes, MOND provides a mathematical description of a constraint that must be satisfied by any successful theory. This is unnatural in terms of dark matter, as such theories do not automatically do that (unless fine-tuned or built to do so, like superfluid dark matter).
As for the connection between bound structures and the expanding universe, there’s a lot of nothing that isn’t properly a void – the filaments and the walls around voids are not self-bound and do participate in the expansion of the universe. However, you make a good point that I have been reluctant to admit: there may be enough structure in the universe that a uniform FLRW metric may not hold (see, e.g., Indranil Banik’s post about the KBC void – https://tritonstation.com/2020/10/23/big-trouble-in-a-deep-void/)
Love your blog. So I believe MOND is true. What could be the underlying cause? (I’m sure you’ve thought about this more than me.) Like a drunk who can only search for his lost keys under the streetlight, I can think only about the physics I know (I’m a solid state experimentalist). I had this crazy idea of a Bose-Einstein condensate of virtual gravitons. You get condensation at low energy (acceleration), with a (stupidly guessed at) lowest energy state when the graviton wavelength is something like the size of the (observable) universe. I have no idea how to guesstimate the graviton density. It seems likely someone has had this idea before, and I wonder if you (or one of your readers) know of a reference, or know why it’s an idea that doesn’t work. I do find things like this: https://www.researchgate.net/publication/328350598_Dark_Matter_as_a_Non-Relativistic_Bose-Einstein_Condensate_with_Massive_Gravitons