The stability of spiral galaxies was a foundational motivation to invoke dark matter: a thin disk of self-gravitating stars is unstable unless embedded in a dark matter halo. Modified dynamics can also stabilize galactic disks. A related test is provided by how thin such galaxies can be.

Thin galaxies exist

Spiral galaxies seen edge-on are thin. They have a typical thickness – their short-to-long axis ratio – of q ≈ 0.2. Sometimes they’re thicker, sometimes they’re thinner, but this is often the value we assume when building mass models of the stellar disks of galaxies that are not seen exactly* edge-on. One can employ more elaborate estimators, but the results are not particularly sensitive to the exact thickness so long as it isn’t at either limit: razor thin (q = 0) or a spherical cow (q = 1).

Sometimes galaxies are very thin. Behold the “superthin” galaxy UGC 7321:

UGC 7321 as seen in optical colors by the Sloan Digital Sky Survey.

It also looks very thin in the infrared, which is the better tracer of stellar mass:

Fig. 1 from Matthews et al. (1999): H-band (1.6 micron) image of UGC 7321. Matthews (2000) finds a near-IR axis ratio of 14:1. That’s super thin (q = 0.07)!

UGC 7321 is very thin, would be low surface brightness if seen face-on (Matthews estimates a central B-band surface brightness of 23.4 mag arcsec⁻²), has no bulge component thickening the central region, and contains roughly as much mass in gas as stars. All of these properties dispose a disk to be fragile (to perturbations like mergers and subhalo crossings) and unstable, yet there it is. There are enough similar examples to build a flat galaxy catalog, so somehow the universe has figured out a way for galaxy disks to remain thin and dynamically cold# for the better part of a Hubble time.

We see spiral galaxies at various inclinations to our line of sight. Some will appear face-on, others edge-on, and everything in between. If we observe enough of them, we can work out what the intrinsic distribution is based on the projected version we see.

First, some definitions. A 3D object has three principal axes of lengths a, b, and c. By convention, a is the longest and c the shortest. An oblate model imagines a galaxy like a frisbee: it is perfectly round seen face-on (a = b); seen edge-on, q = c/a. More generally, an object can be triaxial, with a ≠ b ≠ c. In this case, a galaxy would not appear perfectly round even when seen perfectly face-on^ because it is intrinsically oval (with similar axis lengths a ≈ b but not exactly equal). I expect this is fairly common among dwarf Irregular galaxies.

The observed and intrinsic distribution of disk thicknesses

Benevides et al. (2025) find that the distribution of observed axis ratios q is pretty flat. This is a consequence of most galaxies being seen at some intermediate viewing angle. One can posit an intrinsic distribution, model what one would see at a bunch of random viewing angles, and iterate to extract the true distribution in nature, which they do:

Figure 6 from Benevides et al. (2025): Comparison between the observed (projected) q distribution and the inferred intrinsic 3D axis ratios for a subsample of dwarfs in the GAMA survey with M* = 10^9–10^9.5 M☉. The observed shapes are shown with the solid black line and are used to derive the intrinsic c/a (long-dashed) and b/a (dotted) distributions. Solid colored lines in each panel correspond to the q values obtained from the 3D model after random projections. Note that a wide distribution of q values is generated by a much narrower intrinsic c/a distribution. For example, the blue shaded region in the left panel shows that an observed 5% of galaxies with q < 0.2 requires 41% of galaxies to have an intrinsic c/a < 0.2 for an oblate model. Similarly, for a triaxial model (right panel, red curve), 43% of galaxies are required to be thinner than c/a = 0.2. The additional freedom of b/a in the triaxial model helps to obtain a better fit to the projected q distribution, but the changes mostly affect large q values and change little the c/a frequency derived from highly elongated objects.

That we see some thin galaxies implies that they have to be common, as most of them are not seen edge-on. For dwarf$ galaxies of a specific mass range, which happens to include UGC 7321, Benevides et al. (2025) infer a lot% of thin galaxies, at least 40% with q < 0.2. They also infer a little bit of triaxiality, a ≈ b.
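To make the projection step concrete, here is a toy Monte Carlo sketch of the forward model (my own illustration with made-up numbers, not the actual machinery of the paper). For an oblate spheroid of intrinsic thickness c/a viewed at inclination i, the projected axis ratio obeys q² = cos²(i) + (c/a)² sin²(i), and random orientations make cos(i) uniformly distributed between 0 and 1:

```python
import numpy as np

rng = np.random.default_rng(42)

def project_oblate(c_over_a, n=100_000):
    """Projected axis ratios q of an oblate spheroid of intrinsic
    thickness c/a viewed from n random directions.
    For inclination i (i = 90 degrees is edge-on):
        q^2 = cos^2(i) + (c/a)^2 sin^2(i),
    and isotropic viewing directions make cos(i) uniform on [0, 1]."""
    cos_i = rng.uniform(0.0, 1.0, n)
    return np.sqrt(cos_i**2 + c_over_a**2 * (1.0 - cos_i**2))

# An intrinsically superthin population, all with c/a = 0.1:
q = project_oblate(0.1)
print(f"fraction projecting to q < 0.2: {np.mean(q < 0.2):.2f}")
# Only ~17% of random sightlines are close enough to edge-on to
# reveal the thinness, which is why a modest observed fraction of
# small q implies a much larger intrinsically thin fraction.
```

Iterating over trial intrinsic distributions until the forward-projected q distribution matches the observed one is, in essence, the inversion Benevides et al. perform.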

The existence and numbers of thin dwarfs seem to come as a surprise to many astronomers. This is perhaps driven in part by the theoretical expectation for dwarf galaxies to be thick: a low surface brightness disk has little self-gravity to hold stars in a narrow plane. This expectation is so strong that Benevides et al. (2025) feel compelled to provide some observed examples, as if to say look, really:

Figure 8 – images of real galaxies from Benevides et al. (2025): Examples of 10 highly elongated dwarf galaxies with q ≤ 0.2 and M* = 10^7–10^8.5 M☉. They resemble thin edge-on disks and can be found even among the faintest dwarfs in our sample. Legends in each panel quote the stellar mass, the shape parameter q, as well as the GAMA identifier. Objects are sorted by increasing M*, left to right.

As an empiricist who has spent a career looking at low mass and low surface brightness galaxies, this does not come as a surprise to me. These galaxies look normal. That’s what the universe of late type dwarf$ galaxies looks like.

Edge-on galaxies in LCDM simulations

Thin galaxies do not occur naturally in the hierarchical mergers of LCDM (e.g., Haslbauer et al. 2022), where one would expect a steady bombardment by merging masses to mess things up. The picture above is not what galaxy-like objects in LCDM simulations look like. Scraping through a few simulations to find the flattest galaxies, Benevides et al. (2025) find only a handful of examples:

Figure 11 – images of simulated galaxies from Benevides et al. (2025): Edge-on projection of examples of the flattest galaxies in the TNG50 simulation, in different bins of stellar mass.

Note that only the four images on the left here occupy the same stellar mass range as the images of reality above. These are as close as it gets. Not terrible, but also not representative&. Galaxies this thin are a tiny fraction of the simulated population, whereas they are quite common in reality. Here the two are compared: three different surveys (solid lines) vs. three different simulations (dashed lines).

Figure 9 from Benevides et al. (2025): Fraction of galaxies that are derived to be intrinsically thinner than c/a ≤ 0.2 as a function of stellar mass. Thick solid lines correspond to our observational samples while dashed lines are used to display the results of cosmological simulations. Different colors highlight the specific survey or simulation name, as quoted in the legend. In all observational surveys, the frequency of thin galaxies peaks for dwarfs with M* ≈ 10^9 M☉, almost doubling the frequency observed on the scale of MW-mass galaxies. Thin galaxies do not disappear at lower masses: we infer a significant fraction of dwarf galaxies with M* < 10^9 M☉ to have c/a < 0.2. This is in stark contrast with the negligible production of thin dwarf galaxies in all numerical simulations analyzed here.

Note that the thinnest galaxies in nature are dwarfs of mass comparable to UGC 7321. Thin disks aren’t just for bright spirals like the Milky Way with log(M*) > 10.5. They are also common*$ for dwarfs with log(M*) = 9 and even log(M*) = 8, which are often gas dominated. In contrast, the simulations produce almost no galaxies that are thin at these lower masses.

The simulations simply do not look like reality. Again. And again, etc., etc., ad nauseam. It’s almost as if the old adage applies: garbage in, garbage out. Maybe it’s not the resolution or the implementation of the simulations that’s the problem. One could get all that right, but it wouldn’t matter if the starting assumption of a universe dominated by cold dark matter was the input garbage.

Galaxy thickness in Newton and MOND

Thick disks are not merely a product of simulations, they are endemic to Newtonian dynamics. As stars orbit around and around a galaxy’s center, they also oscillate up and down, bobbing in and out of the plane. How far up they get depends on how fast they’re going (the dynamical temperature of the stellar population) and how strong the restoring force to the plane of the disk is.

In the traditional picture of a thin spiral galaxy embedded in a quasi-spherical dark matter halo, the restoring force is provided by the stars in the disk. The dark matter halo is there to boost the radial force to make the rotation curve flat, and to stabilize the disk, for which it needs to be approximately spherical. The dark matter halo does not contribute much to the vertical restoring force because it adds little mass near the disk plane. In order to do that, the halo would have to be very squashed (small q) like the disk, in which case we revive the stability problem the halo was put there to solve.
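A back-of-the-envelope estimate makes the point (with round, illustrative numbers of my choosing). Near the midplane, a slab of surface density Σ pulls down with g_z ≈ 2πGΣ, while a quasi-spherical halo with flat circular speed V_h contributes only g_z ≈ V_h²z/R² at heights z ≪ R:

\[
\frac{g_{z,\rm halo}}{g_{z,\rm disk}} \approx \frac{V_h^2\, z/R^2}{2\pi G \Sigma} \approx \frac{(220\ {\rm km\,s^{-1}})^2\,(300\ {\rm pc})}{(8\ {\rm kpc})^2 \times 2\pi G\,(70\ M_\odot\,{\rm pc^{-2}})} \approx 0.1,
\]

so even at a height comparable to the disk thickness, the halo supplies only of order a tenth of the vertical restoring force.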

This is why we expect low surface brightness disks to be thick. Their stars are spread thin and the surface mass density is low, so the restoring force to the disk should be small. Disks as thin as UGC 7321 shouldn’t be possible unless they are extremely cold*# dynamically – a situation that is unlikely to persist in a cosmogony built by hierarchical merging. The simulations discussed above corroborate this expectation.

In MOND, there is no dark matter halo, but the modified force should boost the vertical restoring force as well as the radial force. One thus expects thinner disks in MOND than in Newton.

I pointed this out in McGaugh & de Blok (1998) along with pretty much everything else in the universe that people tell me I should consider without bothering to check if I’ve already considered. Here is the plot I published at the time:

Figure 9 of McGaugh & de Blok (1998): Thickness q = z0/h expected for disks of various central surface densities σ0. Shown along the top axis is the equivalent B-band central surface brightness μ0 for ϒ* = 2. Parameters chosen for illustration are noted in the figure (a typical scale length h and two choices of central vertical velocity dispersion ςz). Other plausible values give similar results. The solid lines are the Newtonian expectation and the dashed lines that of MOND. The Newtonian and MOND cases are similar at high surface densities but differ enormously at low surface densities. Newtonian disks become very thick at low surface brightness. In contrast, MOND disks can remain reasonably thin to low surface density.

There are many approximations that have to be made in constructing the figure above. I assumed disks were plane-parallel slabs of constant velocity dispersion, which they are not. But this suffices to illustrate the basic point, that disks should remain thinner&% in MOND than in Newton as surface density decreases: as one sinks further into the MOND regime, there is relatively more restoring force to keep disks thin. To duplicate this effect in Newton, one must invent two kinds of dark matter: a dissipational kind of dark matter that forms a dark matter disk in addition to the usual dissipationless cold dark matter that makes a quasi-spherical dark matter halo.
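Here is a minimal numerical sketch of that scaling (not the calculation from the paper: an isothermal slab with z0 = ςz²/(πGΣ), with the vertical field crudely boosted by the “simple” MOND interpolation function ν(y) = 1/2 + √(1/4 + 1/y) for y = gN/a0 – a one-dimensional caricature of the full modified Poisson problem):

```python
import numpy as np

G = 4.301e-3   # gravitational constant in pc (km/s)^2 / Msun
a0 = 3.7       # MOND acceleration scale in (km/s)^2 / pc (~1.2e-10 m/s^2)

def thickness(sigma0, h=3000.0, sz=20.0):
    """Toy thickness q = z0/h for central surface density sigma0
    [Msun/pc^2], scale length h [pc], and one fixed choice of vertical
    velocity dispersion sz [km/s].
    Isothermal slab: z0 = sz^2 / (pi G Sigma); the MOND case divides
    z0 by the boost nu(y) applied to the slab field gN = 2 pi G Sigma."""
    z0_newton = sz**2 / (np.pi * G * sigma0)
    y = 2.0 * np.pi * G * sigma0 / a0
    nu = 0.5 + np.sqrt(0.25 + 1.0 / y)
    return z0_newton / h, z0_newton / nu / h

for sigma0 in (866.0, 100.0, 10.0):
    qN, qM = thickness(sigma0)
    print(f"Sigma0 = {sigma0:5.0f} Msun/pc^2: q(Newton) = {qN:.2f}, q(MOND) = {qM:.2f}")
```

With these particular (arbitrary) parameter choices, the two cases are nearly identical at Milky Way-like surface densities but diverge badly by Σ0 = 10 M☉ pc⁻², where the Newtonian disk approaches a sphere (q ≈ 1) while the MOND disk remains disk-like (q ≈ 0.2).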

The idea of the plot above was to illustrate the trend of expected thickness for galaxies of different central surface brightness. One can also build a model to illustrate the expected thickness as a function of radius for a pair of galaxies, one high surface brightness (so it starts in the Newtonian regime at small radii) and one of low surface brightness (in the MOND regime everywhere). I have chosen numbers** resembling the Milky Way for the high surface brightness galaxy model, and scaled the velocity dispersion of the low surface brightness model so it has very nearly the same thickness in the Newtonian regime. In MOND, both disks remain thin as a function of radius (they flare a lot in Newton) and the lower surface brightness disk model is thinner thanks to the relatively stronger restoring force that follows from being deeper in the MOND regime.

The thickness of two model disks, one high surface brightness (solid lines) and the other low surface brightness (dashed lines), as a function of radius. The two are similar in Newton (black), but differ in MOND (blue). The restoring force to the disk is stronger in MOND, so there is less flaring with increasing radius. The low surface brightness galaxy is further in the MOND regime, leading naturally to a thinner disk.

These are not realistic disk models, but they again suffice to illustrate the point: thin disks occur naturally in MOND. Low surface brightness disks should be thick in LCDM (and in Newtonian dynamics in general), but can be as thin as UGC 7321 in MOND. I didn’t aim to make q ≈ 0.1 in the model low surface brightness disk; it just came out that way for numbers chosen to be reasonable representations of the genre.

What the distribution of thicknesses is depends on the accretion and heating history of each individual disk. I don’t claim to understand that. But the mere existence of dwarf galaxies with thin disks is a natural outcome in MOND that we once again struggle to comprehend in terms of dark matter.


*Seeing a galaxy highly inclined minimizes the inclination correction to the kinematic observations [Vrot = Vobs/sin(i)], but to build a mass model we also need to know the face-on surface density profile of the stars, the correction for which depends on 1/cos(i). For example, at i = 80° the kinematic correction is negligible [sin(i) ≈ 0.98] while the deprojection of the surface density profile is large [1/cos(i) ≈ 5.8]. So as a practical matter, the competition between sin(i) and cos(i) makes it difficult to analyze galaxies at either extreme.

#Dynamically cold means the random motions (quantified by the velocity dispersion of stars σ) are small compared to ordered rotation (V) in the disk, something like V/σ ≈ 10. As a disk heats (higher σ) it thickens, as some of that random motion goes in the vertical direction perpendicular to the disk. Mergers heat disks because they bring kinetic energy in from random directions. Even after an object is absorbed, the splash it made is preserved in the vertical distribution of the stars which, once displaced, never settle back into a thin disk. (Gas can settle through dissipation, but point masses like stars cannot.)

^Oval distortions are a major source of systematic error in galaxy inclination estimates, especially for dwarf Irregulars. It is an asymmetric error: a galaxy with a mild oval distortion can be inferred to have an inclination (i > 0) even when seen face-on (i = 0), but it can never have an inclination more face-on (i < 0) than exactly face-on. This is one of the common drivers of claims that low mass galaxies fall off the Tully-Fisher relation. (Other common problems include a failure to account for gas mass, bad distance estimates, or not measuring Vflat.)

$In a field with abominable terminology, what is meant by a “dwarf” galaxy is one of the worst offenders. One of my first conference contributions thirty years ago griped about the [mis]use of this term, and matters have not improved. For this particular figure, Benevides et al. (2025) define it to mean galaxies with stellar masses in the range 9 < log(M*) < 9.5, which seems big to me, but at least it is below the mass of a typical L* spiral, which has log(M*) ~ 10.5. For comparison, see Fig. 6 of the review of Bullock & Boylan-Kolchin (2017), who define “bright dwarfs” to have 7 < log(M*) < 9, and go lower from there, but not higher into the regime that we’re calling dwarf right now. So what a dwarf galaxy is depends on context.

%Note that the intrinsic distribution peaks below q = 0.2, so arguably one should adopt as typical the mode of the distribution (q ≈ 0.17).

&Another way in which even the thin simulated objects are not representative of reality is that they are dynamically hot, as indicated by the κrot parameter printed with the image. This is the fraction of kinetic energy in rotation. One of the more favorable cases with κrot = 0.67 corresponds to V/σ = 2.5. That happens in reality, but higher values are common. Of course, thin disks and dynamical coldness go hand in hand. Since the simulations involve a lot of mergers, the fraction of kinetic energy in rotation is naturally small. So I’m not saying the simulations are wrong in what they predict given the input physics that they assume, but I am saying that this prediction does not match reality.
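For the record, that conversion assumes κrot compares ordered rotation to the total kinetic energy with an isotropic dispersion (σ per dimension), a simplification of how the quantity is actually measured in the simulations:

\[
\kappa_{\rm rot} \approx \frac{V^2}{V^2 + 3\sigma^2} \quad\Longrightarrow\quad \frac{V}{\sigma} = \sqrt{\frac{3\,\kappa_{\rm rot}}{1 - \kappa_{\rm rot}}} = \sqrt{\frac{3 \times 0.67}{0.33}} \approx 2.5.
\]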

*$The fraction of thin galaxies observed by DESI is slightly higher than found in the other surveys. Having looked at all these data, I am inclined to suspect the culprit is image quality: that of DESI is better. Regardless of the culprit for this small discrepancy between surveys, thin disks are much more common in reality than in the current generation of simulations.

*#There seems to be a limit to how cold disks get, with a minimum velocity dispersion around ~7 km/s observed in face-on dwarfs when the appropriate number, according to Newton, would be more like 2 km/s, tops. I remember this number from observations in the ’80s and ’90s, along with lots of discussion then to the effect of how can it be so? but it is the new year and I’m feeling too lazy to hunt down all the citations so you get a meme instead.


&%In an absolute sense, all other things being equal, which they’re not, disks do become thicker to lower surface brightness in both Newton and MOND. There is less restoring force for less surface mass density. It is the relative decline in restoring force and consequent thickening of the disk that is much more precipitous in Newton.

**For the numerically curious, these models are exponential disks with surface density profiles Σ(R) = Σ0 e^(−R/Rd). Both models have a scale length Rd = 3 kpc. The HSB has Σ0 = 866 M☉ pc⁻²; this is a good match to the Eilers et al. (2019) Milky Way disk; see McGaugh (2019). The LSB has Σ0 = 100 M☉ pc⁻², which corresponds roughly to what I consider the boundary of low surface brightness, a central B-band surface brightness of ~23 mag arcsec⁻². For the velocity dispersion profile I also assume an exponential, with scale length 2Rd (that’s what’s supposed to happen). The central velocity dispersion of the HSB is 100 km/s (an educated guess that gets us in the right ballpark) and that of the LSB is 33 km/s – the mass is down by a factor of ~9, so the velocity dispersion should be lower by a factor of √9. (I let it be inexact so the solid and dashed Newtonian lines wouldn’t exactly overlap.)

These models are crude, being single-population (there can be multiple stellar populations each with their own velocity dispersion and vertical scale height) and lacking both a bulge and gas. The velocity dispersion profile sometimes falls with a scale length twice the disk scale length as expected, sometimes not. In the Milky Way, Rd ≈ 2.5 or 3 kpc, but the velocity dispersion falls off with a scale length that is not 5 or 6 kpc but rather 21 or 25 kpc. I have also seen the velocity dispersion profile flatten out rather than continue to fall with radius. That might itself be a hint of MOND, but there are lots of different aspects of the problem to consider.
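To put numbers on how much deeper the LSB toy model sits in the MOND regime, here is a minimal sketch using the footnote’s parameters and the same slab approximation and “simple” interpolation function as before (again a caricature, not a self-consistent disk model):

```python
import numpy as np

G, a0 = 4.301e-3, 3.7   # pc (km/s)^2 / Msun ; (km/s)^2 / pc

def mond_boost(Sigma0, Rd=3000.0):
    """Vertical restoring-force boost nu = g/gN along an exponential disk
    Sigma(R) = Sigma0 exp(-R/Rd), with gN ~ 2 pi G Sigma(R) (slab
    approximation) and nu(y) = 0.5 + sqrt(0.25 + 1/y), y = gN/a0."""
    R = np.linspace(0.0, 4.0 * Rd, 5)              # 0 to 4 scale lengths
    y = 2.0 * np.pi * G * Sigma0 * np.exp(-R / Rd) / a0
    return 0.5 + np.sqrt(0.25 + 1.0 / y)

for name, S0 in (("HSB", 866.0), ("LSB", 100.0)):
    print(name, "nu at R/Rd = 0,1,2,3,4:", np.round(mond_boost(S0), 2))
# The LSB disk gets a larger boost at every radius, and both disks get
# more MONDian with increasing radius, which is why the flaring is much
# milder than in the Newtonian case.
```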

52 thoughts on “Very thin galaxies”

  1. This post highlights a basic but often overlooked point: a gravitational potential is not something to be assumed in advance. It is an empirical input, inferred from how a system actually behaves.

    In the Solar System, observed motions genuinely lead to a Newtonian 1/r potential dominated by a central mass. That inference is straightforward. Thin galaxies are different. Their extreme planarity and long-term dynamical coldness already tell us that the effective gravitational potential governing them is not Newtonian in the same sense.

    What this calls for is not ad hoc fixes, but a more generic GR framework: one in which the empirically inferred potential is first accepted, then embedded into a local spacetime metric satisfying a contextual field equation, as done in relativistic MOND-like extensions. Consistency conditions are then imposed only where different local descriptions overlap, exactly as in an atlas construction in differential geometry.

    Imposing a predetermined Newtonian potential and forcing galaxies to comply reverses the logic. If observation is taken seriously, the potential must be read from the galaxy, then geometrized locally—not imposed globally in advance.

  2. “&%In an absolute sense, all other things being equal, which they’re not, disks do become thicker to lower surface brightness in both Newton and MOND. There is less restoring force for less surface mass density. It is the relative decline in restoring force and consequent thickening of the disk that is much more precipitous in Newton.”

    So I assume the MOND model uses or agrees with the inferred rotational velocities from Doppler shift, because that’s what it is designed to do. But what do you do for the Newton model? What if the inferred velocities that MOND explains are not real? Then the Newton model should probably not assume the same kinetic energy. Which way was it modeled?
    Thank you!

    1. These models only illustrate the difference in the disk potential in the vertical direction, which is usually assumed to be separable from the potential in the plane. That assumption is made for convenience, and is one of many details I have intentionally glossed over here. I’ve made no attempt to build a fully self-consistent disk model for this toy illustration.

      1. Thanks Stacy. I meant to more generally ask if any conclusion that the ubiquitous thin disks are much more stable in MOND than in Newtonian gravity might require one big assumption – that the inferred velocities are real velocities which represent the intrinsic kinetic energy of the observed baryons.

        If they are not real velocities, then modeling Newtonian gravity with potentially fictitious kinetic energy will not provide natural results. Consequently, any stability estimates for thin disks in Newtonian gravity could be unnaturally skewed.

        In the comments of your prior post someone pondered why MOND involves a geometric mean modification to Newton. Well, if there were fictitious velocities near and into the MOND regime, then the fictitious acceleration would follow from the square of the fictitious velocity. Seems like a real possibility to me given all that you have elucidated over the years regarding this mysterious DM vs MOND paradigm.

        1. The velocities are measured from the Doppler effect in most cases and presumed to result from real kinematic motion, yes. If that’s wrong, we have bigger problems than dark matter or MOND.

          Note that velocities in the Milky Way and in some nearby dwarf satellites are measured from proper motions – we see individual stars moving on the sky over time. There’s nothing fictitious about that, and it leads to the same results.

          1. I expect that the fictitious part enters only in the MOND regime, which is why only a modification is required and not a complete rewrite of astrophysics.
            However, this would imply that (at least in one view) the big bang itself contains fictitious elements. At a minimum there would have to be a complementary view of cosmology where the big bang contains fictitious elements. So indeed that would need to be a pretty big rewrite. It could be the paradigm shift we are looking for though, if only for explanatory purposes.

          2. Do the nearby dwarf galaxies give flat rotation curves purely from the proper motions of stars – do the two methods corroborate each other there? This is of interest if our galaxy may be giving a somewhat different picture, coming out of a different method.

              1. I don’t know how easy it would be to test whether the observed Doppler effects for galactic baryons in the MOND regime represent fictitious or real kinematic baryon velocities, but I see no harm in the proposal.

                Here a group discusses what I would consider fictitious elements of the CMB. They construct a fictitious observer and analyze what they call FOTO (Finger Of The Observer).

                “The basic idea is to transform the individual galaxy redshifts of a catalogue (but not their angular positions and luminosities) to the rest frame of a fictitious observer that moves with a pre-selected peculiar velocity vector. The resulting power-spectrum monopole differs from the one measured by a comoving observer because of three additive corrections — see eq. (5.7) — whose sum we dub the B-FOTO signal (short for boosted FOTO). Two of them are the standard FOTO signal and an analogous term depending on the square modulus of the fictitious velocity, the third one, instead, scales with the scalar product of the true observer velocity and the fictitious one.”
                https://arxiv.org/abs/2412.03953

  3. The real tragedy here is all the hours you’ve had to waste over the years dealing with this dark matter BS. Dark matter is not there; this is a well documented fact. There is absolutely no empirical evidence supporting the existence of dark matter and it has been actively sought for 40 years; it’s not there. No further argument is necessary; dark matter funding should be reallocated to more promising avenues of investigation. But what are the chances of that sensible revolution? I do not envy you your position.

    1. There is an epistemological problem at the root of this. There is ample empirical evidence for acceleration discrepancies. Those could be due to dark matter, or they could be due to a failure of the equations that lead us to infer the existence of dark matter. The latter option seemed too radical 40 years ago, so we fell into the habit of mislabeling the issue as the dark matter problem. That presupposes the answer, so that’s the way most scientists think about it.

      I’ve spent considerably more effort demonstrating the ways in which dark matter does not provide a satisfactory explanation for the observations than I have actually working on MOND. I don’t resent that; it was a necessary step to convince myself that the dark matter paradigm was in genuine difficulty. That was a prerequisite for me to take MOND seriously, so I don’t object if other scientists feel the same way.

      What I do object to is a communal failure of objectivity. Rather than engage with the problems I’ve pointed out as existential, people have instead engaged in a kind of divide and conquer, claiming to solve some subset of the problem (e.g., rotation curve diversity is attributed to feedback) without simultaneously explaining why this so-called diversity correlates with surface brightness. Papers that do this typically evince no awareness of the latter observational fact, let alone explain it, yet frequently assert that the problem is solved without providing any enlightenment, merely the semblance of understanding.

      I can’t stop people from fooling themselves, as Feynman warned theorists are predisposed to do: https://www.brainyquote.com/quotes/richard_p_feynman_137642

      1. When a theory refuses to be flexible, as when claiming to be universal, that flexibility does not disappear, it is displaced into unseen matter, tuned processes, or inflated ontology. That theoretical rigidity and lack of objectivity go hand in hand.

        1. The cause of the theoretical rigidity is the lack of predictive power. If it predicted wrongly, theorists would switch more easily. But since it is not predicting precisely, they need to decide to switch to MOND based on the use of precise predictions. Dark matter would be harmful to our ability to predict dynamics, based on accurate measurements. I’m very happy it doesn’t seem to exist! Thanks to Milgrom, Stacy and other MOND researchers we can enjoy this stronger predictive ability now.

          1. Any theory’s predictive and explanatory power is always limited.

            Scientific theories are essentially compression tools for regularities in reality. They rely on a predetermined structure, assumed symmetries, and simplifications. Those assumptions automatically define the context where the theory can work well. Compact and elegant descriptions are most accurate only in situations with strong regularities and symmetries.

            For example, the original Einstein Field Equations reproduce the Newtonian gravitational potential and implicitly assume scale invariance in that regime. MOND showed empirically that this Newtonian regime is not scale invariant. Relativistic MOND extensions are constructed to reproduce a different, observed gravitational potential, typically assuming radial symmetry.

            Galaxy clusters clearly do not exhibit simple galactic symmetries. In that context, radial dependence and symmetry lose meaning, and the assumptions built into MOND no longer apply; at larger hierarchical levels, structural properties depart even further from those present in individual galaxies, and the observed effective gravitational potential departs even more from the MOND regime, requiring a different contextual spacetime metric to faithfully encode the dynamics.

            So the lesson cuts both ways: assuming MOND or any of its relativistic extensions to be universal falls into the same trap as assuming GR is universal. Predictive power always comes with contextual limits, and objectivity requires recognizing where those limits lie.

  4. I supervised a study into this issue, comparing GAMA to TNG50:
    https://doi.org/10.3847/1538-4357/ac46ac

    Thanks to both my efforts and referee comments, we carefully explored the impact of resolution. Higher resolution does lead to thinner discs, but we concluded that further increases in resolution would not be sufficient to remove the discrepancy pointed out by Stacy. I think one of the papers that cites the above study claimed there is good agreement between the observed projected aspect ratios and those in mock observations of LCDM simulations, but I am not familiar with the details.

  5. For a long time I was enamored of the idea that acceleration signals detected by a group of physicists, led by Martin Tajmar at the Austrian Research Center (ARC), might have a connection to MOND’s acceleration threshold a0. Tajmar’s team conducted 250 experimental runs between 2003 and 2006, rapidly spinning up a niobium ring immersed in liquid helium. Accelerometers detected faint signals that, if real, were a remarkable 30 orders of magnitude larger than the gravitomagnetic field General Relativity permits for a spinning mass. I was so intrigued by this report that I asked my boss at work if I could conduct experiments after hours, and he agreed. Unfortunately, my liquid nitrogen experiments with YBCO superconductors yielded no anomalous signals.

    The ARC team was working on the assumption that their signals were the result of an enhanced gravitomagnetic field due to the graviton gaining mass, just as a photon gains mass in a superconductor according to standard theory. But I wondered if perhaps on spin-up and spin-down the superconductor was emitting gravitons at an energy level and flux sufficient to be picked up by the accelerometers. Extrapolating I wondered if MOND’s threshold of a0 marked a regime, (connected to the accelerated expansion rate of the Universe), where perhaps graviton emission (on the acceleration axis) of every particle ceased, which perhaps lowers the inertia of the particle. This, of course, would obviate the need for any dark matter. Assuming this mechanism were correct it would explain as well the “should boost the vertical restoring force” condition that allows for thin discs in MOND. I know this is a bit vague and simplistic, but it’s an attempt to deduce a mechanism underlying MOND.

  6. “In MOND, there is no dark matter halo, but the modified force should boost the vertical restoring force as well as the radial force. One thus expects thinner disks in MOND than in Newton.” Can computer models be manipulated to cast doubt on MOND? I have suggested that dark-matter-compensation-constant = (3.9±.5) * 10^–5, slightly contradicting general relativity theory.
    Consider “Support for the thermal origin of the Pioneer anomaly” by Slava Turyshev, Viktor Toth, et al., 2012
    https://arxiv.org/abs/1204.2507
    The science team constructed a “finite-element thermal model” that supposedly completely explains the Pioneer anomaly. I attempted to convince members of the science team that their theory should be tested in a vacuum chamber on planet Earth. My guess is that the Pioneer anomaly is a real failure of general relativity theory and lends support to FUNDAMOND — am I wrong?

    1. The anomalous acceleration of Pioneer is tiny in absolute terms, hence the concern about subtle thermal effects. However, by the standards of MOND, it is surprisingly large, as Pioneer is nowhere near the MOND radius of the sun. Consequently, I would not expect such a large effect. God is shouting loudly at us if it is.

      1. The acceleration from New Horizons is in the same direction as the acceleration of the Pioneer probes. In both cases, the same model is used to ‘correct’ the offset. It is possible that this is a combined thermal and MOND effect.

        Pluto does not look icy: it looks terrestrial – just like Titan and half of Iapetus – so I think God is screaming. The Saturn/Titan mission planned for the next decade could resolve this completely (tholins/sulfates or sand) if it carries the right instrumentation.

  7. In the past you’ve mentioned data that loosely suggests gravity in the vertical direction lands somewhere between Milgrom and Newton. Perhaps there’s less than might be expected in MOND, but still enough to create the observed thin disks.

    Looking at that, the vertical force may be about the fields of the individual stars, which reach their own MOND radius a little away from the disk. By contrast, the radial force is more about the field of the galaxy as a whole. How are vertical velocities calculated in MOND – does one work from the field of the galaxy, or the fields of the stars?

    Looking at the conceptual side via the PSG interpretation for MOND (which unlike LCDM, allows the emitted ‘DM’ to be in a flat disk), perhaps with individual fields combining, issues similar to those with the EFE arise, and might need to be taken into account. This can apply with or without the PSG picture – speculatively, perhaps there’s a need to sum over many individual fields using MOND, and put in some general term that represents the effect of the EFE.

    1. The vertical force in MOND is theory dependent: it might be different in modified inertia than in modified gravity theories. So the difference you allude to could be a clue in that direction, but I think there is a lot to sort out first.

  8. Re ‘Thin galaxies do not occur naturally in the hierarchical mergers of LCDM (e.g., Haslbauer et al. 2022), where one would expect a steady bombardment by merging masses to mess things up.’
    Do galaxies (thin or otherwise) gain mass under MOND in some other way than messy hierarchical merging?

      1. With the assumption of scale invariance, another assumption is quietly smuggled in: time-scale invariance.

        Cosmic hierarchies are also timeframe hierarchies. Each gravitational regime emerges only when a system has enough time to organize, and higher-level regularities are invisible at shorter timescales.

        This naturally leads to temporal decoupling, which explains much of the apparent regime blindness. Star systems evolve so rapidly that they are effectively blind to galaxy-scale organization, while galaxies encode slow, collective regularities that cannot be seen in short-timescale dynamics.

        Once timeframes are taken in consideration, contextual gravity stops looking “exotic”.

        1. In Special Relativity, energy is the time component of four-momentum, so higher energy directly corresponds to slower proper-time evolution.

          This already encodes a deep relation between energy and time: short timescales correspond to high energy densities, long timescales to low energy densities, and different energy regimes genuinely experience time differently. As a result, energy decoupling necessarily implies timeframe decoupling.

          This is why microscopic, high-energy processes evolve rapidly, while macroscopic structures evolve slowly.

          Hierarchical cosmic structures are simply a large-scale manifestation of this same principle: higher levels of organization only emerge at lower effective energy densities and over much longer times.

          Each higher level of the cosmic hierarchy (star → galaxy → cluster → …) effectively acts as a low-pass filter. As energy becomes contextualized into larger, bound structures, the high-frequency information associated with individual components is averaged out, leaving only slow, collective dynamics. If mainstream cosmology insists on describing all levels with a single spacetime metric, it is effectively using a high-frequency tool to probe a low-frequency structure.

          The resulting mismatches are what we call “tensions” — missing mass is just one prominent example.

          1. In a Newtonian, static, scale- and time-invariant framework, a finite disk leads to a 1/r far-field potential. But that conclusion collapses once those assumptions are dropped.

            If MOND corresponds to a “low-frequency”, long-timescale gravitational closure of disk systems, then trying to derive it from a static Newtonian disk is as misguided as deriving thermodynamics from a frozen molecular snapshot.

            Ironically, in trying so hard to protect General Relativity, cosmologists ended up not applying its most basic principles consistently. By enforcing scale invariance, they quietly reintroduced a Newtonian decoupling of space and time, precisely what Relativity was meant to eliminate. The resulting tensions are not failures of GR per se, but consequences of abandoning its general, contextual logic.

            1. This is a minimal, self-contained toy simulation of a disk galaxy built from concentric rings. The mass profile is fixed. A fast Newtonian field, G_n, is computed immediately from the enclosed mass, while a separate, “slow” collective field, G_e, evolves over time. The total acceleration is g = G_n + G_e. The purpose is not realism, but rather isolating the time-scale and coupling assumptions that are usually implicit in simulations.

              This simulation introduces special-relativistic time bookkeeping: each ring evolves using its own proper time, while the observer uses a single coordinate time. This does not change the force law or add new physics; it only changes how fast different radii relax.

              Consequently, MOND-like behavior appears as a long-time, low-acceleration attractor rather than an instantaneous modification. This highlights how regime separation can arise from time-scale structure alone.

              The entire simulation is self-contained in an HTML file that can be run on a smartphone or PC. The code was generated using ChatGPT and contains ample commentary.

              Link to the self-contained HTML file:

              https://drive.google.com/file/d/1neDl7Jwy7XmpvM9oAU8pXvKPajvspmKF/view?usp=drivesdk

  9. Since you linked to the “What we have here is a failure to communicate” post, I want, first, to point out that there is a spelling error in the name of Bekenstein, even if it’s a post older than 2 years: when you introduce Bekenstein’s TeVeS you spelled his name as Beklenstein.
    The second point, actually related to the same older post: I recently checked the blog of Pavel Kroupa (the trigger was seeing Indranil Banik commenting here), and Pavel Kroupa talks about no longer trusting the CMB data. I tracked this to some older posts of his (I would highlight post 104) and from what I can tell he is saying that the CMB spectrum as presented in papers is actually built assuming CDM. That is, the processing of the raw data to get the across-sky variations in the CMB temperature assumes LCDM.
    This would mean that maybe the second to third peaks ratio would change if the data processing would be done with a no-CDM assumption and so your initial prediction might not be that off.
    I’m not sure if such a task (reprocessing the data) could be done meaningfully by the few researchers that are working with no-CDM models, but it will be interesting to see if there are differences in the resulting CMB power spectrum and what would be the effects on the predictions.
    On the current post’s subject, I knew that Pavel Kroupa’s blog also talked about thin galaxies, and yes, I saw post 66 where it is said that the observed fraction of thin disk galaxies is not compatible with LCDM (and Indranil Banik is a co-author of the paper reported there).
    But here you go deeper in some details (e.g. why/how interactions lead to thicker disks) so I can get a better understanding of the argument (at least that’s my case), so thank you for your, as usual, good post!

    1. The standard interpretation of the CMB certainly assumes CDM, but I don’t think that is necessary for the data processing. That doesn’t mean people didn’t do it that way, but I don’t see how that would preclude finding a very low CDM density if that’s what was in those data.

      The issue, to me, is whether CDM provides a unique interpretation of the CMB data (as many cosmologists seem to presume) or if, as with everything else, there is merely evidence of a discrepancy from pure GR with baryons only that can be explained in a multiplicity of ways (e.g., Skordis & Zlosnik’s AeST).

      1. If in the future the AeST theory rivals or displaces LCDM, couldn’t the LCDM supporters just say, “Well we had cold dark matter and now you have a ghost condensate. We had dark energy and now you have an aether. Well, have you detected the condensate yet? Have you found the source of the aether?”
        And I suppose eventually people will just decide that most likely none of those things were real anyway.

        Or maybe someone will come along and point out that the fully relativistic redshift-space distortion terms look an awful lot like relativistic aether, scalar and vector term corrections, but here they are Modifying the Newtonian Dynamics of the observer. That perspective seems like it could be much more powerful, and more realistic, in my opinion.

        1. Sure, they could say that. I share this concern. My point is that the CDM interpretation of the CMB is not unique, which seems to be the entire gist of their argument against considering MOND: it has to be CDM and no other interpretation is possible. One could just as well – indeed, more correctly – argue that the predictions of MOND are unique and the dark matter interpretation is not viable.

          MOND by itself is not a complete answer; there has to be something more to it. I don’t know what that is. In contrast, an answer that brooks no debate is invisible mass. It does claim to be a complete answer for all problems astronomical; the only mystery is what it is.

          1. Yes, I certainly agree that there appears to be no uniqueness to either description as they stand today. I could even imagine a place where both coexist in some sense.

            If someone said “there must be dark matter” I would say OK, but can I just move it all over there into the background at the boundary of the observable universe? And if someone said “but there has to be MOND” I would say OK, but can I just embed it into the foreground at the observer’s boundary?

            Now they are both connected and it’s perfectly ok if any projection of dark matter or modified gravity into our observable universe is considered somewhat fictitious.

          2. I think you are making a very important argument, and maybe one that will lead to a breakthrough in our understanding of what nonlocality actually means.

            It occurs to me that there may be a metaphysical connection between the missing dark sector of LCDM or the nonlocality of AeST MOND theories and my argument that some of the velocity or temperature components from kinematic and thermodynamic measurements using redshift and anisotropy surveys are fictitious, and that is what leads to the inference of dark matter/dark energy or modified gravity with the ghost condensate/aether.

            It is arguably true that if one accepts GR and QM then one has to give up locality. So we are willing to accept that we are left with GR, QM and Nonlocality.

            But what does Nonlocality really mean?

            I would argue that Nonlocality describes the effects that arise when the observational entropy is unassigned or transformed away from the observer. The It from Bit has to go somewhere, and if it is not assigned to the observer, it becomes nonlocal.

            1. The trouble with accepting GR and QM is that they are inconsistent with each other, even mathematically, let alone physically. So if you accept GR and QM then you can believe anything you like, including non-locality. I prefer to believe in locality, and use it to analyse what is wrong with QM and GR, not the other way round.

  10. …“the CMB spectrum as presented in papers is actually built assuming CDM.”
    With the WMAP data, the third-year map did not properly overlay with the first-year signatures. The scientists assumed pixel-by-pixel temperature variance in the probe’s camera and smoothed about their best estimates of the baseline. This was part of the reason that the ESA decided to launch another background probe with higher resolution and more importantly, absolute calibration. The new map introduced the unresolved Hubble tension.

    In assuming the radiation is background, none of these missions anticipated the presence of the ‘little red dots’ that permeate well past the CDM predictions of the sentinel event, if there ever was such a thing. It is exciting, because we are at the cusp of transition from 20th century cosmology and there is much to learn.

  11. About measurements of G, I have something that may be interesting. The constant is hard to measure, so some of the differences between measured values are simply noise. If you find a pattern in the numbers, it will probably be a loose one – but I did. In 2014 I found what looked like a loose correlation between values for G and the latitude at which the measurement was done. This was interesting, because the flyby anomaly, which I was also studying at the time, definitely has a dependence on latitude – the NASA team that found an empirical equation for the anomaly had incoming and outgoing latitudes of the probe as terms in the equation, also the Earth’s radius – it looked very like something to do with the Earth’s rotation.

    I found a chart of G measurement results, they went from 6.623e-11 to 6.657e-11, and got the latitudes for the experiments, which went from about 35 to 49. There seemed to be a clear pattern of higher latitude, higher value for G. I never published that, as it wasn’t conclusive enough, but I did publish the flyby anomaly conclusions, in a book and a paper, as they were clearer.

    The basic idea from PSG is, the Earth is rotating, while emitting a refractive medium, which thins out as it moves away, and causes gravity (via the nature of matter at a very small scale). The rotation speed of the Earth varies with latitude, and this affects the emission of the medium – right near the Earth the medium is distorted out of shape, and that affects gravity. The flyby anomaly only appeared in probes that passed very near the Earth, further out there was no effect.

    One of many interesting points was that two separate theories (one from Hafele, of Hafele and Keating) found they could model the flyby anomaly using a lightspeed delay to the effect of gravity. This is only very near the Earth – if it went further out the solar system wouldn’t be stable. But even when the flyby anomaly reappeared years later with the Juno probe near Jupiter, a lightspeed delay on the effect of gravity was one way to get to the anomaly. So something was travelling at c, and seemed closely associated with gravity – there aren’t many things that could be, but in PSG it’s the emitted medium. I hope that’s of interest.

    1. I found a correlation between the earth flyby anomalies and the position of the moon; in that the sign of the anomalies flips when the moon is waning compared to waxing. The difference is too small to hang a hat on, but a necessary effect for at least one alternative theory of gravity.

  12. could you focus on

    Comparison of MOND and Verlinde’s emergent gravity in dwarf spheroidals

    Authors: Youngsub Yoon, Sanghyeon Han, Ho Seong Hwang

    Verlinde’s emergent gravity is in close agreement with the observed values. In the present work, we additionally confirm that, for 21 of the 23 samples examined, Verlinde’s emergent gravity follows the trend of the observed values within each dwarf spheroidal more closely than MOND. Combining the statistical significance of all the 23 samples, ranging from to 3.41σ, we conclude that Verlinde’s emergent gravity is favored over MOND at 5.2σ.

    1. Emergent gravity mimics MOND in the low acceleration regime, so I’m not sure how to tell the difference. Perhaps it is a lack of the external field effect; I don’t recall offhand if that is a thing in emergent gravity. That’s necessary in many of the ~5 dozen dwarf spheroidals I’ve looked at, which would seem to contradict what they’re saying.

      Emergent gravity fails around a0 where it differs perceptibly from MOND: https://arxiv.org/abs/1702.04355

      1. Yeah, these are the same dwarfs I’ve analyzed previously. Some are genuine problems, as I discuss in https://tritonstation.com/2018/09/12/dwarf-satellite-galaxies-ii/ and https://tritonstation.com/2025/10/14/non-equilibrium-dynamics-in-galaxies-that-appear-to-have-lots-of-dark-matter-ultrafaint-dwarfs/. However, many of them are well-known successes of MOND, like the dwarfs of Andromeda. https://tritonstation.com/2018/09/14/dwarf-satellite-galaxies-iii-the-dwarfs-of-andromeda/. The extra thing they are doing here is to analyze the radial variation of the velocity dispersion within each dwarf. I’ve tried that and obtained similar results doing the same kind of analysis, but that is not adequate: one really needs to do an orbit reconstruction like a Jeans analysis. See the numerical simulations of https://iopscience.iop.org/article/10.3847/1538-4357/835/2/233 in which the flat velocity dispersion profiles of the better observed dwarfs in the deep MOND regime come out naturally.

          1. No. Looking at the paper you cite, they do similar things, but neither is quite right. What I’m saying is that the modeling approach of Yoon et al. is not adequate – that’s what I’ve tried myself. If you actually simulate the motions of stars in these dwarfs with MOND, the correct answer falls right out. That’s what Alexander et al. showed.

  13. Sorry, I left out a 7 in the values for G, they went from 6.6723e-11 to 6.6757e-11. (There was what seemed to be a mathematical pattern there, but I couldn’t link it directly to anything.) I also didn’t mention that the NASA formula for the flyby anomaly contained the Earth’s angular rotation speed, as well as the radius and incoming and outgoing latitudes.

  14. I meant to say, two measurements of G at very similar latitudes (Seattle 2000 and Zurich 2006) got very similar results.

    1. Measured values of G are a good historical example of confirmation bias. Everybody kept getting the same number until they didn’t. In more modern times there is a tension in, like, the sixth place of decimals; I don’t have any insight as to why that might be. A latitude dependence could be checked by moving the same apparatus to a different laboratory.

  15. I would really love to know whether you have any thoughts on Pavel Kroupa’s thoughts challenging the validity of the orthodox CMB/Planck interpretation when applied outside of LCDM. Could there be something there or do you suspect he is tilting at windmills? Seems like that would be something that might actually convince dark matter believers to take a genuine second look.

    https://darkmattercrisis.wordpress.com/2025/12/26/112-the-meaning-of-the-cosmic-microwave-background-cmb-for-cosmology-and-the-role-early-galaxies-have-in-this-matter/

    1. I’d be very surprised if the CMB signal is not cosmic in origin, and I doubt very much that all of it can be attributed to dust in early star forming galaxies. However, it is an intriguing possibility that such objects could contribute some to the background radiation; even a percent would screw up the best-fit LCDM parameters.

  16. Great post — and a clean empirical lever. Disk thickness is an unusually orthogonal constraint…it probes the vertical restoring field and long-term heating history (not just how we fit V(R) in the plane).

    If you had to pick the single “next” observable that’s hardest to explain away with feedback/merger-history details, would you prioritize HI flaring, stellar σ_z(R) (or σ_z profiles), or the intrinsic c/a distribution itself?

    Great work, as always.

    1. These things are coupled, so it is hard to rank one above another. However, I would emphasize velocity dispersion profiles, especially those in the low acceleration regime – both of bright galaxies that extend far out, and low surface brightness galaxies that are low acceleration everywhere. It’s one thing to explain the Oort discrepancy locally; it’s another to explain it at all radii in all galaxies.
