People have been asking me about comments in a recent video by Sabine Hossenfelder. I have not watched it, but the quote I’m asked about is “the higher the uncertainty of the data, the better MOND seems to work” with the implication that this might mean that MOND is a systematic artifact of data interpretation. I believe, because they consulted me about it, that the origin of this claim emerged from recent work by Sabine’s student Maria Khelashvili on fitting the SPARC data.
Let me address the point about data interpretation first. Fitting the SPARC data had exactly nothing to do with attracting my attention to MOND. Detailed MOND fits to these data are not particularly important in the overall scheme of these things as I’ll discuss in excruciating detail below. Indeed, these data didn’t even exist until relatively recently.
It may, at this juncture in time, surprise some readers to learn that I was once a strong advocate for cold dark matter. I was, like many of its current advocates, rather derisive of alternatives, the most prominent at the time being baryonic dark matter. What attracted my attention to MOND was that it made a priori predictions that were corroborated, quite unexpectedly, in my data for low surface brightness galaxies. These results were surprising in terms of dark matter then and to this day remain difficult to understand. After a lot of struggle to save dark matter, I realized that the best we could hope to do with dark matter was to contrive a model that reproduced after the fact what MOND had predicted a priori. That can never be satisfactory.
So – I changed my mind. I admitted that I had been wrong to be so completely sure that the solution to the missing mass problem had to be some new form of non-baryonic dark matter. It was not easy to accept this possibility. It required lengthy and tremendous effort to admit that Milgrom had got right something that the rest of us had got wrong. But he had – his predictions came true, so what was I supposed to say? That he was wrong?
Perhaps I am wrong to take MOND seriously? I would love to be able to honestly say it is wrong so I can stop having this argument over and over. I’ve stipulated the conditions whereby I would change my mind to again believe that dark matter is indeed the better option. These conditions have not been met. Few dark matter advocates have answered the challenge to stipulate what could change their minds.
People seem to have become obsessed with making fits to data. That’s great, but it is not fundamental. Making a priori predictions is fundamental, and has nothing to do with fitting data. By construction, the prediction comes before the data. Perhaps this is one way to distinguish between incremental and revolutionary science. Fitting data is incremental science that seeks the best version of an accepted paradigm. Successful predictions are the hallmark of revolutionary science that make one take notice and say, hey, maybe something entirely different is going on.
One of the predictions of MOND is that the RAR should exist. It was not expected in dark matter. As a quick review of the history, here is the RAR as it was known in 2004 and now (as of 2016):

The big improvement provided by SPARC was a uniform estimate of the stellar mass surface density of galaxies based on Spitzer near-infrared data. These are what are used to construct the x-axis: gbar is what Newton predicts for the observed mass distribution. SPARC was a vast improvement over the optical data we had previously, to the point that the intrinsic scatter is negligibly small: the observed scatter can be attributed to the various uncertainties and the expected scatter in stellar mass-to-light ratios. The latter never goes away, but did turn out to be at the low end of the range we expected. It could easily have looked worse, as it did in 2004, even if the underlying physical relation was perfect.
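For concreteness, the empirical relation in question is well described by a single function of gbar with one acceleration scale. A minimal sketch in Python, assuming the functional form and a0 = 1.2e-10 m/s^2 from the 2016 RAR paper:

```python
import math

A0 = 1.2e-10  # m/s^2; the acceleration scale fitted in 2016

def g_obs(g_bar, a0=A0):
    """Radial acceleration relation: observed centripetal acceleration
    as a function of that predicted by Newton for the baryons alone."""
    return g_bar / (1.0 - math.exp(-math.sqrt(g_bar / a0)))

# Limiting behavior: Newtonian at high acceleration,
# g_obs -> sqrt(g_bar * a0) in the low-acceleration regime.
high = g_obs(100 * A0)    # essentially equal to g_bar
low = g_obs(1e-4 * A0)    # close to sqrt(g_bar * a0) = 1e-2 * A0
```

The single parameter a0 is what makes the negligible intrinsic scatter remarkable: there is no per-galaxy freedom in this function.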
Negligibly small intrinsic scatter is the best one can hope to find. The issue now is the fit quality to individual galaxies (not just the group plot above). We already know MOND fits rotation curve data. The claim that appears in Dr. Hossenfelder’s video boils down to dark matter providing better fits. This would be important if it told us something about nature. It does not. All it teaches us about is the hazards of fitting data for which the errors are not well behaved.
While SPARC provides a robust estimate of gbar, gobs is based on a heterogeneous set of rotation curves drawn from a literature spanning decades. The error bars on these rotation curves have not been estimated in a uniform way, so we cannot blindly fit the data with our favorite software tool and expect that to teach us something about physical reality. I find myself having to say this to physicists over and over and over and over and over again: you cannot trust astronomical error bars to behave as Gaussian random variables the way one would like and expect in a controlled laboratory setting.
Astronomy is not conducted in a controlled laboratory. It is an observational science. We cannot put the entire universe in a box and control all the variables. We can hope to improve the data and approach this ideal, but right now we’re nowhere near it. These fitting analyses assume that we are.
Screw it. I really am sick of explaining this over and over, so I’m just going to cut & paste verbatim what I told Hossenfelder & Khelashvili by email when they asked. This is not the first time I’ve written an email like this, and I’m sure it won’t be the last.
Excruciating details: what I said to Hossenfelder & Khelashvili about the perils of rotation curve fitting on 22 September 2023, in response to their request for comments on the draft of the relevant paper:
First, the work of Desmond is a good place to look for an opinion independent of mine.
Second, in my experience, the fit quality you find is what I’ve found before: DM halos with a constant density core consistently give the best fits in terms of chi^2, then MOND, then NFW. The success of cored DM halos happens because the cored halo is an extremely flexible fitting function: the core radius and core density can be traded off to fit any dog’s leg, and are highly degenerate with the stellar M*/L. NFW works less well because it has a less flexible shape. But both work because they have more parameters [than MOND].
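To see where that flexibility comes from, compare the standard circular-velocity formulas for the two halo types; a sketch with illustrative (not fitted) parameter values:

```python
import math

G = 4.301e-6  # gravitational constant in kpc * (km/s)^2 / Msun

def v_iso(r, rho0, rc):
    """Pseudo-isothermal (cored) halo circular velocity [km/s].
    The central density rho0 [Msun/kpc^3] and core radius rc [kpc]
    can be traded off against each other almost freely."""
    return math.sqrt(4 * math.pi * G * rho0 * rc**2
                     * (1 - (rc / r) * math.atan(r / rc)))

def v_nfw(r, v200, c, r200):
    """NFW halo circular velocity [km/s]; the shape is set by a single
    concentration c, so it is stiffer than the cored profile."""
    x = r / r200
    mu = lambda y: math.log(1 + y) - y / (1 + y)
    return v200 * math.sqrt(mu(c * x) / (x * mu(c)))
```

Either way the halo adds two parameters on top of M*/L; the cored form simply bends more ways for the same count.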
Third, statistics will not save us here. I once hoped that the BIC would sort this out, but having gone down that road, I believe the BIC does not penalize models sufficiently for adding free parameters. You allude to this at the end of section 3.2. When you go from MOND (with fixed a0 it has only one parameter, M*/L, to fit to account for everything) to a dark matter halo (which has at a minimum 3 parameters: M*/L plus two to describe the halo) then you gain an enormous amount of freedom – the volume of possible parameter space grows enormously. But the BIC just says if you had 20 degrees of freedom before, now you have 22. That does not remotely capture the amount of flexibility gained: some free parameters are more equal than others. MOND fits and DM halo fits are not the same beast; we can’t compare them this way any more than we can compare apples and snails.
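A toy illustration of how weakly the BIC charges for those extra parameters, assuming Gaussian likelihoods so that BIC = chi^2 + k ln n up to a constant shared by both models (the chi^2 values below are made up):

```python
import math

def bic(chi2, k, n):
    """Bayesian Information Criterion for a Gaussian likelihood,
    up to an additive constant common to both models."""
    return chi2 + k * math.log(n)

n = 23  # data points in a typical rotation curve (illustrative)
# Suppose the 3-parameter halo fit shaves a bit of chi^2 relative
# to 1-parameter MOND, as flexible functions generically do:
bic_mond = bic(25.0, 1, n)
bic_halo = bic(18.0, 3, n)
penalty = 2 * math.log(n)  # the entire BIC price of two extra parameters
```

Two extra parameters cost only 2 ln n ≈ 6 here, a flat fee that knows nothing about how much of parameter-space volume was actually opened up.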
Worse, to do this right requires that the uncertainties be real random errors. They are not. SPARC provides homogeneous mass models based on near-IR observations of the stellar mass distribution. Those should be OK to the extent that near-IR light == stellar mass. That is a decent mapping, but not perfect. Consequently, we expect the occasional galaxy to misbehave. UGC 128 is a case where the MOND fit was great with optical data then became terrible with near-IR data. The absolute difference in the data is not great, but in terms of the formal chi^2 it is. So is that a failure of the model, or of the data to represent what we want it to represent?
This happens all the time in astronomy. Here, we want to know the circular velocity of a test particle in the gravitational potential predicted by the baryonic mass distribution. We never measure either of those quantities. What we measure is the (i) stellar light distribution and the (ii) Doppler velocities of gas. We assume we can map stellar light to stellar mass and Doppler velocity to orbital speed, but no mass model is perfect, nor is any patch of observed gas guaranteed to be on a purely circular orbit. These are known unknowns: uncertainties that we know are real but we cannot easily quantify. These assumptions that we have to make to do the analysis dominate over the random errors in many cases. We also assume that galaxies are in dynamical equilibrium, but 20% of spirals show gross side-to-side asymmetries, and at least 50% mild ones. So what is the circular motion in those cases? (F579-1 is a good example)
While SPARC is homogeneous in its photometry, it is extremely heterogeneous in its rotation curve measurements. We’re working on fixing that, but it’ll take a while. Consequently, as you note, some galaxies have little constraining power while others appear to have lots. That’s because many of the rotation curve velocity uncertainties are either grossly over- or underestimated. To see this, plot the cumulative distribution of chi^2 for any of your models (or see the CDF published by Li et al 2018 for the RAR and Li et al 2020 for dark matter halos of many flavors. So many, I can’t recall how many CDFs we published.) Anyway, for a good model, the reduced chi^2 is always close to one, so the CDF should go up sharply and reach one quickly – there shouldn’t be many cases with very low chi^2 or very high chi^2. Unfortunately, rotation curve data do not do this for any type of model. There are always way too many cases with chi^2 << 1 and also too many with chi^2 >> 1. One might conclude that all models are unacceptable – or that the error bars are Messed Up. I think the second option is the case. If so, then this sort of analysis will always have the power to mislead.
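The CDF diagnostic can be sketched with synthetic data: if the quoted errors were honest Gaussian errors, per-galaxy chi^2 would follow the chi^2 distribution. Below, the "messy" sample has its quoted sigmas wrong by a factor of two (half the galaxies high, half low; an assumption purely for illustration), and both tails fatten exactly as described. Stdlib only:

```python
import math
import random

def chi2_cdf(x, k):
    """CDF of the chi^2 distribution for even k degrees of freedom,
    via the closed form 1 - exp(-x/2) * sum (x/2)^i / i!."""
    assert k % 2 == 0
    s = sum((x / 2) ** i / math.factorial(i) for i in range(k // 2))
    return 1.0 - math.exp(-x / 2) * s

def empirical_cdf(values, x):
    """Fraction of the sample at or below x."""
    return sum(v <= x for v in values) / len(values)

random.seed(0)
k = 20  # data points per (synthetic) galaxy, fit by a perfect model

# Honest errors: chi^2 is a sum of k squared unit normals.
honest = [sum(random.gauss(0, 1) ** 2 for _ in range(k))
          for _ in range(500)]

# Messy errors: each galaxy's quoted sigma is off by 2x, high or low.
messy = []
for _ in range(500):
    f = random.choice((0.5, 2.0))  # quoted sigma = f * true sigma
    messy.append(sum((random.gauss(0, 1) / f) ** 2 for _ in range(k)))

# The honest sample tracks the theoretical CDF; the messy sample has
# far too many galaxies with chi^2 << k and far too many with chi^2 >> k,
# even though the underlying model is perfect in both cases.
```

The point is that the messy sample would "reject" a perfect model by the high tail while "over-fitting" it in the low tail, with no model error anywhere.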

A key thing to watch out for is the outsized effects of a few points with tiny error bars. Among galaxies with high chi^2, what often happens is that there is one point with a tiny error bar that does not agree with any of the rest of the data for any smoothly continuous rotation curve. Fitting programs penalize a model for missing this point by many sigma, so they will do anything they can to make it better. So what happens is that if you let a0 vary with a flat prior, it will go to some very silly values in order to buy a tiny improvement in chi^2. Formally, that’s a better fit, so you say OK, a0 has to vary. But if you plot the fitted RCs with fixed and variable a0, you will be hard pressed to see the difference. Chi^2 is different, sure, but both will have chi^2 >> 1, so a lousy fit either way, and we haven’t really gained anything meaningful from allowing for the greater fitting freedom. Really it is just that one point that is Wrong even though it has a tiny error bar – which you can see relative to the other points, never mind the model. Dark matter halos have more flexibility from the beginning, so this is less obvious for them even though the same thing happens.
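A toy weighted least-squares example of that failure mode, with made-up numbers: ten points consistent with a flat 100 km/s rotation curve, plus one discrepant point carrying an implausibly small quoted error:

```python
# Ten velocities [km/s] consistent with v = 100 at 5 km/s uncertainty...
v = [101, 99, 100, 102, 98, 100, 101, 99, 100, 100]
sig = [5.0] * 10
# ...plus one discrepant point with a tiny quoted error bar.
v_out, sig_out = [110.0], [0.5]

def wmean(vals, sigs):
    """Weighted least-squares estimate of a constant model
    (inverse-variance weights, as any chi^2 fitter uses)."""
    w = [1 / s**2 for s in sigs]
    return sum(wi * vi for wi, vi in zip(w, vals)) / sum(w)

clean = wmean(v, sig)                      # ~100: what the bulk prefers
skewed = wmean(v + v_out, sig + sig_out)   # ~109: one point wins
```

One point out of eleven drags the answer nine sigma (by the bulk's errors) away from where the rest of the data sit, which is exactly how a single Wrong point with a tiny error bar steers a0 to silly values.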
So that’s another big point – what is the prior for a dark matter halo? [Your] Table 1 allows V200 and C200 to be pretty much anything. So yes, you will find a fit from that range. For Burkert halos, there is no prior, since these do not emerge from any theory – they’re just a flexible French curve. For NFW halos, there is a prior from cosmology – see McGaugh et al (2007) among a zillion other possible references, including Li et al (2020). In any [L]CDM cosmology, the parameters V200 and C200 correlate – they are not independent. So a reasonable prior would be a Gaussian in log(C200) at a given V200 as specified by some simulation (Macciò et al; see Li et al 2020). Another prior is how V200 (or M200) relates to the observed baryonic mass (or stellar mass). This one is pretty dodgy. Originally, we expected a fixed ratio between baryonic and dark mass. So when I did this kind of analysis in the ’90s, I found NFW flunked hard compared to MOND. (I didn’t know about the BIC then.) Galaxy DM halos simply do not look like NFW halos that form in LCDM and host galaxies with a few percent of their mass in the luminous disk, even though this was the standard model for many years (Mo, Mao, & White 1998). If we drop the assumption that luminous galaxies are always a fixed fraction of their dark matter halos, then better fits can be obtained. I suspect your uniform prior fits have halo masses all over the place; they probably don’t correlate well with the baryonic mass, nor are their C200 and V200 parameters likely to correlate as they are predicted to do. If you apply the expected mass-concentration and stellar mass-halo mass relations as priors, then NFW will come off worse in your analysis because you’ve restricted the halos to where they ought to live.
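A sketch of what such a mass-concentration prior might look like in code. The coefficients and the 0.11 dex scatter are illustrative stand-ins in the spirit of the simulation relations cited, not quoted values:

```python
import math

def log10_c200_mean(m200, a=0.905, b=-0.101):
    """Mean log10 concentration at halo mass m200 (in units of
    1e12 Msun/h); coefficients a, b are illustrative assumptions
    standing in for a simulation-calibrated relation."""
    return a + b * math.log10(m200)

def ln_prior_c200(c200, m200, scatter_dex=0.11):
    """Unnormalized Gaussian log-prior in log10(c200) at fixed mass."""
    dx = math.log10(c200) - log10_c200_mean(m200)
    return -0.5 * (dx / scatter_dex) ** 2

# A fit near the mean relation pays almost nothing...
typical = ln_prior_c200(8.0, 1.0)
# ...while one wandering to absurdly low concentration pays heavily.
absurd = ln_prior_c200(1.0, 1.0)
```

With a flat prior both of these fits cost the same; with the cosmological prior, the low-concentration fit is heavily disfavored, which is the sense in which restricting halos to where they ought to live makes NFW come off worse.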
So, as you say – it all comes down to the prior.
Even applying a stellar mass-halo mass relation from abundance matching isn’t really independent information, though that’s the best you can hope to do. I was saying 20+ years ago that fixed mass ratios wouldn’t work, but nobody then wanted to abandon that obvious assumption. Since then, they’ve been forced to do so. There is no good physical reason for it (feedback is the deus ex machina of all problems in the field); what happened is that the data, including kinematic data (McGaugh et al 2010), forced us to drop the obvious assumption. So adopting a modern stellar mass-halo mass relation will give you a stronger prior than a uniform prior, but that choice has already been informed by the kinematic data that you’re trying to fit. How do we properly penalize the model for cheating about its “prior” by peeking at past data?
I think it would be important here to better constrain the priors on the DM halo fits. Li et al (2020) discuss this. Even then we’re not done, because galaxy formation modifies the form of the halo function we’re fitting. Halos shouldn’t end up as NFW even if they start out that way – see Li et al 2022a & b. Those papers consider the inevitable effects of adiabatic compression, but not of feedback. If feedback really has the effects on DM halos that are frequently advertised, then neither NFW nor Burkert are appropriate fitting functions – they’re not what LCDM+feedback predicts. Good luck extracting a legitimate prediction from simulations, though. So we’re stuck doing what you’re trying to do: adopt some functional form to represent the DM halo, and see what fits. What you’ve done here agrees with my experience: cored DM halos work best. But they don’t represent an LCDM prediction, or any other broader theory, so – so what?
Another detail to be wary of – the radial range over which the RC data constrain the DM halo fit is often rather limited compared to the size of the halo. To complicate matters further, the inner regions are often star-dominated, so there is not much of a handle on DM from where the data are best, beyond the fact that many galaxies prefer not to have a cusp, since the stars already get the job done at small R. So, one ends up with V_DM(R) constrained from 3% to 10% of the virial radius, or something like that. V200 and C200 are defined at the notional virial radius, so there are many combinations of these parameters that might adequately fit the observed range while being quite different elsewhere. Even worse, NFW halos are pretty self-similar – there are combinations of (C200, V200) that are highly degenerate, so you can’t really tell the difference between them even with excellent data – the confidence contours look like bananas in C200-V200 space, with low C/high V often being as good as high C/low V. Even even even worse is that the observed V_DM(R) is often approximately a straight line. Any function looks like a straight line if you stretch it out enough. Consequently, the fits to LSB galaxies often tend to absurdly low C and high V200: NFW never looks like a straight line, but it does if you blow it up enough. So one ends up inferring that the halo masses of tiny galaxies are nearly as big as those of huge galaxies, or more so! My favorite example was NGC 3109, a tiny dwarf on the edge of the Local Group. A straight NFW fit suggests that the halo of this one little galaxy weighs more than the entire Local Group, M31 + MW + everything else combined. This is the sort of absurd result that comes from fitting the NFW halo form to a limited radial range of data.
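The straight-line degeneracy is easy to check numerically: sample a low-concentration NFW curve only over 3-10% of the virial radius and fit a line through it (toy numbers, stdlib only):

```python
import math

def v_nfw(x, v200=100.0, c=2.0):
    """NFW circular velocity [km/s] at x = r/r200; the low
    concentration c = 2 is the 'stretched out' regime."""
    mu = lambda y: math.log(1 + y) - y / (1 + y)
    return v200 * math.sqrt(mu(c * x) / (x * mu(c)))

# Sample only over 3-10% of the virial radius, as the data typically do.
xs = [0.03 + 0.007 * i for i in range(11)]
vs = [v_nfw(x) for x in xs]

# Ordinary least-squares straight line through these points.
n = len(xs)
mx, mv = sum(xs) / n, sum(vs) / n
slope = (sum((x - mx) * (v - mv) for x, v in zip(xs, vs))
         / sum((x - mx) ** 2 for x in xs))
resid = [v - (mv + slope * (x - mx)) for x, v in zip(xs, vs)]
max_dev = max(abs(r) for r in resid)
# max_dev comes out under ~2 km/s on a curve rising from ~36 to ~60 km/s,
# comfortably inside typical rotation-curve error bars.
```

So over the observed range a nearly featureless rising line is a perfectly acceptable NFW "fit", which is how tiny LSB galaxies end up assigned absurdly massive, low-concentration halos.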
I don’t know that this helps you much, but you see a few of the concerns.
Hi Stacy.
Thank you for your post “Full speed in reverse!”.
Sabine Hossenfelder now covers a wide range of topics in her videos. Recently,
when mentioning dark matter, she usually adds a comment along the lines of
“…, if dark matter exists, which it may not”. So it seems to me she is sowing seeds of doubt against dark matter more often than against modified gravity.
I, too, was puzzled by Sabine’s comment on MOND fits. Thanks for answering before I asked. The process of coring and fitting seems like a great example of a conventionalist stratagem.
Good point. I don’t think Sabine herself is engaging in a conventionalist stratagem, but a related one I have witnessed is divide & conquer – redefine the problem in more friendly terms (cusp vs. core instead of the baryonic tail wagging the dark matter dog), then argue about the best way to fit/explain cusp-core, then declare victory on that more limited topic and project it onto everything else. That happens enough that it enters the veins of the dialog until it becomes the dialog, choking out all else like a creeper vine, and people forget that there was any other issue to the point of thinking it unreasonable to keep talking about it.
AFAIK the strategy you describe is labeled as “Motte and bailey”.
https://backreaction.blogspot.com/2019/03/motte-and-bailey-particle-physics-style.html
https://en.wikipedia.org/wiki/Motte-and-bailey_fallacy
Regarding divide et impera, it would be about setting the various modified-gravity factions to fight each other instead of letting them focus on their common differences with dark matter.
BTW I’m a DM guy who considers MOND as telling us something important, thus I watch the DM vs. MOND exchanges as a sort of soap opera. What is your favorite kind of soap anyway?
I’m a scientist who considers MOND as telling us something important. I am at once neither a DM guy or a MOND guy, and both.
There is certainly a tendency for oppressed factions to squabble amongst themselves: https://www.youtube.com/watch?v=kHHitXxH-us
If (big IF) I understood Dr. Banik’s comments here, he was saying that the previous two Wide Binary papers agreed better with MOND because they included some high-uncertainty data, and that when that data is removed in his paper’s analysis, the MOND agreement disappears. Given that impression, I assumed that was the basis of Dr. Hossenfelder’s comment, or at least part of it.
Yes, I believe that is part of the basis of Dr. Hossenfelder’s comment.
Dr. Banik uses a lot more data than the other papers, and argues that their result is an artifact of how they select the data. Dr. Chae just released a new version of his paper, in which he claims to refute this.
So – continue to pass the popcorn.
They will both be presenting their points at Tejinder’s OSMU23. I’ll have popcorn.
https://x.com/TEJINDER_TIFR/status/1726600486625870322?s=20
I’ve watched Chae’s presentation and his responses to questions this afternoon and one point that was made (I think by Hernandez originally) was that Banik has not shown results where the imputed acceleration is >> a0, only ~ a0. It seems to me that if you want to show MOND behaviour (or lack of it) below a0, you also need to show a good fit to Newtonian behaviour above a0 with the same parameters, so some close binaries as well as wide binaries.
I confess I find her video titles often clickbait-y and cringe, especially when she talks about areas well outside her expertise, such as non-physics subjects. It reminds me of the common joke about the physicist who comes into a biologist’s lab and assumes they’ll master the subject in a week because ‘it is just applied physics’.
This has had the side effect that I now have a harder time taking her seriously even when she is talking about physics.
I liked her “Back Reaction” blog a lot. That was where I was pointed to this blog, some years ago now. I gather that making a living doing physics is very hard and what she is doing now provides money for doing physics.
Hi Stacy, I just wanted to thank you for the latest post “A post in which some value…”
The summary of the data/papers was perfect for a layman such as myself. I read Hernandez (2023), but was having trouble understanding just what was being plotted. Your discussion about all the messy issues with the data we have and the data we might want was also great.
Oh, and aren’t the WBs in the lower left panel of the EFE picture?
g_in < a_0 < g_ex (g_ex ~ 1.8 a_0), so g_ex is just a bit bigger.
And yeah, why isn’t figure 12 of Banik et al. a data point in favor of MOND? Both of the lower panels at larger r_sky (is that separation?) fit MOND better.
I am very confused by figure 12 in the Banik et al. paper. For r_sky = 2-3 kau, the expected v tilde for MOND appears to be significantly lower than for Newtonian gravity, and in particular the MOND distribution shows a tighter grouping of v tilde around a slightly lower mode.
I would expect the predicted MOND distribution to show first-order stochastic dominance for v tilde, as MOND can either increase it more (below a0) or less (above a0). This is what we do see for r_sky = 5-12 kau, which as noted by Stacy is actually evidence for MOND. But as above, I cannot see how MOND can predict a reduction in v tilde, even in the transition region; rather, there should just be a smaller increase than in the sample with larger separations.
It is almost as if the data series have been reversed for r_sky = 2-3 kau.
Additionally, I cannot see why the two distributions would differ in such a complex way, with for example the MOND prediction showing less moderately large values (e.g. around 0.9) but more extra large values (e.g. around 1.5).
So this is my understanding (which is probably wrong). Banik et al. fit the data to two different models and got the parameters shown in Table 2. Then in figure 12 they plot the data for different r_sky to compare the models. The different parameters for the MOND fit (table 2) give a worse fit to the data at smaller r_sky, but a slightly better fit at larger r_sky. So most of the ‘betterness’ of the Newtonian fit happens at the low r_sky data points. And now I have the heebie-jeebies too.
Yes, but my issue is that for the smaller separations, the pdf should still look roughly the same, except with the MOND one shifted to the right a little (because of the gravitational boost) and stretched out (because the boost will differ with acceleration). Instead we see the reverse in figure 12.
If we take it at face value, and with your detail above, then MOND is being rejected largely because for low separations it apparently predicts far too many observations of low v tilde (in excess of the frequency for Newton) which are not seen. But theoretically this seems impossible. Moving from Newton to MOND should lower, not increase, the predicted density of low v tilde observations, because in some of the cases where Newton predicts a low v tilde, there will be a MOND effect which would push that prediction into a bin with larger v tilde.
Right, I agree with what you say. But there are two different fits to the data. And (reading from the paper) the MOND fit has a smaller fraction of CBs (companion binaries? an extra nearby star?). And the CBs make a big adjustment to the expected data (velocities?). So my limited understanding is that when you fit a large sample size (Banik) you get a better fit to the data with the Newtonian model, and if you are more ‘selective’ in your selection of WBs you can squeeze a slightly better MONDian fit to the data (Hernandez). And this makes sense, because you can see in fig 12 of Banik a slightly better fit to the larger r_sky data with the MOND model. So I’m left feeling like the WBs are not a smoking gun in favor of MOND, nor are they the death knell for small-distance MOND. The data is not good enough to tell. But it does look like there is MOND hidden in there (figure 12), which is exciting.
I see. Having a much higher CB density in the Newtonian model would explain the higher v tilde (and seemingly higher variance) in the Newtonian expected distribution for small separations.
If I then understand the model correctly, the supposed refutation of MOND is because a MOND-like effect (excess v tilde) is better explained by a high CB density than by MOND.
I think in this case the argument by Chae that there is a desire to calibrate the model to match the high acceleration case is very pertinent, as is the need for constraint on the CB density.
One other worry here is that the data is noisy, and a higher CB density (which naturally follows from the best fit with a reduced MOND effect) allows the noise to be better fit.
Yeah I don’t know enough to have an opinion. But I can imagine a galaxy with mond might have more or less CBs than a purely Newtonian galaxy.
I find figure 6 from their analysis very interesting. It shows that there are clear centres with the most WBs around low r_sky and another one around
Oops! My mistake.
Even better perhaps: figure 10. It shows clearly that in the largest bin with r_sky = 2-3, there is some kind of effect, as if the distribution is the sum of two sine-like curves, the first peaking around v=0.45 and the smaller second around v=0.95 (but harder to see).
To my ears, that sounds like some greatly amplified velocities and most velocities just sub-Newtonian with v = 0.45. That sounds quite like what I expect from my earlier thought experiment starting from Deur’s approach: some orbit directions with amplified gravity, but most with slightly dampened gravity. This figure is from after the quality cuts and their CB/LOS handling. For higher r_sky it’s still clear that the peak is a little early, with the average (or median?) still around 0.5 apparently.
On 25 November David Merritt published a guest post (an edited version of an earlier IAI essay) on the darkmattercrisis website. He was introduced as the author of A Philosophical Approach to MOND and in the article writes in praise of MOND, thinking that we are in the midst of a paradigm shift in its favour. As an outside observer, I would say it is precisely on philosophical grounds that one might have trouble with the theory. I make the following comment after reading both Merritt’s and Indranil Banik’s IAI essays.
IB summarises MOND in these terms: ‘Within this Newtonian regime, gravity becomes 4 times weaker at double the distance. Once gravity becomes weaker than a0, MOND postulates that gravity switches to an inverse distance decline, so it still weakens with distance – but now doubling the distance halves the gravity in the MOND regime.’ On first principles I find this concerning. MOND in effect proposes to replace Newton’s law with another that starts to come in below the a0 threshold and by virtue of being another law contradicts Newton’s law (‘modifies’ it?), yet the thing it is purporting to describe (if not explain) remains the same. An object of a certain mass has a definable property called gravity which inheres in its nature, but at the point that Newton’s mathematical characterisation appears to break down, another mathematical relation is substituted.
It also troubles me at a philosophical level that MOND has gone through various modifications itself. It must be more complicated (less elegant) than IB makes out.
IB is no philosopher (though he understands about the importance of rigorously testing scientific theories). At the end of the essay Bud Rapanault makes the comment. ‘Comparing and contrasting MOND and ΛCDM represents a category error, since MOND is a gravitational model and ΛCDM a cosmological model. … [The chief components of the latter are dark energy and cold dark matter] but neither are found in physical reality – their supposed existence has been repeatedly falsified by all attempts to directly detect them. The falsification standard for the standard model is in essence non-existent.’ IB should not be using his results as justification for flirting with the hope of a modified version of ΛCDM.
Until recently I was prepared to lend my layman’s credence to MOND in view of its empirical success in describing galactic motions. Then came the evidence that the Milky Way’s rotation curve was not MONDian. And then came the Banik paper. Merritt’s article is worth a read because one might well feel that MOND is caught in the same bind – having to accept a Popperian falsification when one encounters it – as it sees Dark Matter to be in.
Modern cosmology is underpinned by assumptions that are even more fundamental than LCDM: that the universe had a natural origin, that it is expanding (almost everything is calibrated on this basis), that it is billions of years old, that the speed of light has been constant, that spiral arms trace the trajectory of primordial matter as having been sucked in rather than ejected from the centre, that the black holes at the centre of galaxies – including the luminous bodies interpreted as black holes at very high redshift – have always been black holes (having grown from BH seeds). These are sacrosanct and are likely to remain so. But if no one is prepared to challenge the ultimate philosophical framework that these represent, what prospect is there of resolving the tension between our own Newtonian galaxy and all the others, further back in time?
There are some in the field who are questioning whether the universe is indeed expanding. This article appeared in IAI a few months ago, titled “Evidence the universe might not be expanding”:
https://iai.tv/articles/evidence-the-universe-might-not-be-expanding-auid-2551
While the concept of an expanding universe was developed to explain cosmological redshift, it has issues with the Tolman surface brightness test and the distance duality test, whose results indicate that the universe is not expanding. However, alternative explanations for cosmological redshift like tired light have their own problems to the point of being ruled out as well. This means that both the big bang and its competing alternatives are disfavored when it comes to explaining observations in the universe, leaving virtually no available explanations for cosmological redshift which also explain everything else in the universe.
This is similar to the case with the phenomena attributed to dark matter – where it’s becoming clearer by the day that both dark matter and MOND are disfavored in explaining the behavior of galaxies, leaving virtually no available explanations for such phenomena either.
It’s not hard to understand the r^(-2) to r^(-1) transition in the gravitational effect for a disk galaxy if you think of it in terms of the overall geometry of a galactic system. Near the center, the mass distribution is essentially spherical. At any given radial distance near the center there is a related, notional sphere where there is a gravitational equipotential – the gravitational flux through the sphere has a fixed total value. As you increase radial distance from the center of mass, the related notional sphere increases its surface area as r^(2). Consequently the flux per unit surface area drops off as r^(-2).
Moving radially outward along the disk the geometry transitions from spherical to circular, so at any given radial distance there is a circumferential surface area that increases as r which in turn means that the gravitational flux at the circumference declines as r^(-1) as the radius increases. Milgrom traced that geometry at the same time he was ignoring it while modeling the observed behavior using an acceleration scale framework. It’s just some math, but it works at the galactic scale.
In a sense it is not really fair to criticize MOND as just clever math with no explanatory power since the same can be said for both Newtonian Dynamics and General Relativity. No current model provides a plausible description of the physical mechanism by which the gravitational effect is produced. It is one of the peculiarities of the theoretical physics community that there seems to be no interest in investigating the physical cause of the gravitational effect. Wheeler’s empirically baseless spacetime conjecture isn’t much of an offering.
Do you really believe that Milgrom and all other physicists have made such a fundamental mistake? That every physicist on Earth has overlooked an error any high school student with good marks in physics could notice and correct?
Do note that this assumes that the flow of gravity will follow the geometry of the mass distribution. But I agree nonetheless, this may very well be the cause of many MOND phenomena.
That the flow of gravity appears to follow the geometry of the mass distribution isn’t an assumption; it is a statement of fact.
Huh? If you had an infinite column of matter, what would the gravity field look like?
My comment was with regard to real physical systems – disk galaxies. Your question is in regard to a physically meaningless mathematical conjecture. What’s your point?
Out of curiosity, I have just searched for MOND and wide binaries, to see what comes up and I found https://arxiv.org/abs/2205.02846 from Charalambos Pittordis and Will Sutherland. They write “The fitting results show a clear preference for Newtonian gravity over MOND, with a high formal significance”.
I don’t really have the tools to evaluate any of this myself (except that I can be skeptical of the 16σ confidence which Stacy McGaugh says is asserted in Banik’s paper), so what am I to make of this?
MOND has an acceleration scale a_0: above a_0 the behaviour is Newtonian, while below a_0 it is MONDian. MOND also says that in the presence of an external gravitational field of acceleration g there is a transition at g = a_0 from a Newtonian regime to a quasi-Newtonian MOND regime: for g > a_0 the behaviour is Newtonian, while for g < a_0 the system acts as if it were Newtonian but with Newton’s constant multiplied by a factor of a_0 / g. This is known as the external field effect.
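A sharp-transition caricature of these regimes can be sketched as below. The boost factor a_0/g_ext for the quasi-Newtonian regime is the standard MOND external-field-effect result (a weaker external field gives a larger boost, recovering MONDian behaviour as the field vanishes); the step-function transitions are a deliberate simplification of MOND’s smooth interpolation.

```python
A0 = 1.2e-10   # m/s^2, conventional MOND acceleration scale
G = 6.674e-11  # m^3 kg^-1 s^-2, Newton's constant

def effective_G(g_internal, g_external):
    """Sharp-transition caricature of the external field effect.

    Returns the effective gravitational constant governing the internal
    dynamics of a system embedded in an external field g_external.
    """
    if g_internal > A0 or g_external > A0:
        return G  # Newtonian regime
    # quasi-Newtonian regime: Newtonian form with a rescaled constant
    return G * A0 / g_external
```

For example, a system whose internal and external fields both sit below a_0 behaves Newtonianly but with G boosted by a_0/g_ext, which is exactly why mistaking this regime for plain Newtonian dynamics is easy to do.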
All of these binaries are in the Milky Way, so the external field effect applies: the gravitational field of the Milky Way also acts upon the wide binaries. According to MOND, for wide binaries in the Milky Way, the transition between the Newtonian regime and the quasi-Newtonian MOND regime occurs at a separation of about 2,000 AU: below 2,000 AU the behaviour is Newtonian, and above 2,000 AU it is quasi-Newtonian MONDian. To really test MOND with wide binaries in the Milky Way, one needs binaries both below and above 2,000 AU, to see whether the behaviour is consistent between the two sets. However, Banik et al’s analysis doesn’t have any wide binaries below 2,000 AU, and Pittordis and Sutherland’s analysis doesn’t have any below 7,000 AU, so it is possible that both are simply mistaking the quasi-Newtonian MONDian regime for the Newtonian one.
The best thing to do right now is to wait for the next release of wide-binary data from GAIA, so that Banik et al and Pittordis and Sutherland can get enough good data on wide binaries with separations below 2,000 AU to see whether the MONDian transition actually occurs at 2,000 AU or not.
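For a back-of-the-envelope sense of where a wide binary’s internal field crosses a given acceleration scale, one can invert Newton’s law for the separation. The total mass and the threshold chosen here are illustrative assumptions, not values taken from either paper; plugging in different thresholds (a_0 alone, or the Milky Way’s external field) moves the crossover by factors of a few, which is part of why the actual transition separation is worth pinning down with data.

```python
import math

G = 6.674e-11     # m^3 kg^-1 s^-2
M_SUN = 1.989e30  # kg
AU = 1.496e11     # m

def newtonian_accel(m_total, r):
    """Internal Newtonian field of a binary of total mass m_total (kg)
    at separation r (m)."""
    return G * m_total / r**2

def crossover_separation(m_total, a_threshold):
    """Separation (m) at which the internal field equals a_threshold."""
    return math.sqrt(G * m_total / a_threshold)

# e.g. an assumed two-solar-mass binary against an assumed threshold:
r_c = crossover_separation(2 * M_SUN, 1.2e-10)
print(f"crossover near {r_c / AU:.0f} AU")
```

Such an isolated-binary estimate lands in the thousands-of-AU range, consistent with the general scale on which these analyses disagree.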
Banik’s article in The Conversation: https://theconversation.com/do-we-live-in-a-giant-void-it-could-solve-the-puzzle-of-the-universes-expansion-216687
Although he has elsewhere questioned MOND effects in wide binaries, here he is making a case for MOND to account for structure formation in the universe and explain the Hubble tension.