I want to take another step back in perspective from the last post to say a few words about what the radial acceleration relation (RAR) means and what it doesn’t mean. Here it is again:

The Radial Acceleration Relation over many decades. The grey region is forbidden – there cannot be less acceleration than caused by the observed baryons. The entire region above the diagonal line (yellow) is accessible to dark matter models as the sum of baryons and however much dark matter the model prescribes. MOND is the blue line.

This information was not available when the dark matter paradigm was developed. We observed excess motion, like flat rotation curves, and inferred the existence of extra mass. That was perfectly reasonable given the information available at the time. It is not now: we need to reassess as we learn more.

There is a clear organization to the data at both high and low acceleration. No objective observer with a well-developed physical intuition would look at this and think “dark matter.” The observed behavior does not follow from one force law plus some arbitrary amount of invisible mass. That could do literally anything in the yellow region above, and beyond the bounds of the plot, both upwards and to the left. Indeed, there is no obvious reason why the data don’t fall all over the place. One of the lingering, niggling concerns is the 5:1 ratio of dark matter:baryons – why is it in the same ballpark, when it could be pretty much anything? Why should the data organize in terms of acceleration? There is no reason for dark matter to do this.

Plausible dark matter models have been predicted to do a variety of things – things other than what we observe. The problem for dark matter is that real objects occupy only a tiny line through the vast region available to them in the plot above. This is a fine-tuning problem: why do the data reside only where they do when they could be all over the place? I recognized this as a problem for dark matter before I became aware$ of MOND. That the data turn out to follow the line uniquely predicted* by MOND is just chef’s kiss: there is a fine-tuning problem for dark matter because MOND is the effective force law.
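
For concreteness, the blue line has a simple closed form. Here is a minimal numerical sketch in Python – the functional form is the fitting function from the 2016 RAR paper (McGaugh, Lelli & Schombert), and the sample inputs are purely illustrative:

import math

a0 = 1.2e-10  # characteristic acceleration scale, m/s^2 (approximate)

def g_obs(g_bar):
    # RAR fitting function: observed acceleration given the baryonic one
    return g_bar / (1.0 - math.exp(-math.sqrt(g_bar / a0)))

print(g_obs(1e-8))   # high acceleration: ~1.0e-8, effectively Newtonian
print(g_obs(1e-12))  # low acceleration: ~1.1e-11, i.e., sqrt(g_bar * a0)

At high accelerations this tracks the line of unity; at low accelerations it bends to the deep-MOND limit, g_obs = sqrt(g_bar * a0).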

The argument against dark matter is that the data could reside anywhere in the yellow region above, but don’t. The argument against MOND is that a small portion of the data fall a little off the blue line. Arguing that such objects, be they clusters of galaxies or particular individual galaxies, falsify MOND while ignoring the fine-tuning problem faced by dark matter is a case of refusing to see the forest for a few outlying trees.%

So to return to the question posed in the title of this post, I don’t know why it had to be MOND. That’s just what we observe. Pretending dark matter does the same thing is a false presumption.


$I’d heard of MOND only vaguely, and, like most other scientists in the field, had paid it no mind until it reared its ugly head in my own data.

*I talk about MOND here because I believe in giving credit where credit is due. MOND predicted this; no other theory did so. Dark matter theories did not predict this. My dark matter-based galaxy formation theory did not predict this. Other dark matter-based galaxy formation theories (including simulations) continue to fail to explain this. Other hypotheses of modified gravity also did not predict what is observed. Who+ ordered this?

Modified Dynamics. Very dangerous. You go first.

Many people in the field hate MOND, often with an irrational intensity that has the texture of religion. It’s not as if I woke up one morning and decided to like MOND – sometimes I wish I had never heard of it – but disliking a theory doesn’t make it wrong, and ignoring it doesn’t make it go away. MOND and only MOND predicted the observed RAR a priori. So far, MOND and only MOND provides a satisfactory explanation thereof. We might not like it, but there it is in the data. We’re not going to progress until we get over our fear of MOND and cope with it. Imagining that it will somehow fall out of simulations with just the right baryonic feedback prescription is a form of magical thinking, not science.

MOND. Why’d it have to be MOND?

+Milgrom. Milgrom ordered this.


%I expect many cosmologists would argue the same in reverse for the cosmic microwave background (CMB) and other cosmological constraints. I have some sympathy for this. The fit to the power spectrum of the CMB seems too good to be an accident, and it points to the same parameters as other constraints. Well, mostly – the Hubble tension might be a clue that things could unravel, as if they haven’t already. The situation is not symmetric – where MOND predicted what we observe a priori with a minimum of assumptions, LCDM is an amalgam of one free parameter after another after another: dark matter and dark energy are, after all, auxiliary hypotheses we invented to save FLRW cosmology. When they don’t suffice, we invent more. Feedback is a single word that represents a whole Pandora’s box of extra degrees of freedom, and we can invent crazier things as needed. The result is a Frankenstein’s monster of a cosmology that we all agree is the same entity, but when we examine it closely the pieces don’t fit, and one cosmologist’s LCDM is not really the same as that of the next. They just seem to agree because they use the same words to mean somewhat different things. Simply agreeing that there has to be non-baryonic dark matter has not helped us conjure up detections of the dark matter particles in the laboratory, or given us the clairvoyance to explain# what MOND predicted a priori. So rather than agree that dark matter must exist because cosmology works so well, I think the appearance of working well is a chimera of many moving parts. Rather, cosmology, as we currently understand it, works if and only if non-baryonic dark matter exists in the right amount. That requires a laboratory detection to confirm.

#I have a disturbing lack of faith that a satisfactory explanation can be found.

81 thoughts on “Why’d it have to be MOND?”

  1. I would argue that the root cause of our inane cosmological model lies in the unstated assumption of the FLRW model: the Cosmos is a unified, coherent, simultaneously existing entity that can be modeled using equations derived in the context of the Solar System. That unitary assumption is falsified by known physics and current cosmological observations.

    The speed of light has a finite maximum of 3×10^8 meters per second; the most distant observed galaxies are in excess of 10 billion light years away.

    It follows directly from those two facts that it is impossible for cosmologists to have any knowledge of the simultaneous state of the Cosmos. It also means that it is impossible for the Cosmos to have simultaneous knowledge of itself.

    The unitary Universe assumption underlying all modern cosmological models is simply wrong – it is fundamentally wrong about the nature of physical reality in the same way that Ptolemy’s geocentric assumption was fundamentally wrong. The Universe of modern cosmology is an imaginary entity – it cannot and does not exist.

    If the unitary Universe assumption is wrong then it follows that the recessional velocity interpretation of the cosmological redshift cannot be correct; it is scientifically meaningless to assert that something that does not exist is expanding.

    Modern cosmology needs to start over without FLRW or its axiomatic baggage.

    1. Agree, for the most part. The problem comes when starting with redshift based on the mythical big bang (a fairy tale that has acquired political correctness). I believe that the mass associated with the Higgs field “warps” due to concentrations of energy. And that time as a dimension is mathematically convenient but otherwise is nonsense.

      1. I’d go back one stage further to the assumption that the universe on a sufficiently large scale is homogeneous and isotropic. All the observational evidence points to larger and larger volumes over which this assumption breaks down, not least the existence of some big voids which should not have had the time to form under ΛCDM. It’s all very well taking Einstein’s equations and applying this simplifying assumption to enable them to be solved, but if the simplifying assumption is invalid then so are the solutions using it.

        1. The particular form of the unitary assumption was the universal metric (FLRW metric) that was applied to GR (which does not have a preferred or universal frame) and solved for under the simplifying assumptions of homogeneity and isotropy. All three assumptions were wrong but not completely unreasonable 100 years ago. That they are now treated as axiomatic truths is the fundamental problem of modern cosmology. That position can only be maintained by ignoring basic physics.

        2. When a successful theory like General Relativity in simple gravitational systems requires the introduction of ad hoc, unobservable entities such as dark matter and dark energy to keep it consistent in complex systems like galaxies and beyond, it suggests that the theory may be being applied beyond its range of applicability. This implies that the initial assumptions of the theory might not hold in these more complex scenarios, or that new emergent properties are present in complex systems that are not accounted for in simpler systems.

          Complexity is always a boundary for the predictive/explanatory power of any theory.

    2. Summary of an interaction with ChatGPT, summary given by ChatGPT:

      The hierarchical structure of reality imposes inherent limitations on the applicability and predictive power of any scientific theory, including General Relativity (GR). While GR has been highly successful in explaining gravitational phenomena in simpler systems, such as around stars or black holes, it struggles with more complex systems like galaxies, galaxy clusters, and the universe as a whole. This limitation is evident in the need to introduce concepts like dark matter and dark energy to reconcile GR with observations at these larger scales. These challenges highlight that no single theory can fully account for the complexities across all levels of reality. Theories are context-dependent, working well at certain hierarchical levels but potentially failing at others where new, emergent phenomena arise. This suggests that our understanding of the universe may require a pluralistic approach, with different theories or new frameworks needed to address the complexities of different levels of reality. In essence, the hierarchical nature of the universe means that the effectiveness of any theory is inherently constrained by the level of complexity it was designed to address, and this limitation must be recognized when applying theories beyond their original scope.

      1. One of the problems with ChatGPT is that you can convince it of anything. You can lead it where you will, but go back the next day and you will have to “re-educate” it into agreeing with you. Large Language Models don’t learn and they don’t think. They give no sign of actually being intelligent – however defined.

        BTW I generally agree with your scale/complexity arguments. It’s just that Artificial Intelligence is as overhyped as artificial food and even less satisfying. At root it’s just a computational system like any computer. In the early days people would caution that computers were fast idiots, totally dependent on the instructions they’re given. The difference now is, AI is a fast idiot with an enormous dataset.

        1. True, I guided ChatGPT to reach that conclusion, but AI models are getting better and better; they can now solve mathematical problems at International Mathematical Olympiad level, on par with a silver medalist. They already are better than humans at solving specific problems.

          As these tools get better and better as real aids for scientific research, the people using them will be ahead, making connections almost impossible to make without them.

          https://deepmind.google/discover/blog/ai-solves-imo-problems-at-silver-medal-level/

  2. Many people in the field hate MOND, often with an irrational intensity that has the texture of religion

    maybe standard general relativity was right all along, but its weak-field limit gives not Newton but MOND

    arXiv:2408.00358 (gr-qc) [Submitted on 1 Aug 2024]: “Quasilocal Newtonian limit of general relativity and galactic dynamics”, Marco Galoppo, David L. Wiltshire, Federico Re

    A new Newtonian limit of general relativity is established for stationary axisymmetric gravitationally bound differentially rotating matter distributions with internal pressure. The self-consistent coupling of quasilocal gravitational energy and angular momentum leads to a modified Poisson equation… offering an explanation for their observed rotation curves. Halos of abundant cold dark matter particles are not required.

    1. The authors conclude:

      ‘This discovery has far-reaching consequences … Comparisons with MOND phenomenology potentially open the prospect not only of placing MOND within the theoretical framework of general relativity, but also of providing insights into the development of this important new physical limit of Einstein’s theory.’

      My italics.

      1. if correct, MOND is simply general relativity, in which, in the weak field, the self-consistent coupling of quasilocal gravitational energy and angular momentum leads to a modified Poisson equation

        could anyone with expertise in general relativity comment on “coupling of quasilocal gravitational energy and angular momentum leads to a modified Poisson equation”?

        1. Unfortunately I cannot (yet) help you on the GR front, but I saw this referenced in the previous post and I thought I’d give it a bit more traffic in the comment section.

          This paper is very interesting. It critiques the FLRW assumption and calls out Mach’s principle, in line with a lot of the comments on this blog and post. The result appears to boost local angular momentum in thin disk distributions very much like MOND. In this case, as neo states, MOND is just GR.

          I’m a bit concerned about Figure 2. The resulting effect peaks at around 24 kpc and declines thereafter, in contrast to Stacy’s recent weak lensing experiments where the flat rotation curve appears to extend indefinitely. However, the decline is gradual and may still be within the error bars of a truly flat rotation curve.

          It also only appears to be applicable to galaxies in dynamic equilibrium up to around a billion years, so it does not explain galaxy formation from baryons only in the absence of a gravitational boost (as clearly stated in the paper).

          It would be interesting to see how this holds up with real galaxies and galaxy clusters. As discussed, a lot of effort has gone into trying to GR-ify MOND, but the original simple MOND still seems to beat them all in the end (with the exception of clusters).

          My intuition is that MOND is modeling some emergent behaviour of disk galaxies, and something similar but more pronounced is happening with galaxy clusters. This paper may be an important step in explaining this emergent behaviour in GR. Also, being emergent, it is not simply just a sum of the parts, just as we are not just a collection of mostly water molecules.

    2. The paper by Galoppo et al. is wonderful. They have not yet derived the baryonic Tully-Fisher relation, nor the RAR. But if they manage to do so, they will change galactic astrophysics completely.

      Their approach might be the best to make physical sense of the 1/r decrease of acceleration at values below a_0.
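
      A quick sketch of where the 1/r comes from, using only the standard deep-MOND limit (nothing specific to their paper): for a point mass the Newtonian acceleration is g_N = GM/r^2, and below a0 MOND gives

      g = sqrt(g_N * a0) = sqrt(G M a0) / r,

      so the acceleration falls off as 1/r rather than 1/r^2, and the implied circular speed v = sqrt(g r) = (G M a0)^(1/4) is flat.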

      In fact, Wiltshire himself is even more iconoclastic than this. He suggests that even the cosmological constant (dark energy) is a result of other effects.

      If he is right, simple general relativity with Lambda=0 would describe nature completely. Interesting times!

  3. The underlying problem is a departure from objectivity: assuming that Reality must follow human preconceptions or wishful thinking. Reality couldn’t care less about that. Brutal, uncompromising objectivity is critical.

  4. “The speed of light has a finite maximum of 3×10^8 meters per second; the most distant observed galaxies are in excess of 10 billion light years away.”

    The sandbox the FLRW cosmology painted itself into is too small.

    The tension between supernova distance indicators and the CMB is not tension, it is failure. Any relaxed system will have a background. In the case of the universe, the microwave background is processed by Lyman absorptions and Balmer boosts.

    MOND is telling us the same story: Our pet theory has no place in our universe.

  5. The problem underlying the way people hang onto assumptions that start to look wrong is insecurity. People feel better with the idea that ‘we know more or less what’s going on’. But new measuring instruments are showing the cracks in some of the old ideas, and to be fair, a lot of people are more open minded nowadays, or would be if anyone had a real solution. To reach an understanding of the RAR, we might have to admit that MOND could be an effect, not a universal gravity theory, with different values for a0 here and there.

    It certainly doesn’t look like a higher order of complexity – the field is ‘two-tone’ in that the outer regions are different from the inner ones (that in itself rather suggests an effect of some kind). The inner regions seem to go by the same simple gravity laws we know about at many smaller scales, even with all that complexity.

    And what Stacy said about BCGs at the centres of clusters using the same acceleration scale as the clusters of which they are part sure looks like a clue of some kind. What with other ellipticals – large and small – using a0. Again, that potentially looks like an effect. I can’t pretend to have an explanation, but I have a picture of the field that’s more capable of getting deformed out of shape, and of unexpected behaviour generally, than in most theories.

    1. Indeed – it is very hard to admit that we understand less than we thought we did. It is especially hard to back out of thinking specific to a seemingly well-established paradigm to re-evaluate what it is we actually know empirically. Fact and interpretation become bound up together in the mind.

  6. It seems MOND is definitely trying to tell us something. That galaxies can be accurately modeled based only on baryons through many orders of magnitude, when the imputed mass is more than half an order of magnitude larger, is counterintuitive.

    I heard that more scientific advancement is generated from the phrase “that’s weird” than “Eureka!”. But we have to have an open mind.

    Newton is still being used ubiquitously over three centuries later. And I don’t know how many times I’ve read in a science paper that “Einstein was right again.” And Einstein came up with SR and GR almost exclusively via “thought experiments”.

    I’m not a huge fan of the “interpolation function du jour”, but after over 40 years, MOND kind of looks like a physical law (if it weighs as much as a duck). In the intervening years we’ve found no elusive dark matter, and we’ve ruled nearly everything out. DM is looking more and more like Carl Sagan’s invisible dragon in a garage.

    As a computer scientist, I’ve done a lot of simulations. The simulation needs to capture the essence of the problem or the results will be meaningless, or at best misleading. With computers you can spit out a lot of very impressive looking numbers and charts that are essentially nonsense. Simulations of nonlinear and chaotic systems are also susceptible to the “butterfly effect”.

    Sabine Hossenfelder recently opined that maybe we really don’t understand the full implications of GR (i.e., Einstein was right again). I was thinking along the same lines. GR is nonlinear and very difficult to calculate even in the simplest models. The metric tensor is a brilliant insight and seems a very simple concept, but the implications are that everything depends on everything else throughout all of time.

    We seem to know four things:

    1. Matter curves space and time
    2. Information travels at the speed of light (including likely gravity)
    3. The universe is expanding (i.e., it was smaller in the past)
    4. Galaxies and galaxy clusters exert more influence (i.e., gravity) than they should based on what we can see

    Could the implications of 1, 2, and 3 lead directly to 4?

    We cannot ever imagine the universe in its entirety and full complexity, but maybe we can still inch a bit further in our understanding.

    1. I conjecture that you are right.

      First, I conjecture that our conception of space as a continuum is inadequate. This conjecture comes from quantum mechanics and the experiments on entangled objects.
      For example, two entangled photons can be very far away from each other.
      As soon as you measure the polarization of one, you also determine the polarization of the other.
      At this point, there are two possibilities:
      Superdeterminism and spooky action at a distance.
      Personally, I don’t believe in superdeterminism, because everything depends on everything else and the end result has to be a comparatively orderly world. On the other hand, nature is not even capable of forming stable orbits for 3 bodies.

      I personally prefer spooky action at a distance.
      In my opinion, a network with dynamic edges would be a good candidate for modeling our world. However, I am struggling to construct such a network.

      Secondly, I conjecture that if you take into account the expansion of space, you will again have a conservation law for momentum.

      1. Loop quantum gravity gives an approach to making spacetime discrete that I find interesting. I hope the quantum graphity / string nets approach will be studied more in the coming decades. It appears to be ignored a little for being too complex. But it involves no spooky action at a distance, I guess? I just read Smolin’s book “Three Roads to Quantum Gravity” and LQG gives interesting leads at the very least. Only it reduces to GR which, unless neo’s reference is the tip of the iceberg, doesn’t immediately include MOND.

        Anyway, I’m also a little drawn to the idea of entanglement being a ‘wormhole’ in spacetime. It’s perhaps a problem when you accept spooky action at a distance, from the perspective of GR that might also include action across the time dimension. But instead of “shut up and calculate” I rather say “the math that describes it, is what the object is”. I find other interpretations of quantum physics as statistics, just math overhead that somehow predicts well, too magical. So if the math of quantum physics says two things are entangled across space or time or whatever, their state is an inseparable object across all that space or time somehow, which would be solved by a wormhole – there is no actual distance if spacetime is examined there at the Planck scale.

  7. Part of the issue is that the true answer looks a lot like Milgrom’s initial toy model MOND from 1983, but we also definitely know that Milgrom’s toy model MOND is not the whole story.

    We know that it needs a relativistic generalization (at a minimum, we know that the “Newtonian” side of the a0 threshold is really GR and not Newtonian).

    We know that even in the MOND limit, the speed of gravity is almost surely “c” and that gravity in the MOND limit bends light to the same extent that a GR gravitational field of the same strength does, unlike Newtonian gravity.

    We know that MOND underestimates the effect in clusters (which have their own scaling law similar in character to the RAR) and that it is missing something that makes the bullet cluster possible. Less widely known, we know that MOND isn’t quite right in spiral galaxies when you are relatively far from the plane of the spiral galaxy. And MOND may or may not accurately address wide binary stars – the jury’s out on that question due to the difficulties involved in data analysis right at the brink of the MOND threshold.

    The data surely favor, over basic dark matter models, some kind of gravitational force law modification that replicates MOND across a broad domain of applicability. There are examples of gravity-based explanations of “dark matter phenomena”, such as the efforts of Moffat and Deur, that show as a proof of principle that such tweaks to the true answer are almost surely possible somehow within a gravity-based explanation. But it is easy to get hung up on the issues with the toy model MOND and miss its big lesson in the process.

    1. Yes, I agree with almost all of this. I think the jury is also out on out-of-plane behavior, but this concern is why I’ve been careful to call the empirical RAR the RADIAL acceleration relation: it is not entirely clear that identically the same relation applies in the vertical direction. But there are MOND-like hints even there, e.g., the inflection in the run of vertical velocity dispersion seen at large radii/low acceleration in both the Milky Way and in external spirals in the DiskMass survey. Milgrom’s recent paper on modified inertia is intriguing in this regard.

      It is good to consider other ideas to generalize the phenomenon, as we definitely need to recover GR in the appropriate limit. In my experience so far, the manifestly unworkable original toy version of MOND consistently outperforms every attempt to improve on it. That’s clearly telling us something important, but I despair of learning what that is in my lifetime.

      1. Don’t give up hope yet! Even if you take emeritus status and pass the baton at some point, you’ve left a lot of people with shoulders to stand on to figure it out and you’ve got plenty of years to see those breakthroughs happen.

        I’m a lot more bullish about the prospects for astrophysics than some other areas of science because new “telescopes” broadly defined, are providing such a torrent of high quality data, more data analysis processing power, and lots of independent groups of researchers.

        1. I hope you are correct. It is true that some great telescopes and instruments are in the works. But I’ve lived long enough to see lots of those come to fruition, e.g., JWST. The issue isn’t data quality. It is human attitudes. We have an extraordinary power to deny the obvious.

  8. ‘Less widely known, we know that MOND isn’t quite right in spiral galaxies when you are relatively far from the plane of the spiral galaxy.’ Can anyone give any information about that – is it Newtonian when stars are off the plane, and has it been found in the data generally?

    1. No, they’re definitely not Newtonian. Crudely speaking, there is a MOND-like boost, but it is smaller than naively anticipated. So the vertical version of the radial relation seems weaker; it might fall on the opposite side of the RAR from clusters in the plot above.

      I worked very hard to sort this out eight years ago, and kept finding that I had to include a lot of cross-terms that we ordinarily ignore. Typically we assume vertical and radial motions are decoupled, but that is only valid to a point and we are past that point. There are other subtle complications as well, so I got confused and gave up, realizing that we’d have to wait for Gaia quality data to have even a chance of sorting it out. Now that Gaia is here, it is clear that all these terms and more (there are clear indications of non-equilibrium processes) matter, so I’ve been reluctant to get burned all over again.

  9. The fact that after 50 years of searching we haven’t ‘found’ DM means absolutely nothing and pointing it out as an argument in favor of something else makes you look like a preacher and not as a scientist.

    Correction; it does mean something. It means that there is some as-yet-undefined phenomenon that accelerates the growth of our hubris as a function of time passed since the first of us raised the bone (monolith’s presence invariant).

    DM is nonsense for many reasons. Us not having detected it is not one of them.

    It seems that most want to write the epilogue with little to no idea what the prologue even should be about.

    Leaving aside creation; intentional or accidental, for obvious reason, why Jupiter’s red spot is doing whatever it’s doing isn’t sourced in Jupiter’s formation because Jupiter’s formation is sourced in Sol’s formation and Sol’s formation is sourced in … get the picture? To put forward something somewhat pertinent to us.

    Whatever we figure out will be just another approximation. Hopefully better but at this point I’m not even optimistic about that anymore. And we’ll, rightfully so, chalk it down as progress. We still will not have solved the red spot but who cares, right?, that’s planetology, we are cosmologists. The big boys on the hood.

    The ‘who cares’ gets it correct in one aspect. Somebody asks you to calculate rotation curves of stars in some galaxy? Do it, put ‘done by using MOND’ in the footnote (‘because by now it is obvious that results are better that way’ as possible footnote of the footnote) and if they don’t like it they can either do them themselves or have somebody else do it using something else. Who cares.

    I mean, what has this come to? “My dad can beat your dad!”? Recognition, fame, glory, money, … All those lofty things scientists should have at the top of their lists of reasons to be.

    Ye think MOND is a better tool (to me it’s nothing more than a tool for reasons I’ve already stated in the past) for calculating/predicting something? Great. Now use it for the next step (if you can). Does it matter that textbooks will still peddle DM? To the contrary. Since MOND is clearly better ye have an advantage, don’t you? All those poor DM efs will be in the dead end alley for years.

    OK, I think there’s no such thing as posthumous Nobel prize but the way things are developing they’ll probably have to change that sooner rather than later.

    1. You misrepresent my position. The argument against dark matter is that the data occupy an infinitesimal fraction of the volume available in the plot in this post. That’s a fine-tuning problem. Since one cannot prove a negative, in this instance by not detecting dark matter, suffering a serious fine-tuning problem is nearly the worst thing that can be said about it. That MOND is the reason why this fine-tuning problem occurs is the clearest possible indication that we’ve been barking up the wrong tree with dark matter. Not detecting dark matter is not part of the argument; it is just an obvious inference.

    2. As far as MOND being mere tool, sure, I’m OK with that, and I’ve done exactly what you suggest for predicting things. I’ve also used it for next steps, of which there are many. These often work surprisingly well, but sometimes don’t. So, again, MOND seems to be telling us something; I’m open to what that is but I have become impatient with assertions that other things are better when I have yet to see such a thing.

      More generally, I find the advice to use it as a tool but apparently not to take it seriously to be a curious position. This is essentially what the Inquisition told Galileo: it’s fine to use heliocentrism to calculate things, but don’t dare suggest it has something to do with reality.

      1. As a rule I tend to stay away from debates. Not because I’d find them useless but because we’ve driven our intellectual development to a point where they are mostly just fights with heavily padded gloves. Prefer we both tell our stories and leave it at that. Particularly on topics I find more or less unimportant. Then again, rules aren’t necessary sacred, principles are.

        But I’ve just stopped at about one third of latest Quanta Magazine’s cover of April’s DESI announcement and both have pissed me off. I know, should stop reading Quanta. Know thy enemy though 😉

        Quanta due to me peeking down the article and seeing some faces and names that told me we were *again* headed into Simons’ pet farm and DESI due to fact that a heavy weight team spends months to come out telling us there’s a hint. Well, I’m sorry, I get hints all the time. And to add insult to injury you tell me “MOND seems to be telling us something“.

        First thing first though. Either I don’t understand your logic or it is flawed. Isn’t “Since one cannot prove a negative, in this instance by not detecting dark matter, suffering a serious fine-tuning problem is nearly the worst thing that can be said about it.” just shifting the argument? Compounding. Using a logic gate to build a circuit? If you take “… by [not] detecting dark matter …” out of your ‘circuit’ what do you get?

        Yes, one could use an unknown gate and build a circuit with it that would, based on output, reveal what the unknown was if every other component were fully known. Incorporating “existence of dark matter” into logic diagram where all other components are known, hooking up input and reading the output. Does output match observation? Then DM exist and vice versa. But I have no idea how one could do such thing with non-existent one.

        As for not taking things seriously, fairly certain that’s not what I said. Even though I do think one should not take such things *too* seriously. I said “Who cares.” As in fuck them. Let them think whatever they want. You think you are on the right path and so you keep going until something tells you you’ve made a wrong turn or you actually get somewhere. Slightly philosophically, even if where you end up isn’t where you were heading.

        Does this mean potential blisters, occasional thorn and mosquito bite? Probably. But whether you’d rather fight for the highway to be built according to your plan or do the above has nothing to do with science. Both approaches are valid. Only driving on the highway (dreaming of it) and wishing you’d hike or the other way around gets you …

        You will have to fill out the missing by yourself.

          1. Sorry, I misunderstood you. You just want to give up on convincing hard-core dark matter advocates if I’m right. Nevermind my comment.

  10. @Dr. McGaugh

    First, I would like to thank you for our interactions in the past.

    Because of this blog I have sorted out statistical inference from statistics. As you discuss, demarcation issues eventually force one to recognize that people using the same words may not be using them with the same intended meanings. The same is true for inclusive words like “mathematics,” “logic,” and “science.”

    The link,

    https://en.m.wikipedia.org/wiki/Naturalism_(philosophy)#Providing_assumptions_required_for_science

    offers a set of assumptions thought to be necessary to scientific reasoning (and the knowledge claims attributed to science). There are several avenues where mathematical platonism intrudes upon these assumptions — including the last assumption that introduces statistical inference. The assumption about populations leads to a trace of definitions ending in measure theory.

    My “go to” statistics book describes statistics as “applied mathematics” — by which it becomes mathematics that is meaningful to the person using it. Where you have written,

    “Fact and interpretation become bound up together in the mind.”

    I recently communicated,

    “Teaching applied mathematics teaches math belief.”

    to an interlocutor. I believe that you have good cause for having written,

    “I have a disturbing lack of faith that a satisfactory explanation can be found.”

    While it is “just philosophy,” the typical interpretation of Goedel’s incompleteness is that truth simpliciter (how we speak of tables and chairs in our homes) is different from provability. That is, epistemic limitation.

    In general, the sociology of science wants to deny any epistemic limitation. But, when you run out of data (or pretend that manufactured data is just like that obtained from the observation of reality), what remains is rhetoric (provability) without an empirical ground. Even without this problem, it appears unlikely that a “fact” obtained using statistical inference can be factual in the sense of truth simpliciter.

    Dr. Freundt has conjectured that our conception of the continuum is inadequate. My knowledge of these matters arises from study of Cantor’s continuum hypothesis *while rejecting all results that use the real numbers*. It is non-trivial to understand the consequences which arise from denying the real number line to the pedagogy of applied mathematics. All of the topology for the matrix methods of quantum mechanics depend upon mappings into the real numbers. Information theory fundamentally abandoned discrete methods when Shannon “encoded” symbolic logic with logarithms. Now we pretend that everything is “scientific” using frictionless Newtonian dynamics,

    https://en.m.wikipedia.org/w/index.php?title=Billiard-ball_computer&diffonly=true

    and a pretense that “meaningfulness” for “science” will magically be discovered at some future date because “folk psychology” (Goedel’s incompleteness and the logical studies on which it is founded) must be wrong,

    https://en.m.wikipedia.org/wiki/Eliminative_materialism#

    What is happening in foundational physics is also happening in “artificial intelligence,” although one would have to be fully versed in the foundations of mathematics.

    People are talking in circles. Indeed, where you write,

    “Feedback is single word that represents a whole Pandora’s box of extra degrees of freedom, and we can invent crazier things as needed.”

    you can substitute “metalanguage” or “metatheory” for “feedback” to understand why there is now an established study of “models of set theory” described as “multiverse geology.”

    I consider you very fortunate to be working in a field of science where data is still available for challenging theories.

    And, I am very grateful for what I have learned because I read this blog.

  11. There has been a lot of soul searching about what science is this century – some question the basic principles, and are disillusioned. A large part of this identity crisis comes out of an unrealistic overconfidence people had in the 20th century, which got very deeply ingrained. We were told physics knew a whole lot more than it did, and a lot of people got used to that. Now we’re on the rebound from it, as it has turned out to be wrong. But it always was – they papered over the cracks, and there were a lot of them (for one example, we still don’t know what time is). But good physicists are always found staring into the cracks, as that’s where the best clues are.

    So don’t throw out the baby with the bathwater – don’t throw out science, just throw out the 20th century attitudes that led to false expectations, and the idea that we were nearly there (they thought that at the end of the 19th century as well). It shuts down the use of the imagination in a disastrous way, and held us back in the 20th century like nothing else. Instead, you just get used to science not knowing a lot of stuff they told people we knew. It’s great, because then we can actually find things out, if we leave holes in the jigsaw, and admit what we don’t know.

  12. Hi Stacy,

    Given a 3D point cloud of all the atoms of a galaxy (a random galaxy in the SPARC dataset), where every atom has a row with the columns position, velocity, and acceleration, can MOND predict every acceleration? If so, what would the formula or function or program be?

    To be clear, every acceleration in this hypothetical atomic-level SPARC galaxy 3D point cloud is a total acceleration vector.

    I bring this up because I think there is a hidden rule in the current definition of MOND when it comes to total acceleration calculations and it would benefit MOND to expose or extract it.

    Sahil

    1. Hi Sahil,

      My understanding of MOND says yes: MOND can predict every acceleration. Remember it’s in the name, MOND, modified Newtonian dynamics; so no relativity, no time delay, etc. MOND uses Newtonian gravitation to get at the gravitational force. MOND modifies Newton’s second law of motion so F=ma (force = mass x acceleration) no longer holds and you have to apply MOND to determine the acceleration from the force.
      For your example, if you have a collection of masses and know the position, velocity and acceleration at one time step, then you can integrate forward to get the position and velocity at the next time step, but not the acceleration. Knowing the masses and their new positions you can solve Poisson’s equation for the Newtonian gravitational potential (or for point masses sum the individual contributions) and hence get the Newtonian gravitational acceleration. You then apply the MOND formula to get the MOND acceleration for the new time step. You now know the mass, position, velocity & acceleration at the new time step. You can repeat this process to move forward.
      If you change Newton’s law of gravitation then you also change the gravitational potential. Given the observed mass/density distribution for a galaxy you can no longer solve Poisson’s equation to get at the gravitational potential, force & acceleration. Basically, you end up in a mess.
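
      To make that stepping scheme concrete, here is a minimal Python sketch. It is only illustrative: it uses direct summation over point masses as a stand-in for the Poisson solve, assumes one common choice of interpolation function, and ignores softening, adaptive meshes, and the external field effect.

      import numpy as np

      G, a0 = 6.674e-11, 1.2e-10   # SI units

      def mond_accelerations(masses, pos):
          # sum the Newtonian field first, then apply the MOND boost to the total
          n = len(masses)
          gN = np.zeros((n, 3))
          for i in range(n):
              d = pos - pos[i]                  # separation vectors to all masses
              r = np.linalg.norm(d, axis=1)
              r[i] = np.inf                     # skip self-interaction
              gN[i] = np.sum((G * masses / r**3)[:, None] * d, axis=0)
          mag = np.linalg.norm(gN, axis=1)
          nu = 1.0 / (1.0 - np.exp(-np.sqrt(mag / a0)))   # assumed interpolation function
          return nu[:, None] * gN

      def step(masses, pos, vel, dt):
          # recompute the field at the new positions, then kick and drift
          vel = vel + mond_accelerations(masses, pos) * dt
          pos = pos + vel * dt
          return pos, vel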

      To get back to this post and the RAR (radial acceleration relation): the observed density distribution leads to the horizontal axis, g(bar), and the observed rotation curve leads to the vertical axis, g(obs). MOND isn’t used in the construction of the RAR diagram, but it is used in its interpretation.

      1. If so, what would the formula or function or program be? I am asking for this specifically, because your answer talked around the problem.

        Please consider a concrete case of a silicon atom that is part of a rocky moon, for example, going around a planet, going around a SPARC galaxy.

        Is your program going to output the accurate total acceleration of that silicon atom?


          1. I have answered this point repeatedly, including citing one of the programs that exist to do what you ask (Phantom of RAMSES). There are others, but if you want to reinvent the wheel, have a great time. You need a Poisson solver, not a particle pusher.

          Your line of questioning stems from a known misconception. I have attempted to help you through that, but you keep asking the same questions I’ve already answered. As is often the case with misconceptions, this is apparently something you have to figure out on your own.

          1. With all due respect as a guest on your platform, your responses have not answered my question. I have asked for a program (and I hope articulated the benefits of a program) that takes in a list of a galaxy’s atoms and outputs one atom’s acceleration vector, and you haven’t provided that code, and the papers and posts you reference also don’t provide that code.

            I’ve scanned this 2014 paper, https://www.researchgate.net/publication/262604264_Phantom_of_RAMSES_POR_A_new_Milgromian_dynamics_N-body_code and the 2021 paper you linked. They do not have such a program. The PoR code doesn’t have the resolution for atomic-level total acceleration. It does clustering/smoothing of grid cells which basically encodes foreknowledge of how much mass is inside that volume, so it can “operate in the MOND regime”. That’s the “systems and subsystems” logic. Can you take it to the next level and not have any subsystems?

            Please see how in the 2014 paper there’s a finest grid level of .76 pc. For a solar mass at that distance, the acceleration is ~ 2 * 10^-13 m/s^2 << a0. https://www.wolframalpha.com/input?i=G+*+mass+of+sun+%2F+%28.76+pc%29%5E2. How convenient…

            That clustering/smoothing of grid cells is a subtle hard coding. https://en.wikipedia.org/wiki/Hard_coding. That subtle hard coding should be scrutinized.

            To take MOND to the next level, can you write a program that only operates on a list of a galaxy’s atoms and outputs one atom’s total acceleration vector?

            1. First, no one owes you code.
              Second, the code you envision does not exist – not for Newton, not for Einstein, certainly not for MOND.
              Third, dynamic range in computational astrophysics is a huge (and well known) problem. One wants to be able to simulate scales from the smallest particles to the entire cosmos in one go. Sure, but you might just as well ask for a unicorn. The practicalities of doing such calculations are numerically challenging in a variety of ways that you clearly have not yet begun to envision: there are decades of literature on the subject; it can’t be done in a few lines of code. At present, the only computer big enough to do the job is the universe itself.

              1. It’s not about owing. It’s about, why are you calling my questions a “misconception” before understanding my question.

                So back to my first comment on your previous blog post, I think MOND should be working on reducing the finest grid level of .76 pc, and that finest grid level is a better definition for MOND than “a regime where acceleration is < a0”.

    2. Yes.

      Newton works where accelerations are high, which covers pretty much all the internal constituents you allude to. MOND effects kick in only at low accelerations, and only apply to the center of mass of lumps (as you called them before). The point at which it is fair to treat these lumps as bowling balls, where you only care about the center of mass (and not at all about what is going on inside the lump), is when the acceleration drops to a0 or to that of the surrounding external field. For the solar system, that occurs around 7000 AU.

      That is, for the purpose for computing the orbit of the solar system around the Galaxy, everything inside 7000 AU is part of the same lump. For reference, Pluto is 40 AU out. The internal motions of the planets and the atoms of which they are composed matter not at all.
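
      The 7000 AU figure is just where the Sun’s Newtonian acceleration falls to a0. A quick check, in Python with approximate SI values:

      G, Msun, a0 = 6.674e-11, 1.989e30, 1.2e-10
      AU = 1.496e11                  # meters
      r = (G * Msun / a0) ** 0.5     # radius where G*Msun/r^2 = a0
      print(r / AU)                  # ~7000 AU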

      This is discussed in the “Internal Field Effect” portion of the link I sent you before. It is a common misconception, but a misconception all the same.

      1. The point at which it is fair to treat these lumps as bowling balls, where you only care about the center of mass (and not at all about what is going on inside the lump), is when the acceleration drops to a0 or to that of the surrounding external field. For the solar system, that occurs around 7000 AU.

        This is the hidden rule, see? When given a 3D point cloud of atoms and their properties, you’re pre-assessing whether their classical acceleration drops to a certain level, then you treat it like a unit lump, then go about the rest of the calculation. I don’t know how to say this politely, but can you relax treating things as systems and subsystems? Can you relax calling things planets and stars? And can you relax classifying special effects?

        Please consider a concrete case of a silicon atom that is part of a rocky moon, for example, going around a planet, going around a SPARC galaxy. Is there a program that takes in a 3D point cloud (a list of atoms with columns for mass, position, velocity, acceleration), and puts out the total acceleration of that silicon atom?

          1. Please give this a closer look: the calculation of total acceleration. At any instant, the example silicon atom has an acceleration with components in the direction of the nearby moon, planet, star, and galaxy centers. But it’s ultimately just one vector.

            The way MOND is currently defined, can it calculate that vector given a 3D point cloud (a list of atoms with columns for mass, position, velocity, and for evaluation (test data), acceleration)?

            I am asking whether MOND’s current definition can literally take in an array of point cloud data, and produce one vector.

            If you can only answer via a paragraph, and not with a program, then MOND has a hidden rule.

            1. Is the problem you’re assessing that there are many tiny accelerations that in Newton just add up regularly but in MOND should all be rescaled for being below a_0 according to 1/r asymptotics?

              If I understand correctly Stacy’s answer and Milgrom’s papers, in MOND the rescaling for being an acceleration below a_0 happens after having added all accelerations as in Newtonian dynamics. I agree this is not clear enough in most presented material about MOND, and I’m actually a tiny bit unsure whether my words here are indeed the rule in MOND.

                My complaint is neither the 1/r asymptotics nor the nonlinearity. My complaint is that MOND as it currently is does not have a computer-runnable program to calculate the total acceleration vector on an atom given a 3D galactic point cloud (a list of atoms with columns for mass, position, velocity, and, for evaluation (test), acceleration).

                Can you write that program? Can you write that program here?

            2. I realize I’m just a minnow in a really large pond here, and the comment section of this blog post has mushroomed, but I cannot resist a direct challenge (probably a character flaw). I also find galaxy dynamics fascinating and I’m slowly trying to understand this myself, and I find doing more productive than just thinking.

              I also know that coding it doesn’t actually prove anything, but it does help somewhat in understanding the basic mechanics. Since your original program was pseudo code, I’ve provided a response in simple Python-flavored pseudo code.

              import math

              G = 6.674e-11   # gravitational constant, SI units
              a0 = 1.2e-10    # MOND acceleration scale, m/s^2

              # assumes atoms is a list of objects with x, y, z, mass attributes
              for atom in atoms:
                  # reset the accumulators for each atom
                  gnewtx = gnewty = gnewtz = 0.0
                  for otherAtom in atoms:
                      if otherAtom is atom:
                          continue               # skip self-interaction (r = 0)
                      dx = otherAtom.x - atom.x
                      dy = otherAtom.y - atom.y
                      dz = otherAtom.z - atom.z
                      r = math.sqrt(dx**2 + dy**2 + dz**2)
                      # gravitation from point mass
                      gnewt = G * otherAtom.mass / r**2
                      gnewtx += gnewt * dx / r
                      gnewty += gnewt * dy / r
                      gnewtz += gnewt * dz / r
                  # magnitude only
                  gnewtAtom = math.sqrt(gnewtx**2 + gnewty**2 + gnewtz**2)
                  # mond boost applied to the summed Newtonian field
                  gmondAtom = gnewtAtom / (1 - math.exp(-math.sqrt(gnewtAtom / a0)))

              However, this is not practical for many reasons. Also, the jury is out on whether MOND applies in the same form for scales much below or above the galactic scale and outside the galactic plane.

              I thought I’d start with a grain of sand.

              A grain of sand has an estimated 10^20 atoms. To model the gravitational acceleration on each atom in a grain of sand you’d need about 100 bytes per atom, or 10^22 bytes. That is 10^4 exabytes, or 10,000 times the current total estimated global cloud storage.

              To run this program, you would need to iterate through 10^40 (10^20 * 10^20) calculations. Assuming no lag in storing and retrieving this data (a big assumption), at 10^6 calculations per second, it would take 10^34 seconds or about 10^26.5 years (a year is about 10^7.5 seconds), or about 10^16 times the current age of the universe.

              And that is just a grain of sand.

              The earth is about 10^50 atoms. The sun is about 10^57 atoms, a galaxy about 10^68 atoms, and the observable universe about 10^79 atoms.

              And this is just a single moment in time. Atoms also are in the quantum realm, so not sure this would all aggregate properly in the end. Point masses are normally assumed and not sure an atom would truly constitute a point mass in this context.

              This is what I believe Stacy was emphasizing when he said the only computer currently powerful enough to calculate the universe is the universe itself (probably ever).

              So we clearly have to replace “atom” with something significantly larger. With Gauss’s law of gravity, we can model densities, which leads to Poisson’s equation for gravity. With fast Fourier transforms (and inverse transforms) we can reduce the computational complexity from O(n^2) to O(n log n).

              A galaxy like the Milky Way at a resolution of 1 parsec (100 kpc radially and 10 kpc vertically) would require 10^16 bytes (100 * 10^5 * 10^5 * 10^4) to store, or 1% the current total global cloud storage. This is if we could even accurately model a galaxy to a 1 parsec accuracy (GAIA DR3 is only 1%-2% of the Milky Way locally biased around our Sun). Computation of a dynamically static gravitational potential using a Poisson solver at 10^6 calculations per second would likely require on the order of 10^9.5 seconds (10^14 log 10^14 / 10^6) or about 100 (10^2) years. You could probably throw it at a whole bunch of CUDA cores and run it in parallel and get it down to around a year or maybe even a month, but to what end?

              Most of these cubic parsecs would be basically empty, hence the need for adaptive meshes with some threshold for density. This would definitely speed things up, at a cost of possibly ignoring something significant (MOND is in the weak acceleration realm, small things may end up being big things). And this is static and not dynamic, and just Newton and not GR. Ultimately, you’d end up with (presumably) Phantom of RAMSES.

              MOND is observed as a boost on Newtonian acceleration, so you need to compute Newton first, then apply the boost afterwards depending on its relationship to a0. MOND is an observed phenomenon, not a full theory. But it is based only on what we can observe, whereas the required dark matter, as discussed, has an enormous phase space.

              FYI, I was able to get decent results and performance from the above code using Stacy’s Milky Way mass model at a granularity of 1 kpc and density in millions of solar masses. I ignored the vertical because Stacy’s model is essentially two dimensional (i.e., thin disk). I modeled the spiral arms using density wave theory. I only went out to 25 kpc (51 x 51 = 2,601 cells) and was able to run through the code in about 20 seconds on my laptop (about 300K iterations per second). I admit this is still very much a “toy” model, but even these coarse results clearly show the declining rotation curve in Newton, and the flat rotation curve in MOND, with orbital velocities in the right range, and the expected perturbations caused by the spiral arms.
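
              As a cross-check on a toy model like that, the deep-MOND limit pins down the flat rotation speed directly via v^4 = G * M * a0. A sketch, where the baryonic mass is just an illustrative number:

              G, a0, Msun = 6.674e-11, 1.2e-10, 1.989e30   # SI units
              M = 6e10 * Msun               # illustrative baryonic mass
              v = (G * M * a0) ** 0.25      # deep-MOND flat rotation speed
              print(v / 1e3)                # ~176 km/s, the right ballpark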

  13. The expanding Universe offers a strong hint that something is being overlooked. The current paradigm is that gravitational collapse clears out the Voids. But, what if that picture is backwards? What if the Voids actively expel baryons? In an expanding Universe the moment (?) self gravitating structures form you have two disparate Spacetime regimes. One is static and belongs to the gravitational structures. The other belongs to the expansion. If the expansion (Hubble Flow) carries structures away from each other, why isn’t it recognized that localized expanding voids (which is the only place where the expansion can occur) carry away baryons? And eventually expel them from their volumes?
    Assuming this point is valid, this disparity of Spacetimes must be evidence of unexplored physics. If our universe consisted of only two self-gravitating structures we’d notice they receded from each other, and an explanation that they were being carried away from each other by Spacetime expansion would be valid. However, in our Universe gravitational structures are bounded by expanding Voids. It seems reasonable to ask whether, or not, there’s a ‘push’ being exerted against those static fields by the surrounding expansion. I find it curious that the accelerated expansion and the MOND/DM phenomena seem to be related. “Why MOND”, you ask. IMO environment is the key. But, it may take a quantum computer to sort out the variables.

  14. In the abstract of the ‘Andromeda dwarfs in light of MOND’ paper linked to above, you say ‘MOND distinguishes between regimes where the internal field of the dwarf, or the external field of the host, dominates. The data appear to recognize this distinction, which is a unique feature of MOND not explicable in LCDM.’

    So you’ve traced some quite specific behaviour of the EFE there. Could BCGs be showing a different version of the EFE for clusters, just as clusters have a different version of the RAR? I can’t see why large ellipticals in clusters should behave differently from large ellipticals elsewhere, other than some kind of EFE.

    This is just a loose idea, and it may not work – for instance the BCG at the centre of a cluster is probably in the Newtonian regime for the cluster’s overall field. But could it be that the EFE is bringing in the external field, as EFEs do, but for some reason with the accompanying set of rules attached as well?

    1. I don’t understand what is going on in clusters. It seems that whatever “extra” is going on there is shared by their BCGs. That makes some sense if there remains an undetected mass component. But – I really don’t understand what is going on in clusters.

      1. So I’ve been reading (watching conference videos) about neutrinos, and I still find them very confusing, but anyway… Something like an 11 eV sterile neutrino might work. It’s got enough mass to serve as the dark matter the CMB requires. It would be fun to work out the dynamics of how big a potential region you’d need to capture them gravitationally. Neutrinos don’t interact much with anything, but (as you pointed out) being fermions they interact with each other. So I’ve got this gas of very light fermions. I guess I need to know their temperature – is there some way to guess at that? Did they decouple from the rest of matter before the photons did? Well, Angus (2009) says it might work: https://academic.oup.com/mnras/article/394/1/527/1112629

          1. Yeah, well, the dynamics of the neutrinos would have to be worked out. Depending on the mass and temperature, they could be too hot to stick to something galaxy-sized, but clusters of galaxies might work. If I could guess at the temperature, I’d have an average velocity, and then I could figure out how big a potential would be needed. It’s kinda like hydrogen, which escapes from the Earth but is captured by bigger planets and the Sun.

          1. Ahh, not sure (I can’t see any of those articles except for the abstract). But do they assume Newtonian-type gravity? And doesn’t the limit change somewhat if MOND is true? I’m thinking of MOND and DM together.

            (Unrelated) So I calculated the maximum density of a fermion gas: the mass divided by the Compton wavelength cubed (m/λ^3). I got 2×10^-11 kg/m^3 for a 10 eV neutrino. (Lots of rounding, so it could be off by a factor of 2 or more.)
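
            Here’s that estimate in code; the answer is sensitive to whether one uses h or ħ in the Compton wavelength, which may account for part of the rounding:

            ```python
            H = 6.626e-34      # Planck constant, J s
            HBAR = 1.055e-34   # reduced Planck constant, J s
            C = 2.998e8        # speed of light, m/s
            EV = 1.602e-19     # joules per eV

            m = 10 * EV / C**2               # 10 eV neutrino mass, kg
            for name, h in (("h", H), ("hbar", HBAR)):
                lam = h / (m * C)            # Compton wavelength
                print(name, m / lam**3)      # max density m/lambda^3, kg/m^3
            ```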

            1. Oh sorry, the link to your pages was great. I think I can find all my numbers there. So neutrinos decoupled a bit before photons, but it doesn’t make a great deal of difference in the temperature. About 2 K.
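
               Given that temperature, a rough velocity estimate follows – a sketch assuming the relic distribution keeps the relativistic Fermi–Dirac momentum spectrum, whose mean momentum is about 3.15 kT/c:

               ```python
               K_B = 1.381e-23    # Boltzmann constant, J/K
               C = 2.998e8        # speed of light, m/s
               EV = 1.602e-19     # joules per eV

               T_nu = 1.95                  # relic neutrino temperature today, K
               m = 10 * EV / C**2           # a 10 eV neutrino, kg
               p = 3.15 * K_B * T_nu / C    # mean relic momentum (assumption above)
               print(p / m / 1e3, "km/s")   # ~16 km/s for 10 eV, i.e. ~160 km/s per eV
               ```

               That’s slow compared with either galaxy or cluster potentials, so simple escape-velocity capture isn’t the whole story; as I understand it, the phase-space (Tremaine–Gunn) limit – essentially the maximum-density calculation above – is what restricts how much can pile up in galaxy-scale wells.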

          2. Neutrinos with some mass are interesting in the context of the residual discrepancy MOND suffers in clusters. It needs something heavy enough to stick to cluster-mass potential wells but not individual galaxies.

            Have we been talking about sterile neutrinos, or about any dark matter particle of similar mass, such as hypothetical axions?

            What neutrino mass would be heavy enough to stick to cluster-mass potential wells but not to individual galaxies?

  15. In the context of PSG (which may be of interest, I hope so – there’s a near-proof I summarised in the ‘still flat after a million light years’ post), in ongoing efforts to guess wtf is going on, it’s starting to look like something to do with field combining. At any of these large scales a lot of gravity fields combine, for instance just to produce an acceleration for orbiting matter of 1.2e-10 m/s^2. In PSG each field is made of radially travelling waves at an extremely small scale, which dissipate; matter is rotating waves at a similar scale that latch onto the field via helical path refraction. This led to a set of equations that mimic GR to 8 decimal places (Kerr 2023). When two fields combine, their backgrounds of waves combine, and the result is like a single field, leading to, for instance, the EFE.

    To try to explain the RAR, perhaps when the waves in many combined fields dissipate to the point where GM/r^2 = 2e-9 m/s^2, they’re weak enough to start to affect each other, whether through collisions or something else. This speeds up the dissipation process, boosting accelerations (which arise from a rate of change in the ‘density’ of the medium, i.e. from the dissipation). In clusters the waves come from different directions, so the change to the dissipation pattern kicks in quickly, and 2e-9 is the number used in calculations for the field. In individual galaxies they’re more aligned, so the transition is slower, going from the same starting point, 2e-9, down to a0 (the idea of a ‘directional’ aspect comes up in the discussion on the ‘a few words about the Milky Way’ post). Any comments on this conceptual outline would be appreciated – it may or may not fit well with the data, and someone who knows the data better than I do may see that at once.

    1. Are clusters quantitatively different from galaxies in the way this idea needs them to be? There’s always stuff coming from different directions, but clusters are more crowded, so maybe? Seems like a stretch, though.

  16. arXiv:2408.08102 (astro-ph), submitted 15 Aug 2024: “Hydrostatic equilibrium of X-ray gas in HMG: four representative cases”, Robert Monjo.

    “However, MOND presents difficulties in explaining the Radial Acceleration Relation (RAR) observed in galaxy clusters, and moreover, it does not completely eliminate the need for dark matter, since it requires using sterile neutrinos to explain the observed hydrostatic equilibrium of the hot gas. Our results show that the hydrostatic equilibrium in the four systems is naturally adjusted to the HMG model without the need for fitting parameters or adding sterile neutrinos, which are required for MOND theories.”

    Any thoughts?

    1. Not that I know of, because rotation curves are very much in the weak-field limit where the Newtonian limit is obtained – i.e., the simplifying assumption in the thread you cite. To go beyond that, one would have to come up with some other metric appropriate to galaxies that is for some reason different. Very different – GR corrections enter as (v/c)^2, so for a big 300 km/s galaxy, we’re talking about an effect of one part in a million. This would be a factor of 100 weaker still for a 30 km/s dwarf galaxy. That is far too small to explain the mass discrepancy in amplitude (a factor of > 5, not one in a million), and it goes in the wrong direction (the discrepancy is larger in dwarfs than in giants). So it is hard to give credence to claims that the acceleration discrepancy is some sort of subtle GR effect.
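
       The arithmetic is quick to check:

       ```python
       c = 2.998e8                     # speed of light, m/s
       for v in (300e3, 30e3):         # big spiral vs. dwarf, m/s
           print(f"v = {v/1e3:.0f} km/s: (v/c)^2 = {(v/c)**2:.0e}")
       # 1e-06 for 300 km/s; 1e-08 for 30 km/s
       ```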

      1. The geodesic equation, at least according to Section 4 of the paper linked below, unifies inertia and gravity. If the gravitational field vanishes or becomes minimal, there remains an inertial term. That would seem to suggest it might well be applicable to the low-acceleration regime, wouldn’t it?

          1. A change to inertial mass rather than the effective force of gravity could work. Rather than making gravity stronger, particles could get easier to push around. How this might occur remains unclear to me, but maybe this is a way forward.

            1. I don’t think the mass needs to change. The structure of the geodesic equation seems designed to accommodate a transition from a gravitational regime to an inertial state in which the mass would resist a change in velocity as the gravitational conditions tailed off.

  17. The small-scale waves travel radially, so in disk galaxies at 2e-9 m/s^2, at a given point a lot of them are travelling roughly in parallel, coming from nearer the centre. So there are fewer collisions or interactions between them, and it takes until 1.2e-10 m/s^2 to complete the transition to a new dissipation pattern. In clusters they’re not travelling in any preferred direction, so once they start affecting each other at 2e-9 m/s^2, the transition is quicker.

    Is this quantitatively as it needs to be? It’d need the end point of each transition to be a value for a0, such that the geometric mean of a0 and what the Newtonian acceleration would otherwise have been – i.e. sqrt(a0·gN) – compensates for the change to the field and gives the right acceleration. It’s possible, because in either regime the dissipation pattern affects accelerations for matter: the local ‘density’ of the medium sets the transmission speed of space, which goes straight into Snell’s law, affecting the helical refraction mechanism. And the transmission speed’s rate of change with radius – a 1/r^2-type expression in the Newtonian regime – sets the acceleration. So a new dissipation pattern, in which the waves fade out faster due to interacting with each other, would boost accelerations by boosting that rate of change.

    Of the few things I know about the small-scale waves: they behave like light, they slow everything else down in a gravity field, and they’re at a smaller scale than everything else. So perhaps at 2e-9 they start slowing themselves down as well. I’ve found that if they get ‘bunched up’ – if the field gets increasingly compressed radially – it can fit observations. If you change the radius term in GM/r^2 to r’ = (r_M·r)^1/2, where r_M = (GM/a0)^1/2 is the MOND radius, so the acceleration becomes GM/(r_M·r), the main part of MOND comes out. I know how the waves slow other waves down; there’d be a need to see whether self-interaction of that kind leads to MOND. At present, all I can say is that it looks like it might.
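
    That substitution is exactly the deep-MOND limit, since GM/(r_M·r) = sqrt(GM·a0)/r = sqrt(gN·a0). A quick numerical check (the mass and radius are arbitrary illustrative values):

    ```python
    import numpy as np

    G, A0 = 6.674e-11, 1.2e-10           # SI units
    M = 1e11 * 1.989e30                  # illustrative galaxy mass, kg
    r = 30 * 3.086e19                    # 30 kpc, outside the MOND radius

    r_M = np.sqrt(G * M / A0)            # MOND radius: where g_N = a0
    g_sub = G * M / (r_M * r)            # the substituted form GM/(r_M r)
    g_dml = np.sqrt(G * M / r**2 * A0)   # deep-MOND limit sqrt(g_N a0)
    print(g_sub, g_dml)                  # equal, by algebra
    ```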
