Question of the Year (and a challenge)

Why does MOND get any predictions right?

That’s the question of the year, and perhaps of the century. I’ve been asking it since before this century began, and I have yet to hear a satisfactory answer. Most of the relevant scientific community has aggressively failed to engage with it. Even if MOND is wrong for [insert favorite reason], this does not relieve us of the burden to understand why it gets many predictions right – predictions that have repeatedly come as a surprise to the community that has declined to engage, preferring to ignore the elephant in the room.

It is not good enough to explain MOND phenomenology post facto with some contrived LCDM model. That’s mostly[1] what is on offer, being born of the attitude that we’re sure LCDM is right, so somehow MOND phenomenology must emerge from it. We could just as [un]reasonably adopt the attitude that MOND is correct, so surely LCDM phenomenology happens as a result of trying to fit the standard cosmological model to some deeper, subtly different theory.

A basic tenet of the scientific method is that if a theory has its predictions come true, we are obliged to acknowledge its efficacy. This is how we know when to change our minds. This holds even if we don’t like said theory – especially if we don’t like it.

That was my experience with MOND. It correctly predicted the kinematics of the low surface brightness galaxies I was interested in. Dark matter did not. The data falsified all the models available at the time, including my own dark matter-based hypothesis. The only successful a priori predictions were those made by Milgrom. So what am I to conclude[2] from this? That he was wrong?

Since that time, MOND has been used to make a lot of further predictions that came true. Predictions for specific objects that cannot even be made with LCDM. Post-hoc explanations abound, but are not satisfactory as they fail to address the question of the year. If LCDM is correct, why is it that MOND keeps making novel predictions that LCDM consistently finds surprising? This has happened over and over again.

I understand the reluctance to engage. It really ticked me off that my own model was falsified. How could this stupid theory of Milgrom’s do better for my galaxies? Indeed, how could it get anything right? I had no answer to this, nor does the wider community. It is not for lack of trying on my part; I’ve spent a lot of time[3] building conventional dark matter models. They don’t work. Most of the models made by others that I’ve seen are just variations on models I had already considered and rejected as obviously unworkable. They might look workable from one angle, but they inevitably fail from some other, solving one problem at the expense of another.

Predictive success does not guarantee that a theory is right, but it does make it better than competing theories that fail for the same prediction. This is where MOND and LCDM are difficult to compare, as the relevant data are largely incommensurate. Where one is eloquent, the other tends to be muddled. However, it has been my experience that MOND more frequently reproduces the successes of dark matter than vice-versa. I expect this statement comes as a surprise to some, as it certainly did to me (see the comment line of astro-ph/9801102). The people who say the opposite clearly haven’t bothered to check[2] as I have, or even to give MOND a real chance. If you come to a problem sure you know the answer, no data will change your mind. Hence:

A challenge: What would falsify the existence of dark matter?

If LCDM is a scientific theory, it should be falsifiable[4]. Dark matter, by itself, is a concept, not a theory: mass that is invisible. So how can we tell if it’s not there? Once we have convinced ourselves that the universe is full of invisible stuff that we can’t see or (so far) detect any other way, how do we disabuse ourselves of this notion, should it happen to be wrong? If it is correct, we can in principle find it in the lab, so its existence can be confirmed. But is it falsifiable? How?

That is my challenge to the dark matter community: what would convince you that the dark matter picture is wrong? Answers will vary, as it is up to each individual to decide for themself how to answer. But there has to be an answer. To leave this basic question unaddressed is to abandon the scientific method.

I’ll go first. Starting in 1985, when I was first presented with the evidence in a class taught by Scott Tremaine, I was as much of a believer in dark matter as anyone. I was even a vigorous advocate, for a time. What convinced me to first doubt the dark matter picture was the fine-tuning I had to engage in to salvage it. It was only after that experience that I realized that the problems I was encountering were caused by the data doing what MOND had predicted – something that really shouldn’t happen if dark matter is running the show. But the MOND part came after; I had already become dubious about dark matter in its own context.

Falsifiability is a question every scientist who works on dark matter needs to face. What would cause you to doubt the existence of dark matter? Nothing is not a scientific answer. Neither is it correct to assert that the evidence for dark matter is already overwhelming. That is a misstatement: the evidence for acceleration discrepancies is overwhelming, but these can be interpreted as evidence for either dark matter or MOND.

The important thing is to establish criteria by which you would change your mind. I changed my mind before: I am no longer convinced that the solution to the acceleration discrepancy has to be non-baryonic dark matter. I will change my mind again if the evidence warrants. Let me state, yet again, what would cause me to doubt that MOND is a critical element of said solution. There are lots of possibilities, as MOND is readily falsifiable. Three important ones are:

  1. MOND getting a fundamental prediction wrong;
  2. Detecting dark matter;
  3. Answering the question of the year.

None of these have happened yet. Just shouting “MOND is falsified already!” doesn’t make it so: the evidence has to be both clear and satisfactory. For example,

  1. MOND might be falsified by cluster data, but its apparent failure is not fundamental. There is a residual missing mass problem in the richest clusters, but there’s nothing in MOND that says we have to have detected all the baryons by now. Indeed, LCDM doesn’t fare better, just differently, with both theories suffering a missing baryon problem. The chief difference is that we’re willing to give LCDM endless mulligans but MOND none at all. Where the problem for MOND in clusters comes up all the time, the analogous problem in LCDM is barely discussed, and is not even recognized as a problem.
  2. A detection of dark matter would certainly help. To be satisfactory, it can’t be an isolated signal in a lone experiment that no one else can reproduce. If a new particle is detected, its properties have to be correct (e.g., it has the right mass density, etc.). As always, we must be wary of some standard model event masquerading as dark matter. WIMP detectors will soon reach the neutrino background accumulated from all the nuclear emissions of stars over the course of cosmic history, at which time they will start detecting weakly interacting particles as intended: neutrinos. Those aren’t the dark matter, but what are the odds that the first of those neutrino detections will be eagerly misinterpreted as dark matter?
  3. Finally, the question of the year: why does MOND get any prediction right? To provide a satisfactory answer to this, one must come up with a physical model that provides a compelling explanation for the phenomena and has the same ability as MOND to make novel predictions. Just building a post-hoc model to match the data, which is the most common approach, doesn’t provide a satisfactory, let alone a compelling, explanation for the phenomenon, and provides no predictive power at all. If it did, we could have predicted MOND-like phenomenology and wouldn’t have to build these models after the fact.

So far, none of these three things have been clearly satisfied. The greatest danger to MOND comes from MOND itself: the residual mass discrepancy in clusters, the tension in Galactic data (some of which favor MOND, others of which don’t), and the apparent absence of dark matter in some galaxies. While these are real problems, they are also of the scale that is expected in the normal course of science: there are always tensions and misleading tidbits of information; I personally worry the most about the Galactic data. But even if my first point is satisfied and MOND fails on its own merits, that does not make dark matter better.

A large segment of the scientific community seems to suffer a common logical fallacy: any problem with MOND is seen as a success for dark matter. That’s silly. One has to evaluate the predictions of dark matter for the same observation to see how it fares. My experience has been that observations that are problematic for MOND are also problematic for dark matter. The latter often survives by not making a prediction at all, which is hardly a point in its favor.

Other situations are just plain weird. For example, it is popular these days to cite the absence of dark matter in some ultradiffuse galaxies as a challenge to MOND, which it is. But neither does it make sense to have galaxies without dark matter in a universe made of dark matter. Such a situation can be arranged, but the circumstances are rather contrived and usually involve some non-equilibrium dynamics. That’s fine; that can happen on rare occasions, but disequilibrium situations can happen in MOND too (the claims of falsification inevitably assume equilibrium). We can’t have it both ways, permitting special circumstances for one theory but not for the other. Worse, some examples of galaxies that are claimed to be devoid of dark matter are as much a problem for LCDM as for MOND. A disk galaxy devoid of either can’t happen; we need something to stabilize disks.

So where do we go from here? Who knows! There are fundamental questions that remain unanswered, and that’s a good thing. There is real science yet to be done. We can make progress if we stick to the scientific method. There is more to be done than measuring cosmological parameters to the sixth decimal place. But we have to start by setting standards for falsification. If there is no observation or experimental result that would disabuse you of your current belief system, then that belief system is more akin to religion than to science.


[1] There are a few ideas, like superfluid dark matter, that try to automatically produce MOND phenomenology. This is what needs to happen. It isn’t clear yet whether these ideas work, but reproducing the MOND phenomenology naturally is a minimum standard that has to be met for a model to be viable. Run-of-the-mill CDM models that invoke feedback do not meet this standard. They can always be made to reproduce the data once observed, but not to predict it in advance as MOND does.


[2] There is a common refrain that “MOND fits rotation curves and nothing else.” This is a myth, plain and simple. A good, old-fashioned falsehood sustained by the echo chamber effect. (That’s what I heard!) Seriously: if you are a scientist who thinks this, what is your source? Did it come from a review of MOND, or from idle chit-chat? How many MOND papers have you read? What do you actually know about it? Ignorance is not a strong position from which to draw a scientific conclusion.


[3] Like most of the community, I have invested considerably more effort in dark matter than in MOND. Where I differ from much of the galaxy formation community* is in admitting when those efforts fail. There is a temptation to slap some lipstick on the dark matter pig and claim success just to go along to get along, but what is the point of science if that is what we do when we encounter an inconvenient result? For me, MOND has been an incredibly inconvenient result. I would love to be able to falsify it, but so far intellectual honesty forbids.

*There is a widespread ethos of toxic positivity in the galaxy formation literature, which habitually puts a more positive spin on results than is objectively warranted. I’m aware of at least one prominent school where students are taught “to be optimistic” and omit mention of caveats that might detract from a model’s reception. This is effective in a careerist sense, but antithetical to the scientific endeavor.


[4] The word “falsification” carries a lot of philosophical baggage that I don’t care to get into here. The point is that there must be a way to tell if a theory is wrong. If there is not, we might as well be debating the number of angels that can dance on the head of a pin.

57 thoughts on “Question of the Year (and a challenge)”

  1. Stacy,

    I suspect the answer to your question is more about the psychology of group behavior than about cosmology or the physical sciences. Getting on the bandwagon. Cosmologists are people too.
    The assumption is that with ever more cosmologists, that many more possibilities would be explored, but it has the opposite psychological effect, as the prevailing wisdom becomes that much more entrenched and difficult to question, given that large groups require most to be followers, not leaders. It seems more a consequence of academia as a whole, not just any particular field.

    The physical model I come up with is that there is an inherent centripetal dynamic to wave behavior, in that it synchronizes as a path of least resistance. So what is referred to as gravity, the centripetal effect associated with mass, is the cause of mass, not its effect. This effect goes from the barest bending of light to the vortices at the center of galaxies, with mass as a stable, intermediate stage that our tactile sensibilities tend to concentrate on, so we put it at the center of our models, like we once put earth at the center of the cosmos.

    A similar issue is Dark Energy. The original assumption was that gravity slowed the rate of expansion at a steady rate, from the visible edge, where sources appear to be receding at close to the speed of light, to closer sources, where the effect is much slower. Yet what Perlmutter and company found was the rate dropped off rapidly, then flattened out. So while it was assumed gravity caused the initial slowing, some additional force had to be postulated for why it didn’t flatten completely, thus Dark Energy.
    Yet if cosmic redshift is an optical effect, then by compounding on itself it would explain this curve in the rate as it goes parabolic.


      1. As the comment about market bubbles goes, they can remain irrational for longer than you can remain solvent, betting against them.


  2. Black holes should be able to capture dark matter as well as baryonic matter, I assume. Most (all?) observed galaxies have a super-massive black hole in their central regions. Could galaxy-formation models with and without dark matter predict typical black hole masses versus age of the galaxy to provide some evidence? I would assume with LCDM predicting much more dark matter than baryonic matter that there would be a significant difference.


    1. You are reasonable to think there would be a difference, but there isn’t much of one: the growth of black holes is all normal matter in the high acceleration regime. The only role dark matter/MOND play is to get the ball rolling by forming the first galaxies. It is hard to get enough mass together to make a supermassive black hole at high redshift, but somehow the universe managed to do so. The reason it is hard is angular momentum: most particle orbits, be they normal or dark matter, do not come anywhere near the central black hole and are not subject to capture. The only way to funnel material down to the center is to transfer angular momentum away from it. That can happen in gas through various mechanisms, but not in dark matter, which lacks those non-gravitational mechanisms. The central black hole is simply too small a target to get hit by much dark matter.


  3. As you said in a previous post: “Winning isn’t everything. It’s the only thing.” Predictive power is the gold standard, end of story.

    Reality is full of new irreducible properties/behaviors, and MOND is one of these. There is no point trying to “explain” new irreducible properties; you need to accept them, and only direct observations and/or experiments can reveal them.

    But theoreticians always will try to accommodate these new irreducible properties to their cherished preconceptions, refusing to accept their irreducibility.

    P. Anderson’s wisdom keeps popping up.


  4. Even if it is a modification of gravity, rather than an application of GR as claimed, Deur’s approach has a broader range of applicability than MOND (e.g. it works in clusters) while reproducing its essential conclusions, and providing a fundamental theoretical framework for the results, in addition to being very simple (in principle no free parameters except Newton’s constant, not even a cosmological constant). No other DM or modified gravity theory is as successful. http://dispatchesfromturtleisland.blogspot.com/p/deurs-work-on-gravity-and-related.html


    1. Reading this posting, and particularly “Essentially, self-interacting gravitons, rather than going off randomly in all directions, tend to veer towards direct gravitational fields between clumps of mass, making those fields stronger, while weakening the fields in the direction of empty space” it did occur to me that Deur’s theory must predict an alignment between the planes of disk galaxies (along which the attraction is stronger) and other nearby galaxies, while under GR there should be no alignment. Has anyone looked at the alignment of disk galaxies in clusters?


    2. Mr. Oh-Willeke,

      Thank you for the link. I do not have the technical skill to judge the many accolades you have bestowed on Deur’s work. But I did download the slideshow. The analysis by analogy certainly seems to deserve all due consideration.

      Of course, to the best of my knowledge, gravitons have not been experimentally observed. So, there is that.


  5. An answer to number 3:

    Using the scientific method, we can check whether the redshift used to measure star rotation velocities in edge-on galaxies has been correctly interpreted in the first place.

    To this purpose, the JWST could perhaps perform a detailed study of star rotation velocities using face-on galaxy observations.


  6. “MOND might be falsified by cluster data, but it’s apparent failure is not fundamental. There is a residual missing mass problem in the richest clusters, but there’s nothing in MOND that says we have to have detected all the baryons by now.”

    True. But I have a hunch that additional baryons aren’t even needed. Over the past several years, increasingly high-resolution imagery has come in from, for example, the VLA telescope, which shows that clusters have large radio halos. These radio halos are evidence of a tenuous, highly relativistic gas that produces large amounts of non-thermal pressure, which throws off any calculation based on hydrostatic equilibrium. This effect can be of order unity depending on the specific distribution of the radio halo.

    Furthermore if we add up the relativistic pressure from the radio halo and the thermal pressure from the X-ray gas, the pressure is several percent of the mass-energy density of the rest mass of the system. Now there isn’t a settled relativistic version of MOND but given the weak and strong lensing results of Brouwer and Tian it is reasonable to assume MOND applies to all terms of T_mu_nu. This should reduce the missing mass problem further by several percent by taking other elements of T_mu_nu into account.

    Milgrom notes in his cluster conundrum paper that the problem for MOND is particularly severe in the cores of cool-core clusters but can arise in all parts of the cluster. This matches qualitatively with observations from radio halos. In the most relaxed clusters the radio halos are small and concentrated in the core of the cluster. For less relaxed (and usually larger) clusters the radio halo extends as far or even farther than the X-ray gas.

    If this hunch is right then it makes sense that clusters are problematic because they would be objects that are simultaneously experiencing MOND and relativistic effects.

    Unfortunately, packages like MBProj2 and JoXSZ are already extremely complicated, never mind modifying them into MOND versions. And given the small number of people working on MOND, I doubt there will be enough hours invested in the problem to chase down hunches such as these.


    1. Good points. I often worry about the common assumption of thermal equilibrium in clusters that is used to define the dynamical mass, and concur that weakening that assumption (or better yet, measuring it) can help. Whether it can help enough is not clear to me. Then there are the complications that you mention, which do indeed preclude progress. Something goofy is going on in the cores of clusters (that’s true in either theory) but what exactly I don’t know.


    2. Is it really that hard to understand that different levels of complexity may have different behaviors/properties?

      MOND is good at galaxy level complexity but it fails on cluster level complexity, and that should be expected.

      Complexity always will be a boundary for the predictive power of any theory, because complexity is a source of irreducibility. Higher complexity levels will have new irreducible, emergent properties, just look around you.

      Once again as P. Anderson said: More is Different.


      1. I don’t think you are getting the point of fundamental science. The point is to predict as many things as possible in advance using as few equations as possible. The greatest scientific breakthroughs occur when someone recognises that different complex-looking things actually follow the same governing laws. Newton’s falling apple, the trajectory of cannonballs and the motions of the Sun, Moon and planets in the sky all follow from one simple equation. That one equation then lets you extrapolate to things that haven’t been observed yet (Neptune in the time of Le Verrier). Inventing new equations for every different object you encounter just leaves you with thousands of fitting functions that you can’t use in new contexts. Unification is important.

        MOND is a fundamental law. It should apply universally. It should apply to laboratory experiments*, binary stars, tiny dwarfs with mere thousands of stars, large galaxies regardless of composition, galaxy groups, galaxy clusters, cosmic filaments, large scale structure and the universe as a whole. Just saying that galaxy clusters are “complex” and that therefore MOND shouldn’t apply doesn’t explain anything nor does it justify why MOND does not apply.

        Yes science sometimes does model reality with multiple levels of explanation. You’d be mad to try to explain the mating behaviour of two cockatoos using quantum electrodynamics. In principle QED covers the behaviours of all the particles that the cockatoos consist of but there are way way way too many particles and interactions between those particles to keep track of them all and do the math in any practical sense. In such a case it makes sense to use standalone fitting functions and treat the cockatoos as “emergent” phenomena (though the word “emergent” doesn’t mean anything unless it refers to a specific model).

        But doing so comes at a loss of generality. Which means your theory is weaker. Taking that multiple-level approach to the extreme, you treat everything as a separate complex thing that deserves its own fit. And you lose all predictive power. This is the route that dark matter theorists take. MOND is powerful because it doesn’t yield to this impulse to just fit each thing or group of things separately as if different laws governed them.

        As for galaxy clusters I see no reason to think they are more like cockatoos than like spiral galaxies. Things do not automatically require more complex laws when the number of parts increases.

        Finally I think you should be more careful when using concepts such as “complexity” and “emergence”. The following three short essays explain why:

        https://www.readthesequences.com/Mysterious-Answers-To-Mysterious-Questions

        https://www.readthesequences.com/The-Futility-Of-Emergence

        https://www.readthesequences.com/Say-Not-Complexity

        *All laboratories are in the Newtonian regime though the work of Norbert Klein does indicate there may be MOND effects measurable on Earth given certain versions of MOND.


        1. MOND is a “fundamental” law that applies to galaxy level complexity, exactly as Quantum mechanics is a fundamental law that applies only to low complexity quantum sets.

          Quantum mechanics is useless at biological level complexity and beyond as new strong/irreducible emergent properties are present in complex quantum assemblies.

          Again, complexity is a boundary for the predictive/explanatory power of any theory, and this has been shown by results in condensed matter physics and in formal mathematics.

          The idea of a theory of everything, or a “universally” applicable theory independent of the complexity level, has already been shown to be wrong. Naive Reductionism is intrinsically flawed and ultimately contrary to objectivity.

          See for example:

          Strong emergence in condensed matter physics
          https://arxiv.org/abs/1909.01134

          And in formal mathematics:
          Is Complexity a Source of Incompleteness?
          https://arxiv.org/abs/math/0408144


          1. “Quantum mechanics is a fundamental law that applies only to low complexity quantum sets.”

            This tells me you don’t understand what decoherence is. Quantum mechanics (or more accurately quantum electrodynamics) applies everywhere throughout the universe. Just because it is easier in practice to use classical physics when things become hot, large and fast doesn’t mean the quantum mechanics disappears. Things can be treated as classical when decoherence is fast. If you want to claim that quantum mechanics doesn’t apply in the classical limit you have to explain why decoherence isn’t a thing anymore which makes your theory a lot more complicated.

            This reminds me of a conversation Eliezer Yudkowsky once had with a friend:

            I once met a fellow who claimed that he had experience as a Navy gunner, and he said, “When you fire artillery shells, you’ve got to compute the trajectories using Newtonian mechanics. If you compute the trajectories using relativity, you’ll get the wrong answer.”

            And I, and another person who was present, said flatly, “No.” I added, “You might not be able to compute the trajectories fast enough to get the answers in time—maybe that’s what you mean? But the relativistic answer will always be more accurate than the Newtonian one.”

            “No,” he said, “I mean that relativity will give you the wrong answer, because things moving at the speed of artillery shells are governed by Newtonian mechanics, not relativity.”

            “If that were really true,” I replied, “you could publish it in a physics journal and collect your Nobel Prize.”

            You are doing the same thing as the former navy gunner here. It is a way of thinking that violates Occam’s Razor and likely lands you with the wrong answer.

            Saying “complexity” doesn’t absolve you of the need to explain why the physics that works in one situation doesn’t work in another. You can’t just magically turn off a fundamental force whenever you feel like it. That is not how nature works. The universe is fundamentally simple and governed by simple laws of physics. This is why science works and prediction is possible at all.

            As for clusters, introducing a modification of modified Newtonian dynamics is just unnecessary when we haven’t even fully tested whether MOND actually fails. And even if it eventually turns out MOND is falsified, which I don’t think is likely, you’d still need an actual quantitative reason, not just “complexity”. There would have to be some physical quantity that changes the law if it gets big or small. It is not a size-based modification, because then large galaxy groups would not follow MOND either (they do). Nor is it a feature of X-ray gas density, because small galaxies with X-ray halos work just fine in MOND. It is likely not a potential-depth or curvature modification either, because then cosmic filaments wouldn’t work in MOND (they do). Ultimately, finding a further modification of MOND specifically to fit clusters is just really contrived.


            1. Complexity is a limit to the predictive/explanatory power of any theory, including quantum mechanics and general relativity.

              Quantum mechanics (quantum electrodynamics) is useless, even in principle, for explaining complex biological systems, herd behavior, the emergence of intelligence, social organization, etc. More is Different, irreducibly different.

              The fact that any real number is the limit of a sequence of rational numbers doesn’t make real numbers rational numbers.

              Once again read:
              Strong emergence (irreducibility) in condensed matter physics
              https://arxiv.org/abs/1909.01134

              Naive Reductionism is intrinsically limited and the dark matter blunder is a textbook example of that fact.


              1. jeremyjr01,

                Thank you!!!

                This is a great paper.

                I spent some time looking at the appeal to authority to which you had been subjected. After reading the entries, I hit HOME and quickly found the sentence,

                “Thankfully not many have questioned what truth is.”

                In turn, this had led to the profound defense of common sense and naive logic in the entry,

                https://www.readthesequences.com/The-Simple-Truth

                Of course, there are mathematicians who do study “truth” with a profound concern for the efficacy of science. And, Drossel properly classifies defenses such as Yudkowsky’s for the unjustifiable metaphysics that it is (Section 6.2 addresses Yudkowsky’s contempt in so far as he keeps invoking “elan vital.”).

                The problem which presents itself in the quantum formalism may be attributed to “continuity.” Among Tarski’s contributions had been the observation that S_4 modal logic could be given a topological semantics. This had largely been ignored by symbolic logicians. When Kripke’s modal semantics appeared, Kripke had been exalted. Coming from category theory (discounted by symbolic logicians), Awodey and Kishida showed that “necessity” is associated with continuity,

                Click to access Awodey-Kishida.pdf

                That is, continuity cannot support determinism. And, formulating a probability calculus to address this indeterminism does not resolve the issue this creates for reductionism.

                Top-down notions seem to be implicit to our reasoning. Implicit to the law of inertia is the notion of perpetuity. A winning strategy in an ideal game with Nash equilibrium shows that Nash equilibria assume a completed infinity. People in computation wish to emphasize finite operations. However, a partial function on the natural numbers is “partial” with respect to a totality. The halting problem makes no sense in the absence of a tape of infinite extent.

                Recently, I have been looking at the attempt by Dershowitz and Gurevich to axiomatize the Church-Turing thesis classically. And, I have been comparing this with Drago’s analysis of that thesis in terms of intuitionism and modal logic. The question which presents itself is a simple one: how anyone can justify “the next step” of a Turing machine without top-down organization.

                This led me to “final cause” in Aristotle and teleology in a more general context.

                Another computational researcher, namely Scott Aaronson, likes to unload on teleology in much the same way as Yudkowsky disparages “elan vital.”

                And, for what this is worth, Yudkowsky having so much to say about rationality, games of sequential rationality with behavioral strategies can be defined to include player choices with zero valuations. However, to describe a Nash equilibrium for such games, every player choice must have a positive valuation.

                This is because Bayesian probability is involved. The choice excluded by Bayesian probabilities, then, is “none of the above” because it introduces a division by zero.

                I have difficulty reconciling a definition of rationality which would exclude human scientists from choosing “none of the above.”

                There is something terribly unscientific in that.


              2. So isn’t emergence feedback between context and content? Neither is primary.
                Nodes and networks. Organisms and ecosystems. Particles and fields.

                As the basic flaw in determinism is that the act of determination can only occur in the present, there seems to be a natural, linear focus on entities and results, rather than processes.
                Operators are verbs.
                Time as dimension is map, not territory.
                In fact, space as dimensions is map, not territory, based on our location centric perspective. The conceptual parameters of space are equilibrium and infinity. Zero to infinity.


          2. Formally you can define a complexity measure for formal statements and theories; then anything with a higher complexity measure than a theory’s own complexity measure, or independent of the theory’s assumptions, will be unpredictable/irreducible from it.

            So a theory’s predictive/explanatory power is always limited by its complexity measure and its assumptions: complexity is a boundary for its predictive/explanatory power; complexity is a source of irreducibility.

            This is fully in sync with P. Anderson’s More is Different, informally: the total is sometimes more than the sum of its parts.

            Reference: Is Complexity a Source of Incompleteness?
            https://arxiv.org/abs/math/0408144


          3. jeremyjr01,

            I looked at the paper on incompleteness and did not get very far very quickly. As first-order theories both Zermelo-Fraenkel set theory and Peano arithmetic have infinitely many axioms. Neither is finitely specified.

            As the paper attributes this idea to Chaitin, I found his referenced paper and took a quick look.

            First, Chaitin is very clear on how he has to look at metamathematics somewhat differently in order to obtain a complexity result. Second, and appropriately I suppose, he is imagining the concept of a proof as one does in order to write a proof assistant. His methodology is of a form compatible with bounded arithmetic. By contrast, the incompleteness result has the interpretations that it does precisely because Goedel is able to use divisibility among natural numbers to keep classifications of strings representing different types of string interpretations ( that is, syntactic categories like “symbol,” “formula,” and “derivation”) apart from one another. He can use this to show that there must be a formula true in the standard model which is not provable.

            If one deprecates the semantical aspect of Goedel’s result — and this is common among researchers in computation — one may reasonably question whether one is even speaking about Goedellian incompleteness at all.

            Nevertheless, there are good reasons for these results. Switching functions segregate into those which are linearly separable and those which are not. Call the former threshold functions (Hu, “Threshold Logic”).

            One may associate linear separability with logical classification by “properties.”

            The number of parameters in a Boolean polynomial may be understood as a count of dimensions. As the number of parameters used in Boolean polynomials increases without bound, the ratio of threshold functions to the total number of switching functions for a given dimension becomes arbitrarily small. So, in a sense, ordinary logic using properties becomes less effective with greater complexity.

            The connection between first-order formulas and Boolean polynomials is the completeness theorem for first-order logic. The use of Henkin witnessing constants to accommodate quantifiers effectively constitutes a reduction to propositional logic. John Barwise parenthetically includes this exact statement in his proof of completeness in “The Handbook of Mathematical Logic.”


            1. Chaitin has a relatively accessible paper for physicists on this topic using information-theoretic arguments:

              “if one has ten pounds of axioms and a twenty pound theorem, then that theorem cannot be derived from those axioms.”

              Gödel’s theorem and information. International Journal of Theoretical Physics.

              Click to access georgia.pdf


              1. jeremyjr01,

                Thank you for the link. I will look at it.

                The problem with “mathematics for non-mathematicians” is not unlike the problem of “physics for non-physicists.” When I said that your first paper link misrepresented Goedel’s incompleteness theorems and that Chaitin had been careful to “redefine metamathematics” to obtain his result, I had been quite serious. First-order theories have infinitely many axioms, period.

                As an undergraduate, I turned to mathematics because I could not reconcile work hours with laboratory hours. I believed, as many naive people are told, that mathematics is “the language of science.” Well, kudos to everyone with educated parents who learn the pitfalls of that belief before expending “blood and treasure.”

                The axioms for a metric are problematic for science. There is an inherent vagueness which, for example, permits William Lawvere to develop an infinitesimal calculus from different principles than the tradition arising historically. It had taken a great deal of effort for me to clearly understand exactly how a tensor metric on differential manifolds is not comparable to ordinary metrics. One would never sort that out from “physics for non-physicist” articles.

                Now, looking carefully at the Lie algebra axioms, they actually impose the “worst” metric space axiom characterization into the mathematics built upon their use. Although it may not matter for calculations of interest to physicists, this feeds into the “math belief” problem of scientists (and others) pointing to equations and demanding others to “prove them wrong.”

                Of course, Mr. Freundt had noticed a slight misconstrual in my simplistic use of “math belief” some time ago. People are accustomed to speak in terms of “models” so that issues of belief appear to be circumvented. But, that is another matter.

                Just try to remember that people often refer to what others have written (Goedel and Chaitin, for example) without actually having read their work.

                And, along similar lines, the word “mathematics” is made just as vague as the word “God” by people invoking it to “prove.”

                Thank you very much for your paper references. I have been enjoying them.


              2. jeremyjr01,

                Addendum: First-order Zermelo-Fraenkel set theory and first-order Peano arithmetic have infinitely many axioms. I should not have spoken for every first-order theory with a generality.

                And, for what it is worth, there is no first-order theory which has only finite models. Keep this in mind everywhere someone is speaking about “finiteness.” It cannot be made precise with our best effort at making logic precise.


              3. Chaitin’s arguments expand on Godel’s results, as they show that incompleteness/irreducibility is not a rare occurrence but common/pervasive. In this context you can show, with an appropriate topology, that the set of independent/irreducible statements is dense (actually a stronger result: they are co-rare).

                Reference:
                Is Independence an Exception?

                Click to access independ_exception.pdf


        2. What if explanation is finite, by definition, but actual reality, the territory, is not?
          We can spiral into those endless rabbit holes of knowledge, yet the end result is that old saying about expertise as knowing more and more about less and less, until you know absolutely everything about absolutely nothing.
          Remember the people leading armies are the generals, while specialist is one rank above private. Is there some overall, general dynamic, running through the infinity of detail?
          My own framing device is energy and form. Energy manifests, form defines. Energy goes past to future, form goes future to past. Energy drives the wave, while the fluctuations rise and fall. Energy radiates out, form coalesces in. Between infinity and equilibrium. Zero to infinity. Black holes and black body radiation.
          Form synchronizes, energy harmonizes. Nodes and networks. Organisms and ecosystems. Particles and fields.
          Consciousness goes past to future, thoughts go future to past. Suggesting consciousness functions as a kind of energy.
          Life is this very complex feedback loop (one of my faves; is it allowed?) between the extremes, but life is this thermodynamic cycle of growth expanding out, as form/structure coalesces in. Some as seed, the rest radiated back out as fertilizer.

          I realize your eyes have glazed over, since this is outside the scientific Overton window, but I think, given the degree to which science has become a Tower of Babel, as all the experts congeal around their favorite models, it does, at least, raise the question of, where do we go from here? Most will say, further down the rabbit holes, but what if the funding runs short……. Thinking outside the box is hard, when the Plan is to build the box, but, like nature, it builds up and it breaks down.


  7. Has the RelMOND paper by Skordis and Zlosnik made any difference to the dark matter debate? What was the impact of this paper? It’s been a few years… And what do you think of their approach now? Thanks…


    1. It is hard to assess, and there are a variety of reactions. People who want to ignore it do so. A common refrain is that no theory besides LCDM can fit the CMB. When one points out that it has been done, the reaction is often “that theory is complicated, so I don’t like it.” As if LCDM isn’t complicated.
      There’s a lot of whataboutism. In the first version of RelMOND (now dubbed AeST), they only showed the fit to the CMB power spectrum. Rather than acknowledge that success, cosmologists immediately asked “what about the galaxy power spectrum?”, which is now in the paper as a result. Rather than engage with that, they say the theory has too many new fields…


      1. Yep, we just posted that. This is just an exploration of AeST; I don’t think it is fatal yet but it is weird. Unaddressed in this paper: dark matter does even worse. I had written that up (the data to which we refer) but the referee didn’t like that interpretation and I haven’t had time to get back to it.


  8. I regard your point ‘… reproducing the MOND phenomenology naturally is a minimum standard that has to be met for a model to be viable’ as the key. Even most proponents of CDM admit that MOND gets the phenomenology of star dynamics in galaxies right. What is rarely admitted is that this occurs with essentially one single parameter (a0), which is even fixed for all cases – though phenomenologically determined, of course. This makes it already predictive. There are other relations implied by MOND, specifically BTF (a rough numerical illustration of that relation follows after this comment).
    The problem with MOND for theorists is that it is purely phenomenological, without any hint of how it comes about from some principle. Additionally it has the weakness of being tied to nonrelativistic dynamics without a clear relativistic extension of which it is the classical limit. This makes clear that it can only be a part of the full story. (L)CDM does not have this problem. But to be observationally correct, (L)CDM must be shown to yield the observed (MOND-like) dynamics without referring to individual galaxies and fitting only these!
    That would be the challenge to prove (L)CDM is consistent with observations, which can be demanded as a criterion.
    This corresponds to, e.g., showing GR has Newtonian dynamics as limiting behaviour – nobody would (or hardly could) check each case individually with GR.
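    For concreteness, the deep-MOND limit ties the flat rotation speed to the baryonic mass alone: V_flat^4 = G * M_b * a0, which is the baryonic Tully-Fisher (BTF) relation mentioned above. A minimal Python sketch, assuming the commonly quoted a0 = 1.2e-10 m/s^2 and round-number constants (the example masses are arbitrary illustrations, not fits to any data):

    G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
    a0 = 1.2e-10     # assumed MOND acceleration scale, m/s^2
    Msun = 1.989e30  # solar mass, kg

    def v_flat_kms(m_baryon_msun):
        """Deep-MOND prediction: V_flat = (G * M_b * a0)^(1/4), returned in km/s."""
        return (G * m_baryon_msun * Msun * a0) ** 0.25 / 1000.0

    for m in (1e8, 1e9, 1e10, 1e11):
        print(f"M_b = {m:.0e} Msun  ->  V_flat ~ {v_flat_kms(m):.0f} km/s")

    A single, fixed a0 is all that enters, which is the sense in which the relation is predictive rather than fitted galaxy by galaxy.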


    1. “understand how it comes about from some principle.”

      New irreducible (strong emergent) properties are by definition not “explainable” by an existing set of principles, exactly as the parallel axiom in Euclidean Geometry is independent from the other axioms.

      And precisely that is the issue that theoreticians refuse to even see: they assume that existing principles are enough to explain everything.

      Stacy McGaugh has already shown a correlation between the radial acceleration of galaxies traced by rotation curves and that predicted by the observed distribution of baryons, and that this radial acceleration relation is tantamount to a natural law for rotating galaxies; obviously you don’t need dark matter at all. This radial acceleration relation is essentially a new strong emergent property of galaxies that can only be discovered by direct observations (objectivity).
      https://arxiv.org/abs/1609.05917


      1. Three points:
        1. You have no proof for your claim that MOND is irreducible. Or do you have one? Can you prove that nobody can derive a0?
        2. Anderson starts his article (More is Different) with an ammonia molecule, in which the nitrogen atom constantly oscillates through the plane of the 3 H-atoms (with a frequency of about 10^10 per second). As soon as the atoms get bigger, and especially when you have more atoms, the pendulum atoms have to synchronize, which first becomes improbable and – with further increasing number – impossible. I see no connection here to galaxies and MOND. One will see MOND also in a 2-body problem: both stars only have to be far enough away from all other masses, so that a0 becomes dominant.
        3. I personally see a good chance that one can derive the Milgrom conjecture a0 ~ cH/6.
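        The numerical coincidence behind that conjecture is easy to check. A minimal sketch, assuming the comment’s H means the present-day Hubble constant (taken here as roughly 70 km/s/Mpc) and the canonical a0 of about 1.2e-10 m/s^2:

        import math

        c = 2.998e8              # speed of light, m/s
        H0 = 70e3 / 3.086e22     # 70 km/s/Mpc expressed in s^-1 (assumed value)
        a0 = 1.2e-10             # canonical MOND acceleration scale, m/s^2

        print(f"c*H0        = {c * H0:.2e} m/s^2")            # ~6.8e-10
        print(f"c*H0/(2*pi) = {c * H0 / (2 * math.pi):.2e}")  # ~1.1e-10
        print(f"c*H0/6      = {c * H0 / 6:.2e}")              # ~1.1e-10
        print(f"a0          = {a0:.2e}")

        Whether the factor of roughly 2*pi (or 6) can be derived from deeper principles is, of course, the open question.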


          1. The “connection” is simple: a complex set of stars, such as a galaxy, exhibits a new emergent property, such as a “rigid”/flat rotational speed. Systems with a huge number of discrete components will tend to exhibit “system” behaviors that may appear disconnected/irreducible from those discrete components.


            1. Why don’t two metallic molecules exhibit the rigidity of a solid?

              Why don’t two birds exhibit flock behavior?

              Why aren’t two individuals a society?

              Emergent properties/behaviors often need a large number of discrete components to “manifest”.


            2. I can also ask your question in reverse:

              If dark matter is supposed to account for around 30% of the mass of the Universe, why can’t its effects be detected in the solar system, or in “small” star assemblies?

              The “dark matter” effects are only present in large systems of stars, and they seem to be different in galaxies (where MOND is good) than in galaxy clusters, two different hierarchy levels.


              1. MOND is visible at small accelerations.
                There is no condition in MOND of the kind “must originate from many stars.” The only relevant parameter is a0 ~ 10^-10 m/s^2.


  9. Dear Stacy,

    Thank you immensely for your work and this blog.

    In your “One Law to Rule Them All” paper (and elsewhere) you say that a force-side interpolation function of the form
    nu(y) = (1 – exp(-sqrt{g_bar/g_dag}))^-1
    fits the SPARC data better than simpler functions like a generic double power-law or inertia-side fns like:
    mu(x) = x/(1+x) .
    I’m wondering what other inertia-side fns you’ve tried.
    E.g., have you tried
    mu(x) = 1 – exp(-|a|/a_0) ?
    I’m curious what value of a_0 would emerge as the best fit for that mu(x), and how the residuals compare relative to the nu(y) above? (I’m guessing/hoping it would be easy for you to try this with your extensive software arsenal — if you haven’t already, that is. 🙂)
    Kind regards.


    1. The function we chose to use fits the data well enough that we haven’t pursued many other possibilities (see https://arxiv.org/abs/0804.1314 for a list). Some functions can be excluded, but others give equally good fits over the range of the available data, including x/(1+x). The latter fails not for galaxies but in the solar system, where it predicts detectable deviations from a pure inverse square law.
      So, yes, one could imagine different functions, and each would fit with a slightly different a0. I have not tried to do this: just too many things to do. Instead, I’ve mostly tried to get to lower acceleration where one is in the deep MOND regime so the shape of the function (exactly where and how it bends) ceases to matter. Results there have, so far, been in the range 1.1 < a0 < 1.3 (in units of 10^-10 m/s^2). Hope to write this up some day, in my copious spare time.
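      To make that comparison concrete, here is a minimal numerical sketch (not the actual SPARC fitting machinery) that evaluates the force-side RAR function quoted above against the two inertia-side choices by solving g * mu(g/a0) = g_bar; a0 = 1.2e-10 m/s^2 is assumed rather than fit:

      import numpy as np
      from scipy.optimize import brentq

      a0 = 1.2e-10  # m/s^2, assumed acceleration scale

      def g_rar(gbar):
          """Force-side RAR fit: g_obs = g_bar / (1 - exp(-sqrt(g_bar/a0)))."""
          return gbar / (1.0 - np.exp(-np.sqrt(gbar / a0)))

      def mu_simple(x):  # the "simple" interpolation function
          return x / (1.0 + x)

      def mu_exp(x):     # the exponential mu(x) asked about above
          return 1.0 - np.exp(-x)

      def g_from_mu(gbar, mu):
          """Solve g * mu(g/a0) = g_bar numerically for the observed acceleration g."""
          return brentq(lambda g: g * mu(g / a0) - gbar, gbar, 1e6 * gbar + 10 * a0)

      for gbar in (1e-12, 1e-11, 1e-10, 1e-9):  # m/s^2, spanning the MOND regime
          print(f"g_bar={gbar:.0e}  RAR={g_rar(gbar):.2e}  "
                f"x/(1+x)={g_from_mu(gbar, mu_simple):.2e}  "
                f"1-exp(-x)={g_from_mu(gbar, mu_exp):.2e}")

      All three forms agree in the deep-MOND and Newtonian limits and differ mainly in how sharply they bend near a0, which is why very low-acceleration data (and, for x/(1+x), solar system constraints) are what discriminate between them.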


      1. If someone manages to derive a0, everything falls into place. For example, the fit function. The source of a0 is the key. a0 will terminate the search for dark matter.


  10. I’m not sure how or why a physical explanation for MOND’s effectiveness would falsify it. MOND doesn’t describe a cause; it describes a second-order effect, acceleration. If an explanation for the cause of the acceleration effect were found, wouldn’t that clarify MOND rather than falsify it?


  11. jeremyjr01,

    Let me re-iterate: Chaitin advances a “redefinition” of metamathematics. Blogs such as Dr. McGaugh’s, Dr. Woit’s, and Dr. Hossenfelder’s are specifically criticizing the “slippery slopes” upon which experts are declaring propositions to be scientific facts.

    Egregiously, they are being supported in this by philosophers of science who wish to redefine science using Bayesian probabilities.

    Goedel’s incompleteness theorem specifically addresses a program in the foundations of mathematics in which the “identity of an object” conforms with the sensible impression of a symbol on a page. So, for starters, if one does not believe in the objective reality of “mathematical objects” independent of spacetime (or space and time if spacetime is illusory), Goedel’s incompleteness theorems are largely irrelevant to what one otherwise accepts for mathematics. That is, if mathematics “begins with axioms,” calling upon numbers outside of an axiom system appears to be unfounded.

    Computation involves a different question which relates to first-order model theory through the role of sets in semantical interpretation. Look up Skolem’s paradox. Because of Turing’s analysis of human activities, he had been able to give the best description we have for a mechanical understanding of how to compute a number. The description still assumes infinity because the “tape” used to accommodate the recording of inputs and outputs is (teleologically) infinite.

    Computation is useful in modern mathematical foundations because the “structural recursion” used in Hilbert’s metamathematics to concatenate symbols into expressions can be compared with the human activity described by Turing. Its most important foundational role comes from the fact that formal languages described in metamathematics also assume a completed infinity. The specification of formal languages is such that computation can be used to verify that a given expression is “well-formed.”

    The Church-Turing thesis, stating that all possible definitions of effective computability are equivalent to a Turing machine, plays no role whatsoever in the (platonistic) definition of recursion used by Goedel in his incompleteness argument.

    While there are people looking at the possibility of axiomatizing the notion of computation and formulating a proof of the Church-Turing thesis as a theorem, this has not yet been done satisfactorily among those who are experts. It is also dubious if by “mathematics” one intends semantics in the sense of first-order logic. In his book, “Theory of Algorithms,” Markov identified the fact that reasoning about symbol strings using a classically bivalent logic requires a different semantical theory.

    In 2023, physics is having to wrestle with people using rhetoric to explain how their own good ideas “define” science.

    For mathematics, the foundational problems started with the introduction of viable non-Euclidean geometries (doubts about Euclid’s parallel postulate arose much earlier, as did criticism of infinities). Rhetoric leading to arguments about what mathematics may or may not be crystallized toward the end of the nineteenth century. People who study the foundations of mathematics have been experiencing this problem of “redefinition” for over a century. Today, speaking of pluralism with respect to foundations is far more common because mathematics has no facility for discerning a correct paradigm.

    That does not inhibit the economic incentive to make claims about how one’s work applies to “all of mathematics.” When you know enough, you see through the hyperbole.

    Again, thank you for the papers. I find all of the content informative, even if I do not accept the overreaching conclusions.


    1. Maybe spacetime as a mathematical frame. The network needs to be modeled as much as the nodes. Longitude, latitude and altitude. Mapping devices.

      Maybe that there is no singular paradigm is a hint, not a problem.
      If the map tried to incorporate all the information in the territory, it would revert back to noise.
      The only absolute is zero. The rest is relational.


      1. brodix,

        Map and territory, again? I rarely see that mentioned in my readings.

        Einstein had a fairly clear grasp of Hilbert’s formalist views (which are not those attributed to him by modern formalists). If this is taken into account, special relativity describes a “schema” of reference frames. One effect of this is to deprecate “at rest” in the two clauses of Newton’s first law. I am making an analogy with how “schema” is uninterpreted syntax in a first-order formal system. Fix an inertial frame (“interpret”) and “at rest” becomes meaningful.

        But, as links in this comment section have shown, uncritical application of special relativity to general relativity leads to erroneous conclusions. Distances in general relativity are hierarchically built up. First, triangulation. Then, identification of standard candles. Then more, of which I cannot speak intelligently.

        Now, when Meyerson tried to portray general relativity as geometrization, Einstein objected. General relativity had been a unification of inertia with gravitation in Einstein’s view. So, what does the obfuscation of “at rest” in relativistic inertia mean?

        One hint is the sense by which “virtual particles” cannot be correlated with a spacetime event (the non-geometric Einsteinian conception). Another hint is the Unruh effect if it exists and if I understand it correctly to any extent.

        One thing philosophers study is ontological dependence,

        https://plato.stanford.edu/entries/dependence-ontological

        The idea of a “mass shell” of which one can speak of “on shell” and “off shell” speaks to the dependency of “at rest” upon the mathematics of general relativity. More precisely, for energy from an energy soup to coalesce into a particle, the energy level must be sufficient to be “on the mass shell.” So, “particles” are ontologically dependent upon the physics described by general relativity.

        On the other hand, energy in a particular reference frame depends upon “color” as a “philosophical substance.” Different inertial reference frames will be associated with different “color profiles,” I think. This is what I am taking from the Unruh effect.

        So, “could” sufficiently different reference frames “witness” different interactions from one another if they could “witness” each other’s frames in addition to their own?

        Any answer is beyond my skills.

        With regard to zero, the law of inertia speaks of “at rest” and “in motion.” The introduction of the calculus with Newtonian absoluteness changes this distinction to “initialness” and “betweenness.”

        This transformation accounts for Dr. Hossenfelder’s position that the differential mathematics does not support extension to “where the universe came from.” We cannot have knowledge of the initial conditions.

        By obfuscating “at rest,” relativistic mathematics effectively excludes zero from the mathematical description unless a particular reference frame is described.

        For what this is worth, there is a large corpus on Leibniz’ identity of indiscernibles and a much smaller one on the unity of opposites. If you look them up, you will be less likely to always turn to “maps and territories.”

        Leave that to the artificial intelligence guys who use Bayesian probability to exclude human beings from choosing “none of the above.” They are lost without linguistic maps and territories, as are their machinations.


        1. mls,

          If measures of time and space shrink to zero in a frame moving at the speed of light, what of the frame with the fastest clock and longest ruler?
          Wouldn’t it be the most at rest in the overall equilibrium of the vacuum, the unmoving void of absolute zero?
          If so, then wouldn’t this be a primary characteristic of space, along with infinity?

          What fills space is this energy, which radiates towards infinity, while the forms it manifests coalesce toward equilibrium. Both entropic. Black body radiation and black holes.


        2. There are other fields where maps and territories are a useful analogy.
          Say surgery, or rocket science. If, say, the O-ring or the artery isn’t where the diagrams say it should be, the consequences tend to feed back on themselves all too quickly and the reality check arrives with a large thud.
          With math though and its assorted physics disciples, the map/equation rules and any inconvenient messiness can always be patched with an extra dimension, factor, field, particle, property, energy, whatever fills the gaps. Then it’s all good again.


    1. Maybe we are looking at it backwards and the stable properties of mass are an intermediate effect of this centripetal dynamic of structure coalescing, radiating energy in the process, rather than gravity being a property of mass. From the barest bending of light to black holes.
      The energy goes past to future, as the forms expressed go future to past.
      Energy drives the wave, the fluctuations rise and fall.

