A common refrain I hear is that MOND works well in galaxies, but not in clusters of galaxies. The oft-unspoken but absolutely intended implication is that we can therefore dismiss MOND and never speak of it again. That’s silly.

Even if MOND is wrong, that it works as well as it does is surely telling us something. I would like to know why that is. Perhaps it has something to do with the nature of dark matter, but we need to engage with it to make sense of it. We will never make progress if we ignore it.

Like the seventeenth century cleric Paul Gerhardt, I’m a stickler for intellectual honesty:

“When a man lies, he murders some part of the world.”

Paul Gerhardt

I would extend this to ignoring facts. One should not only be truthful, but also as complete as possible. It does not suffice to be truthful about things that support a particular position while eliding unpleasant or unpopular facts* that point in another direction. By ignoring the successes of MOND, we murder a part of the world.

Clusters of galaxies are problematic in different ways for different paradigms. Here I’ll recap three ways in which they point in different directions.

1. Cluster baryon fractions

An unpleasant fact for MOND is that it does not suffice to explain the mass discrepancy in clusters of galaxies. When we apply Milgrom’s formula to galaxies, it explains the discrepancy that is conventionally attributed to dark matter. When we apply MOND to clusters, it comes up short. This has been known for a long time; here is a figure from the review by Sanders & McGaugh (2002):

Figure 10 from Sanders & McGaugh (2002): (Left) the Newtonian dynamical mass of clusters of galaxies within an observed cutoff radius (rout) vs. the total observable mass in 93 X-ray-emitting clusters of galaxies (White et al. 1997). The solid line corresponds to Mdyn = Mobs (no discrepancy). (Right) the MOND dynamical mass within rout vs. the total observable mass for the same X-ray-emitting clusters. From Sanders (1999).

The Newtonian dynamical mass exceeds what is seen in baryons (left). There is a missing mass problem in clusters. The inference is that the difference is made up by dark matter – presumably the same non-baryonic cold dark matter that we need in cosmology.

When we apply MOND, the data do not fall on the line of equality as they should (right panel). There is still excess mass. MOND suffers a missing baryon problem in clusters.
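To put a number on how far short it comes, here is a minimal back-of-envelope sketch in Python. The velocity dispersion and radius are assumed round numbers for a rich cluster, the estimators drop geometric factors of order unity, and the deep-MOND expression is only approximate since cluster accelerations sit near a0; real analyses (e.g., Sanders 1999) solve the hydrostatic equation for the X-ray gas.

```python
# Back-of-envelope Newtonian vs. MOND dynamical mass for a rich cluster.
# Illustrative only: sigma and r_out are assumed round numbers and
# geometric factors of order unity are dropped.
G    = 6.674e-11     # m^3 kg^-1 s^-2
a0   = 1.2e-10       # Milgrom's acceleration scale, m s^-2
Msun = 1.989e30      # kg
Mpc  = 3.086e22      # m

sigma = 1.0e6        # line-of-sight velocity dispersion ~1000 km/s (assumed)
r_out = 1.0 * Mpc    # radius of the outermost measurement (assumed)

g_obs  = sigma**2 / r_out        # crude estimate of the observed acceleration
M_newt = sigma**2 * r_out / G    # Newtonian dynamical mass
M_mond = sigma**4 / (G * a0)     # deep-MOND limit: g = sqrt(G*M*a0)/r

print(f"g_obs/a0 = {g_obs / a0:.2f}")            # clusters sit near a0, not far below it
print(f"M_newt   = {M_newt / Msun:.1e} Msun")
print(f"M_mond   = {M_mond / Msun:.1e} Msun")
```

With these made-up but representative inputs, the MOND dynamical mass comes out several times smaller than the Newtonian one, yet it still exceeds the baryons actually detected by roughly a factor of two, which is the offset in the right panel above.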

The common line of reasoning is that MOND still needs dark matter in clusters, so why consider it further? The whole point of MOND is to do away with the need for dark matter, so it is terrible if we need both! Why not just have dark matter?

This attitude was reinforced by the discovery of the Bullet Cluster. You can “see” the dark matter.

An artistic rendition of data for the Bullet Cluster. Pink represents hot X-ray emitting gas, blue the mass concentration inferred through gravitational lensing, and the optical image shows many galaxies. There are two clumps of galaxies that collided and passed through one another, getting ahead of the gas which shocked on impact and lags behind as a result. The gas of the smaller “bullet” subcluster shows a distinctive shock wave.

Of course, we can’t really see the dark matter. What we see is that the mass required by gravitational lensing observations exceeds what we see in normal matter: this is the same discrepancy that Zwicky first noticed in the 1930s. The important thing about the Bullet Cluster is that the mass is associated with the location of the galaxies, not with the gas.

The baryons that we know about in clusters are mostly in the gas, which outweighs the stars by roughly an order of magnitude. So we might expect, in a modified gravity theory like MOND, that the lensing signal would peak up on the gas, not the stars. That would be true, if the gas we see were indeed the majority of the baryons. We already knew from the first plot above that this is not the case.

I use the term missing baryons above intentionally. If one already believes in dark matter, then it is perfectly reasonable to infer that the unseen mass in clusters is the non-baryonic cold dark matter. But there is nothing about the data for clusters that requires this. There is also no reason to expect every baryon to be detected. So the unseen mass in clusters could just be ordinary matter that does not happen to be in a form we can readily detect.

I do not like the missing baryon hypothesis for clusters in MOND. I struggle to imagine how we could hide the required amount of baryonic mass, which is comparable to or exceeds the gas mass. But we know from the first figure that such a component is indicated. Indeed, the Bullet Cluster falls at the top end of the plots above, being one of the most massive objects known. From that perspective, it is perfectly ordinary: it shows the same discrepancy every other cluster shows. So the discovery of the Bullet was neither here nor there to me; it was just another example of the same problem. Indeed, it would have been weird if it hadn’t shown the same discrepancy that every other cluster showed. That it does so in a nifty visual is, well, nifty, but so what? I’m more concerned that the entire population of clusters shows a discrepancy than that this one nifty case does so.

The one new thing that the Bullet Cluster did teach us is that whatever the missing mass is, it is collisionless. The gas shocked when it collided, and lags behind the galaxies. Whatever the unseen mass is, it passed through unscathed, just like the galaxies. Anything with mass separated by lots of space will do that: stars, galaxies, cold dark matter particles, hard-to-see baryonic objects like brown dwarfs or black holes, or even massive [potentially sterile] neutrinos. All of those are logical possibilities, though none of them make a heck of a lot of sense.

As much as I dislike the possibility of unseen baryons, it is important to keep the history of the subject in mind. When Zwicky discovered the need for dark matter in clusters, the discrepancy was huge: a factor of a thousand. Some of that was due to having the distance scale wrong, but most of it was due to seeing only stars. It wasn’t until 40-some years later that we started to recognize that there was intracluster gas, and that it outweighed the stars. So for a long time, the ratio of dark to luminous mass was around 70:1 (using a modern distance scale), and we didn’t worry much about the absurd size of this number; mostly we just cited it as evidence that there had to be something massive and non-baryonic out there.

Really there were two missing mass problems in clusters: a baryonic missing mass problem, and a dynamical missing mass problem. Most of the baryons turned out to be in the form of intracluster gas, not stars. So the 70:1 ratio changed to 7:1. That’s a big change! It brings the ratio down from a silly number to something that is temptingly close to the universal baryon fraction of cosmology. Consequently, it becomes reasonable to believe that clusters are fair samples of the universe. All the baryons have been detected, and the remaining discrepancy is entirely due to non-baryonic cold dark matter.
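Spelled out with assumed round numbers (not a fit to any particular cluster), the arithmetic is:

```python
# The 70:1 -> 7:1 shift, in assumed round numbers.
M_star = 1.0                     # stellar mass (arbitrary units)
M_gas  = 9.0                     # intracluster gas: roughly an order of magnitude more
M_dyn  = 70.0                    # Newtonian dynamical mass

print(M_dyn / M_star)            # 70:1 with stars only
print(M_dyn / (M_star + M_gas))  #  7:1 once the gas is counted

f_b = 0.16                       # cosmic baryon fraction Omega_b/Omega_m (approximate)
print(1.0 / f_b)                 # ~6: temptingly close to 7:1
```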

That’s a relatively recent realization. For decades, we didn’t recognize that most of the normal matter in clusters was in an as-yet unseen form. There had been two distinct missing mass problems. Could it happen again? Have we really detected all the baryons, or are there still more lurking there to be discovered? I think it unlikely, but fifty years ago I would also have thought it unlikely that there would have been more mass in intracluster gas than in stars in galaxies. I was ten years old then, but it is clear from the literature that no one else was seriously worried about this at the time. Heck, when I first read Milgrom’s original paper on clusters, I thought he was engaging in wishful thinking to invoke the X-ray gas as possibly containing a lot of the mass. Turns out he was right; it just isn’t quite enough.

All that said, I nevertheless think the residual missing baryon problem MOND suffers in clusters is a serious one. I do not see a reasonable solution. Unfortunately, as I’ve discussed before, LCDM suffers an analogous missing baryon problem in galaxies, so pick your poison.

It is reasonable to imagine in LCDM that some of the missing baryons on galaxy scales are present in the form of warm/hot circum-galactic gas. We’ve been looking for that for a while, and have had some success – at least for bright galaxies where the discrepancy is modest. But the problem gets progressively worse for lower mass galaxies, so it is a bold presumption that the check-sum will work out. There is no indication (beyond faith) that it will, and the fact that it gets progressively worse for lower masses is a direct consequence of the data for galaxies looking like MOND rather than LCDM.

Consequently, both paradigms suffer a residual missing baryon problem. One is seen as fatal while the other is barely seen.

2. Cluster collision speeds

A novel thing the Bullet Cluster provides is a way to estimate the speed at which its subclusters collided. You can see the shock front in the X-ray gas in the picture above. The morphology of this feature is sensitive to the speed and other details of the collision. In order to reproduce it, the two subclusters had to collide head-on, in the plane of the sky (practically all the motion is transverse), and fast. I mean, really fast: nominally 4700 km/s. That is more than the virial speed of either cluster, and more than you would expect from dropping one object onto the other. How likely is this to happen?

There is now an enormous literature on this subject, which I won’t attempt to review. It was recognized early on that the high apparent collision speed was unlikely in LCDM. The chances of observing the Bullet Cluster even once in an LCDM universe range from merely unlikely (~10%) to completely absurd (< 3 x 10^-9). Answers this varied follow from what aspects of both observation and theory are considered, and the annoying fact that the distribution of collision speed probabilities plummets like a stone, so that slightly different estimates of the “true” collision speed make a big difference to the inferred probability. Exactly what the “true” gravitationally induced collision speed is remains somewhat uncertain, because the hydrodynamics of the gas plays a role in shaping the shock morphology. There is a long debate about this which bores me; it boils down to it being easy to explain a few hundred extra km/s but hard to get up to the extra 1000 km/s that is needed.

At its simplest, we can imagine the two subclusters forming in the early universe, initially expanding apart along with the Hubble flow like everything else. At some point, their mutual attraction overcomes the expansion, and the two start to fall together. How fast can they get going in the time allotted?

The Bullet Cluster is one of the most massive systems in the universe, so there is lots of dark mass to accelerate the subclusters towards each other. The object is less massive in MOND, even spotting it some unseen baryons, but the long-range force is stronger. Which effect wins?
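For intuition only, here is a cartoon of that race in Python. It is emphatically not the calculation described next: the masses and the starting separation are placeholders, the infall starts from rest rather than from cosmological initial conditions, and the point-mass treatment with the 'simple' interpolation function glosses over the subtleties of MOND two-body dynamics. It only shows the structure of the comparison: same infall, different force law and different mass budget.

```python
# Cartoon of head-on subcluster infall: Newtonian gravity on the full
# (dark + baryonic) mass vs. a MOND force law on the baryons alone.
# All numbers are placeholders; this is not the Angus calculation.
import math

G    = 6.674e-11     # m^3 kg^-1 s^-2
a0   = 1.2e-10       # m s^-2
Msun = 1.989e30      # kg
Mpc  = 3.086e22      # m
Gyr  = 3.156e16      # s

def nu_simple(y):
    """'Simple' MOND interpolation: g = nu(g_N/a0) * g_N."""
    return 0.5 + math.sqrt(0.25 + 1.0 / y)

def infall(M, r_start, r_end, mond=False, dt=1.0e13):
    """Drop from rest at r_start; return (speed, elapsed time) at r_end."""
    r, v, t = r_start, 0.0, 0.0
    while r > r_end:
        gN = G * M / r**2
        g = gN * nu_simple(gN / a0) if mond else gN
        v += g * dt
        r -= v * dt
        t += dt
    return v, t

M_lcdm = 1.7e15 * Msun   # total mass including dark matter (placeholder)
M_bary = 2.0e14 * Msun   # baryons only (placeholder)

for label, M, mond in [("Newtonian + DM    ", M_lcdm, False),
                       ("MOND, baryons only", M_bary, True)]:
    v, t = infall(M, 10 * Mpc, 0.5 * Mpc, mond=mond)
    print(f"{label}: {v / 1e3:.0f} km/s after {t / Gyr:.1f} Gyr")
```

In the cartoon, both the speed reached and the time it takes depend sensitively on the placeholder choices, which is exactly why the proper calculation with cosmological initial conditions matters.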

Gary Angus wrote a code to address this simple question both conventionally and in MOND. Turns out, the longer range force wins this race. MOND is good at making things go fast. While the collision speed of the Bullet Cluster is problematic for LCDM, it is rather natural in MOND. Here is a comparison:

A reasonable answer falls out of MOND with no fuss and no muss. There is room for some hydrodynamical+ high jinks, but it isn’t needed, and the amount that is reasonable makes an already reasonable result more reasonable, boosting the collision speed from the edge of the observed band to pretty much smack in the middle. This is the sort of thing that keeps me puzzled: much as I’d like to go with the flow and just accept that it has to be dark matter that’s correct, it seems like every time there is a big surprise in LCDM, MOND just does it. Why? This must be telling us something.

3. Cluster formation times

Structure is predicted to form earlier in MOND than in LCDM. This is true for both galaxies and clusters of galaxies. In his thesis, Jay Franck found lots of candidate clusters at redshifts higher than expected. Even groups of clusters:

Figure 7 from Franck & McGaugh (2016). A group of four protocluster candidates at z = 3.5 that are proximate in space. The left panel is the sky association of the candidates, while the right panel shows their galaxy distribution along the LOS. The ellipses/boxes show the search volume boundaries (Rsearch = 20 cMpc, Δz ± 20 cMpc). Three of these (CCPC-z34-005, CCPC-z34-006, CCPC-z35-003) exist in a chain along the LOS stretching ≤120 cMpc. This may become a supercluster-sized structure at z = 0.

The cluster candidates at high redshift that Jay found are more common in the real universe than seen with mock observations made using the same techniques within the Millennium simulation. Their velocity dispersions are also larger than comparable simulated objects. This implies that the amount of mass that has assembled is larger than expected at that time in LCDM, or that speeds are boosted by something like MOND, or nothing has settled into anything like equilibrium yet. The last option seems most likely to me, but that doesn’t reconcile matters with LCDM, as we don’t see the same effect in the simulation.

MOND also predicts the early emergence of the cosmic web, which would explain the early appearance of very extended structures like the “big ring.” While some of these very large scale structures are probably not real, there seem to be too many such things being noted for all of them to be illusions. The knee-jerk denials of all such structures remind me of the shock cosmologists expressed at seeing quasars at redshifts as high as 4 (even 4.9! how can it be so?), or clusters at redshift 2, or the original CfA stickman, which surprised the bejeepers out of everybody in 1987. So many times I’ve been told that a thing can’t be true because it violates theoreticians’ preconceptions, only for them to prove to be true, and ultimately become something the theorists expected all along.

Well, which is it?

So, as the title says, clusters ruin everything. The residual missing baryon problem that MOND suffers in clusters is both pernicious and persistent. It isn’t the outright falsification that many people presume it to be, but it sure doesn’t sit right. On the other hand, both the collision speeds of clusters (there are more examples now than just the Bullet Cluster) and the early appearance of clusters at high redshift are considerably more natural in MOND than in LCDM. So the data for clusters cut both ways. Taking the most obvious interpretation of the Bullet Cluster data, this one object falsifies both LCDM and MOND.

As always, the conclusion one draws depends on how one weighs the different lines of evidence. This is always an invitation to the bane of cognitive dissonance, accepting that which supports our pre-existing world view and rejecting the validity of evidence that calls it into question. That’s why we have the scientific method. It was application of the scientific method that caused me to change my mind: maybe I was wrong to be so sure of the existence of cold dark matter? Maybe I’m wrong now to take MOND seriously? That’s why I’ve set criteria by which I would change my mind. What are yours?


*In the discussion associated with a debate held at KITP in 2018, one particle physicist said “We should just stop talking about rotation curves.” Straight-up said it out loud! No notes, no irony, no recognition that the dark matter paradigm faces problems beyond rotation curves.

+There are now multiple examples of colliding cluster systems known. They’re a mess (Abell 520 is also called “the train wreck cluster”), so I won’t attempt to describe them all. In Angus & McGaugh (2008) we did note that MOND predicted that high collision speeds would be more frequent than in LCDM, and I have seen nothing to make me doubt that. Indeed, Xavier Hernandez pointed out to me that supersonic shocks like that of the Bullet Cluster are often observed, but basically never occur in cosmological simulations.

91 thoughts on “Clusters of galaxies ruin everything”

  1. Another problematic galaxy cluster is El Gordo:
    https://physicsworld.com/a/are-giant-galaxy-clusters-defying-standard-cosmology/

    Elena led a study on this recently, showing that the high mass and collision velocity so early in cosmic history is not compatible with LCDM:
    https://doi.org/10.3847/1538-4357/ace62a

    Galaxy clusters do seem to have formed earlier than expected in LCDM, but one does still have to understand the offset between the weak lensing and X-ray emission in the Bullet Cluster. It is not easy to do this without some form of collisionless matter.

2. It’s very confusing. And frustrating. Quantum physics seems stuck on some fundamental questions and so does astrophysics. I’ve noticed Sabine Hossenfelder flipping back and forth the last few years. As of her last video, she seems to be again leaning towards ΛCDM as a best fit to the data…

1. Her book “Lost in Math” was published a few years ago.
I have great respect for someone who is able to criticize their own group,
and I wrote a rather enthusiastic review.

      But in the meantime she has to live off her YouTube channel and fans
      and needs 1 to 2 sensations per week.

      In the summer, “Why is quantum mechanics non-local?” appeared on YouTube.
      Among other things, it is about entanglement.

      She explains entanglement with two envelopes,
      traveling in opposite directions (e.g. Bob and Alice). In each envelope
      is a slip of paper. One piece of paper has +1 on it, the other -1.
      If Bob now opens his envelope and finds +1.
      He then knows that Alice’s note says -1.
You don’t need “spooky action at a distance”.
      In this video, entanglement is reduced to a simple correlation.

      This explanation is easy to understand, misleading and wrong.
      It only works in a 2-dimensional world.
      In this world, you could travel in one direction (1st dimension)
      and measure the spin perpendicular to it (2nd dimension), for example.
      There would only be spin UP or spin DOWN. Everything would be fine.

      Unfortunately (or fortunately), we live in a 3-dimensional world.
      Here you have the choice of 360° angles at which you can measure the spin.
      If Bob measures the spin at 0°, Alice can measure it at 30°.
      And now there is no simple correlation.
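For reference, here is a minimal sketch of what standard textbook quantum mechanics predicts for a spin-1/2 singlet pair (nothing specific to the video; the detector angles are the usual CHSH choices). The envelope picture reproduces only the case where the two detectors are aligned.

```python
# Singlet-state correlations vs. the "two envelopes" picture.
# Standard textbook quantum mechanics; angles in degrees.
import math

def E(theta_deg):
    """Singlet correlation for detectors separated by theta: E = -cos(theta)."""
    return -math.cos(math.radians(theta_deg))

def p_opposite(theta_deg):
    """Probability that Alice and Bob get opposite results."""
    return (1.0 - E(theta_deg)) / 2.0

print(p_opposite(0))    # 1.00 -> perfect anti-correlation: the envelopes work here
print(p_opposite(30))   # 0.93 -> no longer a strict correlation
print(p_opposite(90))   # 0.50 -> no correlation at all

# CHSH combination with detector settings 0/90 degrees and 45/135 degrees:
S = abs(E(0 - 45) - E(0 - 135) + E(90 - 45) + E(90 - 135))
print(S)                # ~2.83 > 2: beyond any shared-slip-of-paper model
```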

      I have stopped following her…

      1. @Dr. Freundt,

        Although I have not been able to find the site again, I once read an explanation of the difference in terms of “message destruction.”

        So, to explain, given two messages in a classical scenario, if one is destroyed en route to its destination, then its correlate may still arrive and be read. The explanation about which I request clarification stated that an entangled scenario always has either both messages arriving or no messages at all.

        Does this simplification sound accurate? I ask because the Alice and Bob descriptions always seem to confuse the mathematics with “word problem vagueness.”

Meanwhile, your criticism seems more closely related to how the statistics in Stern-Gerlach experiments “recover” cosines in the limit. Obviously, the two are related. But, they are often portrayed separately in informal discourse.

        Thank you for any reply.

        1. I was referring to

          In the video, entanglement is repeatedly explained with a strict correlation, which does not work. If Bob measures spin up at 0 degrees, Alice can measure both spin up and spin down at 30 degrees.

          Your question: (I am referring to photons and measurement of polarization)
          If one photon flies to the left and is absorbed, the other still exists. Whether the absorption determines the polarization of the photon flying to the right, I cannot say. You would have to take a closer look at an experiment. What is certain is that if the polarization on the left is measured (better: determined by the measurement), the photon flying to the right has the required associated polarization instantaneously (without time delay). (If, as assumed above, there are different angles at which the polarization is measured, there are probabilities for the measurement result).

          A)
To explain this, one can assume “spooky action at a distance”. In this case, the two sides will mysteriously coordinate.
          B)
          Since many individual measurements are made for the above experiment and then statistically evaluated, it can also be assumed that the individual measurements are not statistically independent. Then you assume that everything is already determined from the past. The keyword here is superdeterminism.
          There are experiments in which the angle for the polarization measurement was made dependent on the light of distant stars. The result is always the same.

          My point of view:
          Nature is already incapable of forming stable orbits of three bodies. But if everything depends on everything else, the result should still be very clear…! I can’t do anything with B). I like A) much better. I see a chance of finding an explanation for A).

          1. @Dr. Freundt

            Thank you for your explanation.

Thanks to an answer on a stackexchange, I found the blog to which I had been referring. It is a 2020 post by Jacques Distler,

            https://golem.ph.utexas.edu/~distler/blog/archives/003186.html

            The only expression I had to describe the latter part of his entry,

            “There’s zero chance that one of their photons made it through while the other got absorbed”

            had been “message destruction.” Hopefully, you will get a little chuckle from my ignorance if you look at his actual explanation.

            With regard to your Part A, I think a great deal about such matters. When I speak of “mathematics,” I speak of my attempts to understand how all of this “objects of thought” crap has undermined scientific investigation. It is difficult to commit the energy to learn paradigms which have led to problems.

            In the theory of ortholattices, the “atoms” of the lattice order correspond to orthogonal linear dimensions. So, if you look at an 8-element, 3-atom (Boolean) lattice, you have a representation of a 3-dimensional space.

            One representative of a class of orthomodular lattices is a “horizontal sum” of a 3-atom lattice and a 2-atom lattice. The nature of this construction is that the 3-atom block and the 2-atom block are jointly orthogonal.

            It seems to me that this describes the orthogonality of spin states associated with Stern-Gerlach experiments.

            When a Cartesian product is taken with a 2-element lattice, one obtains a 20-element lattice with a 4-atom Boolean suborder. I discovered this 20-element lattice in a book by Beran explaining how a quantum experiment is translated into an orthomodular lattice.

            It is true that this is far removed from the calculations of physics. But, the 10-element lattice I am describing looks nothing like the 4-atom Boolean lattice which would represent every calculation involving 4 algebraic dimensions.

            I focus on spin in Stern-Gerlach experiments because of a description of Shannon’s work in relation to Boltzmann’s,

            “Dr. Shannon’s work roots back, as von Neumann has pointed out, to Boltzmann’s observation, in some work on statistical physics (1894), that entropy is related to ‘missing information,’ in as much as it is related to the number of alternatives which remain possible to a physical system after all the macroscopically observable information concerning it has been recorded.”

            As a “non-physicist,” this looks very much to me as if using thermodynamics for an “arrow of time” has set the stage for “fudge factors” like unobservable dark matter.

            But, I am just a stupid guy who has to swing a sledgehammer to pay my bills. Alan Turing did not even include my kind of work among the “intelligent activities” that an intelligent machine could accomplish.

            With regard to “missing information,” the trigonometric functions are peculiarly suspicious because of Richardson’s theorem,

            https://en.m.wikipedia.org/wiki/Richardson%27s_theorem

            The theory of real closed fields is decidable — but arbitrary use of trigonometric functions is not.

A recent Science News article reported on Jonathan Oppenheim’s attempt to unify gravitation and quantum mechanics without “quantum hubris.”

            Some readers here might find interest,

            https://journals.aps.org/prx/abstract/10.1103/PhysRevX.13.041040

            As for that 10-element lattice, the only image I could find is sideways and labeled with New Age interpretation,

            https://www.pinterest.com/pin/395683517237936830/

            Thank you again for your response. Like you, I think choice A is resolvable. But translating my deliberations into physics would be a long way off.

            1. I left the following comment on GOLEM: #

              Sorry, but your explanation is wrong.
              If you rotate the polarization filters by 45 degrees, they are either aligned the same or offset by 90 degrees. (It’s not exactly clear to me whether they both rotate in the same direction or in opposite directions).
              In one case they have polarizers aligned the same way again, in the other case they are perpendicular to each other. This is the same situation as with horizontal and vertical polarization filters. And the photons behave in the same way as with correlation.
              If you want to find a difference, the polarization filters must have 0, 30 and 60 degrees, for example. #

              “There’s zero chance that one of their photons made it through while the other got absorbed”
              Yes, in relation to polarizers this is correct. I had assumed in your question that the absorption happens on one side by a black plate. Then the other should certainly be able to pass through a polarization filter with 50%.

1. Thank you again for clarifying Jacques Distler’s explanation.

                For what this is worth, Bell’s theorem clearly eliminates epistemic interpretations. What I say about mathematics has more to do with epistemic considerations.

                Historically, “necessity” and “possibility” have been studied on a “logical spectrum” with impossibility at one end. This is a modal logic which branches into “possible worlds.”

                Problematically, the semantics for this is topological. Moreover, “necessity” is fundamentally bound to continuity. So, the conditions used in the calculus lead to reasoning about “multiverses” and such.

                A better spectrum may come from Shannon’s work in the sense of a spectrum with “fallibility” at one end and “impossibility” at the other.

                Science should always be fallible. Mathematical deduction is a constraint with respect to impossibility. Measurement in science constrains infallibility.

                The following remark by Warren Weaver may correspond to your sensibility toward superdeterminism,

                “In the limiting case where one probability is unity (certainty) and all the others zero (impossibility), then [entropy] is zero (no uncertainty at all — no freedom of choice — no information)”

            2. I am looking for model objects which, if they are numerous enough, create our world. And yes, they are neither particles nor waves. If it works, it would be something really new. (And it would save me trying to understand quantum mechanics. Just as nobody today tries to explain Ptolemy’s epicycles by the flapping of angels’ wings…)

              1. @Dr. Freundt

                lol

                First, good luck.

                Second, on that, at least, we share the same “angst.”

                Lastly, you are fortunate that you did not have to begin by sorting out science from science fiction!

                Thank you again, very much.

  3. Stacy McGaugh wrote:

Whatever the unseen mass is, it passed through unscathed, just like the galaxies. Anything with mass separated by lots of space will do that: stars, galaxies, cold dark matter particles, hard-to-see baryonic objects like brown dwarfs or black holes, or even massive [potentially sterile] neutrinos. All of those are logical possibilities, though none of them make a heck of a lot of sense.

    I know nothing about astronomy but feel strongly that sterile neutrinos should be included in the Standard Model of particle physics anyway. Therefore I would like to understand why they don’t make much sense in this respect.

    Let’s envision a combination of (a yet to be found relativistic version of) MOND with a particle model that includes sterile neutrinos. Looked at naively, this combination seems to have the potential to explain the offset from the diagonal in your MOND graph: Sterile neutrinos hardly interact at all, and it might take a whole galaxy cluster to bind them gravitationally. But within the cluster, the gravitation of single galaxies might not suffice to bind them. Hence the neutrinos would not significantly influence the pure-MONDian behaviour within each galaxy, but they would influence the behaviour of the cluster as a whole. (Whereas for the other options – stars, black holes, etc. – I would naively expect that they move around less freely in a cluster, thus have a harder time explaining the offset.)

    Have simulations been done of a MOND + sterile neutrinos scenario? Or is it unclear how to do such a simulation, because sterile neutrinos are expected to move fast, whereas MOND is a nonrelativistic theory?

    1. Sure, sterile neutrinos are a logical possibility. Simulations have even been done, by multiple different people, even. But they depend on properties of the hypothetical sterile neutrino (e.g., mass and number density) that we don’t know and can only speculate on, so the choices one makes in these simulations are difficult to generalize.

1. Thanks to everyone for the replies. I have now looked very briefly at half a dozen relevant papers, from Angus 2009 to Wittenburg et al. 2023.

        Obviously I don’t really have a clue what I’m talking about here, but I would expect the dynamics of galaxies and galaxy clusters to be a much cleaner test of the proposed new physics (sterile neutrinos + some version of MOND) than structure formation. In the simulation of structure formation, presumably all kinds of simplifications have to be made, some of which may be oversimplifications; one might overlook or underestimate some important mechanism, even one that has nothing to do with the new physics but can be understood entirely in terms of known physics. (Also, more adjustable parameters are involved in cosmological simulations like structure formation than in galaxy and cluster dynamics, aren’t they?)

        Therefore I’m not too worried if the simulation of structure formation is off. That might sort itself out later in some way. Whereas the pure-MOND offset from the diagonal in your figure that occurs in clusters but not single galaxies looks to me like a relatively clean signal that wouldn’t go away by itself.

        How stable are the results – in particular for structure formation, but also for dynamics of galaxies vs clusters – under slight changes of the adjustable parameters? (That may be addressed in the papers, I just have to read them.) The stabler the results are, the more worried I am when something is off.

        As far as I understand the summaries in the articles I’ve skimmed (for instance Section 10.2 in the 2022 survey by Banik and Zhao), sterile-neutrino MOND explains the dynamics of galaxies and clusters well (and apparently also the shape of the CMB), provided the sterile-neutrino mass is 11 eV/c^2. Did I read that correctly? It seems to confirm the naive intuition from my first post above. And paints a much brighter picture than “Clusters of galaxies ruin everything”.

        You wrote: “I don’t buy the premise of sterile neutrinos having the high mass density we adopt in these papers.”

        1.) Which parameter exactly do you mean? Omega_ν, assumed to be 0.2653237 in Wittenburg et al.? (By the way, these values are taken from LambdaCDM parameter fits, aren’t they? I wouldn’t have expected parameters to be specified with such high precision in an astronomy context. The Planck Collaboration data have fewer significant digits. What’s going on here? Does the simulation react sensitively to the last two digits?)

        2.) You don’t buy the premise in the sense that you would find a lower value more plausible (closer to the baryon mass density?), or just in the sense that the value could be anything and you hate fudge factors because they ruin the explanatory power of the theory?

          1. Oops. I did try that, but I must have screwed up somehow when I copy&pasted. Sorry. (Anyway, I guessed correctly.)

            1. Nils’s paper is excellent and comprehensive. We tried something similar in https://arxiv.org/abs/1305.3651. Some results there were positive (early cluster formation), some negative (the wrong mass function arises).

              More generally, I don’t buy the premise of sterile neutrinos having the high mass density we adopt in these papers. It’s a thing to consider, but really we’re just using it as a proxy for our ignorance of the underlying theory, much as Lambda and CDM are used in conventional cosmology.

  4. “or nothing has settled into anything like equilibrium yet.”

    That was the first thought that occurred to me.

    How likely is it that clusters considered in your first figure are in dynamic equilibrium? Larger distances mean longer time scales, but the answer could still be “very” and I have no intuition about that.

    Assuming they are, it seems to me that, considering the repeated successes of MOND in the past, rather than doubt the theory we should take it as a prediction: there is baryonic mass yet to be observed. Among candidates that are not hypothetical, some must be harder than others to detect? If that is so, let’s build a new telescope! There is something there to be seen, as surely as if General Relativity told you so.

    1. An old saw among observers of clusters is that there is no such thing as a cluster in equilibrium. However, I don’t think that, by itself, can explain the magnitude and persistence of the discrepancy.

If MOND were the standard model, then indeed, this would be a prediction that would launch a thousand observational programs to discover the baryons that have to be there. That is exactly the line of reasoning behind the casual attitude towards the missing baryon problem LCDM suffers in individual galaxies. It is also part of the line of reasoning that launched a thousand dark matter experiments: it has to be there, it is just waiting for us to discover through hard work and lots and lots of grant money.

  5. Stacy, when you estimate MOND in the past, do you keep a_0 at the same value, or do you assume that it changes with the radius of the universe?

    1. I have assumed that a0 remains constant. This is consistent with TF data out to z ~ 1.5.

      If one scales a0 with the expansion of the universe, then less happens early on.

  6. What if we understood gravity wrong?

What if all objects in the universe are growing in mass (and we have geological evidence on Earth, such as very young oceans, only about 180M years old)?

How would it affect our gravitational equation if we assume that the growth happens at a constant rate and that rate is hidden in the G constant?

    1. I don’t think we can scale our way out of problems this way. There are some pretty good constraints on the time variation of G; so good I’ve not thought about them in a long time.

1. What constraints are you referring to?

In my model it’s about 3×10^-16 1/sec. And it does not relate to a time variation of G. It’s a constant rate of change of any mass.

The physical meaning of the dark matter effect is a drain funnel of gravitational energy, which causes an accelerated growth of galaxy mass.

I know it sounds very weird, however it leads to a very simple modification of the gravity law with an additional term with inverse-square-root dependence on the distance. It is in good agreement with the Tully-Fisher relation.

        It was successfully applied in N-body simulations and allows building good mass models with acceptable Keplerian rotation curves for wide range of galaxies.

1. What we need is a conceptual picture. Looking through the mathematics without one is like looking for a needle in a haystack. There are too many ‘conceptual variables’ to guess the mathematics, so to solve these puzzles we need to start to think conceptually. It’s unlikely to help to say ‘well what if this quantity varies over time’, or something like that.

Now I’m not saying that’s all you do – you say ‘The physical meaning of the dark matter effect is a drain funnel of gravitational energy’, and perhaps you have a well fleshed-out conceptual picture underneath that; if so, good luck with it, that’s what we need. It needs to check a list of conceptual boxes (phenomenological ones), just as Milgrom’s theory checked a list of observational and mathematical ones.

The picture I sent Stacy yesterday, whether or not it’s the true picture, at least it is one. It explains the direct link between the visible matter and the discrepancy, what happens in clusters, and why accelerations are boosted beyond a0. Also why the Newtonian pattern persists but is as if increasingly ‘compressed’, why the inner pattern is referred to in MOND to derive the outer one, and other things.

The more candidate interpretations we have the better, even if they’re wrong – same with puzzles like QM (I talk to Carlo Rovelli in the second half of ‘the interactions avenue’ documentary about a conceptual interpretation for QM). We have to start thinking that way, and influencing each other to think that way, by coming up with ideas like that.

          1. I watched the video.
            Carlo is introduced as the head of the Quantum Gravity Theory Group in Marseille
            and has written over 200 publications.

            First, he praises quantum mechanics for its extraordinarily accurate results.
            Then he explains the double-slit experiment as it appears in all textbooks.
(A) We imagine a photon as a particle that passes through a double slit as a wave,
interferes with itself behind it, and produces an effect on a screen at a certain point.
            The usual wave-particle mechanism.

            He then presents his interpretation of quantum mechanics, the “Relational Interpretation” of quantum mechanics.
            It’s not wrong, but it doesn’t lead me to any new knowledge or insights.
            Everything looks more like a story to understand or reinterpret quantum mechanics.

            Then there’s the unpredictability of the world and entanglement and the Q&A…

            I found it disappointing, but sacrificing 37 minutes is okay.

            My point is this:
            A high school student with good grades at an exceptional high school would evaluate the double-slit experiment as follows:
            (B)

1. If we assume an electron (photon) is a wave, then the electron (photon) cannot have an effect on a small area
  of the screen. It must be a particle.
2. If we assume that an electron (photon) is a particle, then it cannot fly through both slits of a double slit at the same time.

It follows from 1. and 2. that an electron (photon) is neither a wave nor a particle.

            It follows that quantum mechanics is wrong in its assumptions and has nothing to do with our world,
            except for the good results and there is no need to interpret or try to understand it.
            (No intelligence is capable of understanding a contradiction).

Can you start with wrong premises and get right results?
            YES. Every primary school student has all the necessary knowledge. Every mathematician knows this, but no physicist.

            Maybe Carlo has a big disadvantage compared to the high school student above:
            He has never left the mainstream.
            He never had the opportunity to get up from his desk, take a step back
            and look at his own field from a certain distance…

            1. We’re off topic, but what I’ll say applies to the mass discrepancy as well, and other puzzles generally (for those who take them that way). I think your reasoning is flawed, because your reference points imply a hidden false assumption. The false assumption is that there are no ‘unknowns’ in the picture – you reason as if the picture is already complete. You haven’t left holes in the jigsaw, or room for missing pieces, but without leaving holes, there’s nowhere for new ideas to land.

              We don’t know what a quantum wave is, or what the equivalent particle state is, so we can’t make deductions of that kind, in which you say ‘that can’t possibly be so’. Instead there’s a need to say matter seems to have two natures, why would that be? We can even switch between them, what’s going on?

But if one takes an attitude that assumes we know it all already, it blocks progress – that kind of assumption is just beneath the surface of an enormous amount of stuff from the 20th century – it held us back in a disastrous way.

The physicists who made discoveries all started by admitting what we don’t know. They left holes in the jigsaw, with room for new ideas. Others paper over the cracks; good physicists are found staring into them. Progress is made by letting go of the comfortable idea that what we have in front of us is complete in some way, and then you just look for clues. Asimov is meant to have said the most exciting phrase in physics is not ‘eureka’, but ‘that’s funny’.

              In my picture (which hopefully came from leaving holes), a quantum wave is a single particle seen many times. It’s a rotating disturbance in a cylindrical dimension, which being a dimension, has no fixed orientation in space, so it’s at many orientations at once. Matter has to go where the axes go, as it consists of vibrations in them. But it doesn’t know where they are, because where they are is just a set of possibilities.

              So an emitted photon spreads out into a wave because it takes all these possible paths, at different orientations. Until a local positioning for the axes is established, you get wavelike behaviour, and a superposition – many possibilities. The possibilities are for the orientation of the axis on which a single particle is travelling. In fact, this is the only thing we know of, apart from the wave function itself, which can be in a physical superposition of possibilities – we’ve taken the dimensions more literally for four decades, and if you do, that’s the only other thing that creates a superposition. So it’s at the very least a suspect.

              This fits with the energy units being equal and fixed within a particular wave, but they can go smaller in another wave. If each wave is the same particle seen many times, then naturally the energy units within it will all be fixed and equal. (Maybe this will save some of the 37 minutes of talk about it with Carlo Rovelli, but there’s a shortened version there as well.)

              You talk about a need to stand back, I think we need to stand back from the mathematics, and start seeing the wood, not the trees. We’ve reached a point where we need to do that, and start to get some pictures underneath what we have.

              1. Incidentally, you seem to have watched the wrong video, these things you mention don’t happen in it:

                ‘First, he praises quantum mechanics for its extraordinarily accurate results. Then he explains the double-slit experiment as it appears in all textbooks. He then presents his interpretation of quantum mechanics, the “Relational Interpretation” of quantum mechanics.’

                None of that happens! It’s just a conversation I had with Carlo Rovelli first about the idea that interactions can replace measurements in QM, because to make a measurement one has to cause an interaction, and then about my interpretation.

              2. I’ll briefly bring that discussion back to gravity related questions. Some have implied that conceptual pictures alone are not enough, I agree. I sometimes read things online that make me want to say ‘well show me an equation’. Some also imply that if it’s not expressed mathematically, it doesn’t exist. There I disagree, that holds back progress by discouraging exploration right where progress might be made, the conceptual side.

                The mathematics that’s needed is often two steps away. First get to the picture, then to new mathematics from there. Many are trying to go straight to the last step, and skip the middle one, but that tends not to work. Going via the intermediate step, it’s possible to get to good mathematics, and if the picture is good enough, one might even be able to prove it. Here’s some mathematics that was found that way.

                It was found via a picture that has very simple geometry at the Planck scale (or an equivalent scale), where many expect complicated geometry. It works as a near-proof, and takes about 10 minutes to check. It was published in March ’23, among those who’ve checked it, some have said it works, no-one has criticised it. I hope it’s of interest, thanks for the discussion.

                summary_PDF | gwwsdk1

              3. I fully agree with your comment about developing mathematics from images (or models).

                I have skimmed your essay.
                If I understand it correctly, you want to replace curvature with refraction to explain gravity.
                As far as I can see, you are using classical refraction as known from optics.

                However, the general theory of relativity is based on the special theory of relativity. In this theory, for example, there is no absolute simultaneity. In the essay, however, time is not taken into account in any way.

                Do you really want to reintroduce absolute simultaneity?
                That will be difficult. I think it’s impossible. How do you want to explain, for example, the cosmic muons that arrive on Earth?
                Has no one mentioned this?

                As far as I can see, the Planck scale only appears in the name, but otherwise plays no role, does it?

                I don’t think much of the Planck scale.
                Locations and lengths only make sense for macroscopic objects.
An object is macroscopic if it has a length of at least a few nm in all 3 dimensions.
                For electrons, such a size specification makes no sense at all.
                Quantum mechanics takes this into account in its results. The Psi function only provides probabilities of location. But quantum mechanics uses infinitesimal calculus and a continuous space as a basis, which is inconsistent with its results.
                This will probably lead to the downfall of quantum mechanics.

              4. Thank you for reading. It’s very far from being an essay – as it says, it’s a brief summary in outline of one bit of a paper. It’s from a theory that has been expressed elsewhere, in two books and two papers. There are a few ‘cherries on the cake’, and I posted one of them, when the discussion led to a need to show some mathematics.

                So don’t expect it to contain everything, and it can’t be criticised for leaving anything out! SR holds in the theory, or 99% of it. GR doesn’t, but it’s widely seen as a large-scale approximation nowadays anyway, as Raphael Bousso, Sabine Hossenfelder and others have said. This view was a taboo until recently, but it goes back much further.

                The paper is behind a paywall, I can post a link to a preprint if needed, let me know if so. About the Planck scale: ‘A conceptual and mathematical description of gravity follows, in which it arises at a very small scale. The Planck scale is referred to throughout, including in the name ‘Planck scale gravity’, but as with some other theories, the phrase ‘Planck scale’ is used for a very small undefined scale widely referred to as the Planck scale, perhaps at or near the literal Planck scale.’

It’s worth pointing out that we don’t know what’s at the Planck scale. From the paper: ‘It seems to be vital to physics, but our ideas about that scale are unclear. String theory is dependent on supersymmetry, which unexpectedly has not been found – this leaves no reliable picture. In some views the world is chaotic there, making it hard to explain the order at larger scales. One should not assume, because of string theory, that geometry there is complicated.’

                Yes, the theory uses classical refraction as known from optics, but it is applied to matter, and with a few very minimal but rather unexpected assumptions.

              5. By the way, for what it does, rather than being an essay, the 2nd para is:

                PSG predicts that any two points on any trajectory through any gravitational field will be found to be linked, via the law of refraction applied to matter. A random pair of points on the same trajectory are found to be connected in this way, via an equation the sides of which should always agree if refraction is at work. They always do, to approximately 16 decimal places, putting in numbers for any pair of points on the same trajectory.

              6. About absolute simultaneity: in my view this is also necessary for a satisfying explanation of quantum entanglement. One cannot define which particle of an entangled pair made the other’s wavefunction collapse, if there isn’t an absolute notion of time.

                The only alternative solutions I can see are 1. That entanglement is actually a wormhole in spacetime, or 2. That the wavefunction of an entangled pair is an inseparable entity influenced by its surroundings (of both particles). Which maybe comes down to a wormhole as well, giving just 1 alternative.

              7. I think we need to stand back from the mathematics, and start seeing the wood, not the trees

The only way to start seeing the wood is to focus on the wood – not on an internal conception of the wood derived from abstract reasoning. The wood is phenomenological – it is physical reality. Any reconceptualization that does not begin with a consideration of those things that are directly observable, detectable and measurable will wind up looking like Ptolemaic or Big Bang cosmology. The resulting model will be physically nonsensical.

                Over the last 100 years we have gained an enormous amount of information about the nature and scale of the Cosmos and yet our cosmological model is completely constrained by axioms that were adopted before any of that knowledge was acquired. Basically the standard model of cosmology is ignorant in its conception, of the nature of the Cosmos which is not, as assumed, a giant simultaneously expanding, homogeneous and isotropic gas bag.

                The standard model of particle physics is a similar mess, constrained by a peculiar “interpretation” that produces the empirically baseless concepts, superposition of states and wave-particle duality, despite the fact that a more realistic “interpretation”, Bohmian mechanics, has been known for 70 years. The consensus for some inexplicable reason prefers the incoherent quantum mumbo-jumbo of the Copenhagen interpretation where the wavefunction explains everything and nothing in a metaphysical babble that has nothing to do with physical reality.

                Constructing a model of physical reality based on mathematical or metaphysical considerations of the human imagination has proven, over the last half century, to be an abject failure. The two standard models do not remotely resemble the Cosmos as we observe it on both the macro and micro scales. Both models are chock-a-block with imaginary entities and events which are absent in empirical reality.

                The only way to properly do physics is to study physical reality and formulate models, first qualitative and then quantitative, on the basis of those studies. Attempting to formulate new physics models by rummaging around in the human imagination will only exacerbate the current mess.

              8. Simultaneity and my view of the world looks like this:

                • Location and time (and size and duration) are macroscopic properties of objects.
                  They make no sense for a microscopic object (electron, molecule, photon,…),
                  lose their strict meaning.
                  The behavior of an electron in the double-slit experiment depends on its velocity (momentum).
                  According to its momentum, we assign it a wavelength and calculate the result of the experiment.
                  However, the effective momentum does not simply depend on the electron, but on the relative movement of the electron and the double slit.
                  Even elementary particles that decay do not have a clock with them that measures their age.
                  It is rather the case that their environment destroys them with a certain probability within a certain time.
                • In quantum mechanics, there is no measurement in the classical sense.
                  What we call a measurement is an interaction that determines a certain property of an object.
                  In the case of entangled objects, the measurement of the polarization determines it.
                  Before the measurement, the polarization was undefined, non-existent.
                  The entangled objects coordinate during the “measurement” (at faster-than-light speed).
                • Causality, determinism, and logical contradictions are also macroscopic properties.
                • Simultaneity and the theory of relativity
                  For the observer at rest (B-rest), the events at entangled objects happen simultaneously.
                  But for an observer flying past (B +v), the event in the direction of flight happens sooner,
                  than the event behind. For an observer flying in the opposite direction (B -v), it is the other way round.
                  If the event were causal, all three would see a different causality.
                  In my opinion, you can only get out of this dilemma if there is no causality in the polarization determination in the case of entanglement…

                That all sounds unusual and strange.
                But everything else looks even stranger and doesn’t work.
                Quantum gravity – many have tried it.
                None have succeeded.

                If there is no size below 1 nm for individual objects,
                a lot of problems simply vanish…(*)

                (*) You can make objects smaller than 1 nm visible with an STM.
                But to do this you have to pin the atoms to a crystal (macroscopic solid). And then they are no longer individual objects.

  7. Do all the problems with MOND on the level of galaxy clusters also apply to the modified inertia MOND theories, or only the modified gravity MOND theories?

1. Clusters are a problem for MOND in that the formula that works in galaxies doesn’t in clusters. It might, if you spot it an offset, hence the inference of missing baryons. But I don’t think it matters whether it is a modification of gravity or inertia here. I guess, conceivably, the hysteresis of orbits (which matters in modified inertia theories) could play a role, since clusters are still falling together and haven’t settled into a pattern that lets an effective interpolation function emerge as in well-settled disk galaxies. But it seems like a big stretch that it would have the amplitude of the observed effect.

  8. “MOND works well in galaxies, but not in clusters of galaxies.”

    This is actually very similar to:

    “Clusters of quantum objects break simple quantum mechanics”; it is not by chance that we have condensed matter physics and all the other sciences dealing with large clusters of quantum objects.

    Higher complexity structures will exhibit new irreducible properties that can’t be reduced to the properties of their “elementary” components.

    Philip W. Anderson once again comes to mind.

      1. Obviously the universe is far from being uniform and/or homogeneous.

        A “Universal force law” will always be violated, because the idealized assumptions required for its existence do not exist in reality.

        Complexity always introduces new conditions that theories developed for simple systems ignore or are unable to predict; in other words, complexity is always a boundary on the predictive/explanatory power of any theory. This seems to be the only “universal” constant.

    1. Regarding galaxy clusters: if we regard MOND as a “simplest possible modification to gravity that fits the rotation curve data”, there’s no inconsistency in doing the same on an even larger scale (galaxy clusters): another parameter to further modify the strength of gravity at that scale?

      Nothing suggests that the laws of our universe prefer simplicity.

      1. In fact “dark matter” is an expression of intellectual dishonesty: it basically creates the possibility of adding a limitless number of additional parameters to any particular stellar object, to fit its observed kinetic properties under unmodified Newtonian gravity.

        That approach itself is a per galaxy modification to gravity, even if it’s not called that: MOND could be equivalently expressed as a specific distribution of dark matter.

        So mathematically MOND is a (very small) subset of all possible dark matter distributions.
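        To make that equivalence concrete (an illustrative restatement on my part, not something stated in the post): for a spherical system, the dark matter distribution that a Newtonian analysis would infer from a MOND acceleration g(r) is simply

        $$ M_{\rm eff}(r) = \frac{g(r)\, r^2}{G}, \qquad M_{\rm phantom}(r) = M_{\rm eff}(r) - M_{\rm baryon}(r), $$

        so each MOND prediction corresponds to exactly one “phantom” halo, whereas generic dark matter can be distributed in essentially any way.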

        1. As to how to modify MOND to match the galaxy cluster data (MMOND?): if a0 is the threshold acceleration for the transition from the 1/r² Newtonian gravity regime to the 1/r MOND regime, we could add a second threshold, ‘a1’, for a transition to an even stronger gravity regime: 1/√r.

          a1 would be sized according to the galaxy cluster mass distribution data.
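          To spell out what such a second threshold would do, here is a minimal numerical sketch of the idea, assuming the simplest possible matching of regimes; a1 and its value are entirely hypothetical, and the max-of-asymptotes interpolation is only for illustration.

          ```python
          import numpy as np

          # Sketch of the hypothetical two-threshold ("MMOND") idea above; this is
          # speculation, not an established theory. a0 is the usual MOND scale; a1 is
          # a made-up second threshold below which the effective force around a point
          # mass would fall off as 1/sqrt(r) instead of 1/r.
          A0 = 1.2e-10   # m/s^2, canonical MOND acceleration scale
          A1 = 6.0e-11   # m/s^2, purely illustrative value

          def g_effective(g_newton, a0=A0, a1=A1):
              """Crude asymptotic interpolation: Newtonian (g ~ 1/r^2) at high
              acceleration, deep-MOND (g ~ 1/r) below a0, and a hypothetical
              extra-strong regime (g ~ 1/sqrt(r)) below a1. Taking the maximum of
              the three limits picks out the dominant branch in each regime."""
              g_newton = np.asarray(g_newton, dtype=float)
              g_mond = np.sqrt(g_newton * a0)              # equals g_newton at g = a0
              g_deep = (g_newton * a0 * a1**2) ** 0.25     # equals g_mond at g = a1
              return np.maximum.reduce([g_newton, g_mond, g_deep])

          # Example: effective acceleration around a 10^14 solar-mass point mass
          G, M = 6.674e-11, 1e14 * 1.989e30
          r = np.logspace(20, 23, 4)                       # radii in metres
          print(g_effective(G * M / r**2))
          ```

          Any real proposal would of course need a smooth interpolating function and a fit of a1 to actual cluster mass profiles; the point here is only that the extra branch matches continuously onto the deep-MOND one at g = a1.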

          1. BTW, this is not necessarily true:

            The one new thing that the Bullet Cluster did teach us is that whatever the missing mass is, it is collisionless.

            An alternative explanation would be for gravitons to undergo collisions and get damped substantially on a scale as huge as the interstellar gas mass of the Bullet Cluster.

            I.e., gravitons might be very weakly interacting, with a graviton-baryon cross-section that is larger in diffuse interstellar gas than in concentrated stellar mass.

            This might also explain ultra-diffuse galaxies: if they have much more interstellar gas mass than the ~10% in regular galaxies, then the extra gas damps the net gravity field enough to remove the MOND effect and slow down the rotation curve.

            1. Addendum: interstellar gas tends to be concentrated around galaxy centers – so “graviton collisions” could provide a MOND-like correction to the rotation curve.

              I.e., there’s no baryonic dark matter, but a “negative mass effect” of clouds of gas distorting gravity fields: that of gravity centers, or that of the gas itself (Bullet Cluster).

              Large ultra-diffuse galaxies might just be large enough for the gas field to be small, so their gravity field is mostly Newtonian.

  9. What about eMOND, or modified gravity again for galaxy clusters, perhaps depending on density or distance or even dark energy?

    Could dark energy at the scale of galaxy clusters modify MOND?

    Is there any modified gravity that will work for any and all galaxy clusters?

    1. eMOND was dreamt up to explain clusters, so maybe.

      Looking at the data, it feels more like a simple offset, which would be solved by extra mass.

      I don’t know that any of these solutions can be fully satisfactory. But you’re correct that dark energy may start to matter to the details of orbits in clusters. That could potentially screw up a lot without necessarily helping anything.

      1. The perihelion of Mercury was originally explained by extra mass plus Newtonian gravity.

        GR explains the perihelion of Mercury by extra curvature not included in Newtonian gravity.

        Could galaxy clusters have extra curvature for some reason, perhaps because the strength of gravity levels off at such distances?

  10. Would MOND work on galaxy clusters with a different a_0? If so, what value? And why don’t you just accept that a_0 is simply different for galaxy clusters? The law does not have to be universal; after all, galaxy clusters are a different kind of system than single galaxies.

    1. Yes. You can see the required offset in the first figure in https://tritonstation.com/2021/02/05/the-fat-one-a-test-of-structure-formation-with-the-most-massive-cluster-of-galaxies/ It is about a factor of two higher than the value that works in galaxies.

      Ideally the law is universal, and we don’t get to pick a different a0 for every glitch that we encounter. Conceivably a0 is an effective value in the same sense that g=9.8 m/s/s is a constant at the surface of the Earth. But there needs to be some good reason to change it just so in clusters. We don’t really know what that is, though it is what theories like eMOND try to do.

      More generally, how seriously should we take this offset? If it happened in a theory we already believed, it would be no big deal. Since it happens in a theory that many people hate with a blind and ignorant rage, obviously it is a falsification.
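      For orientation on why a mass offset and an a0 offset are interchangeable here (a standard deep-MOND scaling, stated only up to a geometric factor of order unity): for a roughly isothermal system well inside the MOND regime, the dynamical mass estimator goes as

      $$ M_{\rm MOND} \sim \frac{\sigma^4}{G\, a_0} \propto \frac{1}{a_0}, $$

      so a MOND dynamical mass that comes out a factor of ~2 above the observed baryons can equally well be absorbed by taking a0 roughly twice as large in clusters.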

      1. Could a0 be tied to the cosmological constant, and perhaps to some objective quantity like density or distance?

        Could galaxies have magnetic fields that help explain galaxy clusters, or something like van der Waals forces, to explain why MOND falls short?

        BTW, could you write more about eMOND and galaxy clusters in a full-length post?

        1. Not sure I have the patience for a full post on eMOND. The idea is that the Lagrangian depends on the depth of the potential as well as the acceleration scale. My first thought was “oh, sure, let’s not go there” but looking at the equation, it is not unreasonable. It is also not entirely clear that it works as hoped.

          1. “It is about a factor of two higher than the value that works in galaxies.”

            does a0 × 2 then work for *all* galaxy clusters?

            do galaxy clusters follow the baryonic Tully-Fisher relation (BTFR) and the radial acceleration relation (RAR)?

  11. In Full Speed In reverse!

    How do we properly penalize the model for cheating about its “prior” by peaking at past data?

    I suppose “peaking” was meant to be “peeking”?

  12. The Bullet Cluster is an example of a cluster consisting of two distinct subclusters. The hot gas cloud, itself sometimes split, nearly always lies around and between the subclusters. In such cases – not uncommon – it is impossible to account for the cloud’s lying between converging subclusters, so the inference is that they are diverging and had a common origin. But, within the paradigm of ‘hierarchical merging’, that too is impossible. Instead, they are interpreted to have previously passed through each other, to have crossed paths without merging and to have left the cloud stationary between them. No matter that they were converging – as a result of some inexplicable gravitational slingshot off-stage – at 4700 km/sec. That is the result of wearing ‘hierarchical merging’ spectacles. In this quasi-scientific world you can have your cake and eat it: if the evidence for divergence seems overwhelming, simply suspend the paradigm and posit convergence without merging.

    But how can both the galaxies and the dark matter of one subcluster have come out the other side without being displaced and disrupted by gravitational interaction with the other subcluster? Why does the gas show a bullet-like shock wave if it is lagging behind the mass it is supposedly slamming against? Since dark matter’s only property is to exert a gravitational force, why did the two halos not clump into one halo as they converged? Rather, the gas cloud is bullet-shaped because the galaxies – travelling faster – pushed through the gas.

    The observational evidence indicates that galaxy clusters originate from single superquasars. Hence there is nearly always an outsized BCG at the centre. Initially the superquasars/proto-BCGs simply split into two – hence there are frequently two BCGs and two subclusters. Over time the number of galaxies in a cluster increases. The initial number is just two. That is why at high redshift paired galaxies separated by < 30 kpc are exceedingly common and over time become progressively less common. They are diverging, not converging – and in a few inconvenient cases – passing through one another. Look at Webb images of galaxies at z > 6. As often as not you see one or more companion galaxy in the same frame. We’re not looking at a universe where galaxies are condensing out of primordial gas and everything flying apart because space is expanding. If they’re flying apart, it’s because superquasars are at the centre of reproduction. Reverse the automatic calculations of size that assume an expanding universe and make all objects smaller with increasing distance (on top of the normal distance-size relationship), and what you have is galaxies getting smaller over time – except BCGs, which even in the expanding universe model are as big at z = 1.3 as at z = 0.04.

    As you say, clusters ruin everything.

  13. @jeremyjr01

    But, one has to explain jumps in complexity — not just talk about them.

    So, for example, quantum mechanics led von Neumann and Birkhoff to develop the theory of orthomodular lattices. Every Boolean lattice is an orthomodular lattice. In general, orthomodular lattices are not distributive, whereas Boolean lattices are.

    The “free Boolean lattice on 2 generators” is the Boolean lattice used for propositional logic,

    https://en.m.wikipedia.org/wiki/File:Free-boolean-algebra-hasse-diagram.svg

    But, it is an orthomodular lattice because of group theory. As such, it can be labeled with Boolean 4-vectors,

    https://en.m.wikipedia.org/wiki/File:Hypercubeorder_binary.svg

    This has become “relevant” to speculations in physics in the sense that a 4×4 array of cells can have a Kummer configuration applied to it. Steven Cullinane maintains a site where the association of Boolean vectors and the 4×4 array is discussed,

    http://finitegeometry.org/sc/16/geometry.html

    The Kummer configuration associated with this array is described in the paper at the link,

    https://www.emis.de/journals/HOA/IJMMS/Volume2_2/281.pdf

    Now, for orthomodular lattices, the analogue for a free Boolean lattice is the free orthomodular lattice on 2 generators,

    https://cmp.felk.cvut.cz/~navara/FOML/beran_no.png

    Using a dihedral group of order 6, one can formulate a (96,6,4) design as an analogue for the Kummer configuration described in the paper above. When you apply it to the free orthomodular lattice, each 20-element design block distributes over 5 of the 16-element Boolean subblocks in the lattice illustration. Each of the 5 lattice subblocks receives a 4-set from the 20-element design block.

    What appears to be the case is that the 21-point projective plane has a 16-element affine subplane. That affine subplane is described by 20 lines in 5 parallel classes.

    Upon further consideration, this structure appears to have this form because the complete graph K_5 and the complete bipartite graph K_3,3 are nonplanar.

    This is exactly what Dr. Freundt had spoken of with his criticism of Dr. Hossenfelder.

    Moreover, this serves to explain, somewhat, the danger of confusing physical theory with pure mathematics. The 4×4 array appears in Spekkens’ toy model,

    https://arxiv.org/abs/quant-ph/0401052

    for studying a 2-qubit system under an epistemic interpretation.

    Writing papers about “complexity” without details is very similar to writing papers in “physics” without measurements to decide what is legitimate. And, one need only look to Wolfram to understand that syntactic metamathematics and computability is easily confused with physics.

    I’m sorry, but mathematics is hard. And, its relationship to physics is just another difficult problem.

  14. could anyone comment on this?

    arXiv:2402.07159 (cross-list from astro-ph.GA)
    Central-surface-densities correlation in general MOND theories
    Mordehai Milgrom
    Comments: 22 pages
    Subjects: Astrophysics of Galaxies (astro-ph.GA); General Relativity and Quantum Cosmology (gr-qc); High Energy Physics – Phenomenology (hep-ph)

    It is shown that the foundational axioms of MOND alone predict a strong correlation between a bulk measure of the baryonic surface density, Σ_B, and the corresponding dynamical one, Σ_D, of an isolated object, such as a galaxy. The correlation is encapsulated by its high- and low-Σ_B behaviors. For Σ_B ≫ Σ_M ≡ a0/2πG (Σ_M is the critical MOND surface density) one has Σ_D ≈ Σ_B. Their difference — which would be interpreted as the contribution of dark matter — is Σ_P = Σ_D − Σ_B ∼ Σ_M ≪ Σ_B. In the deep-MOND limit, Σ_B ≪ Σ_M, one has Σ_D ∼ (Σ_M Σ_B)^(1/2). This is a primary prediction of MOND, shared by all theories that embody its basic tenets. Sharper correlations, even strict algebraic relations, Σ_D(Σ_B), are predicted in specific MOND theories, for specific classes of mass distribution — e.g., pure discs, or spherical systems — and for specific definitions of the surface densities. I proceed to discuss such tighter correlations for the central surface densities of axisymmetric galactic systems, Σ_0B and Σ_0D. Past work has demonstrated such relations for pure discs in the AQUAL and QUMOND theories. Here I consider them in broader classes of MOND theories. For most observed systems, Σ_0D cannot be determined directly at present, but, in many cases, a good proxy for it is the acceleration integral 𝒢 ≡ ∫_0^∞ g_r d ln r, where g_r is the radial acceleration along a reflection symmetry of a system, such as a disc galaxy. 𝒢 can be determined directly from the rotation curve. I discuss the extent to which 𝒢 is a good proxy for Σ_0D, and how the relation between them depends on system geometry, from pure discs, through disc-plus-bulge ones, to quasi-spherical systems.
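    As a rough numerical illustration of the limits quoted in that abstract (a toy evaluation of my own, not Milgrom's full result), one can work out the critical MOND surface density Σ_M = a0/2πG and the two asymptotic behaviors of Σ_D:

    ```python
    import numpy as np

    # Toy evaluation of the asymptotic limits quoted in the abstract above;
    # an illustration only, not the paper's full result.
    G = 6.674e-11        # m^3 kg^-1 s^-2
    A0 = 1.2e-10         # m s^-2, canonical MOND acceleration scale
    MSUN, PC = 1.989e30, 3.086e16

    sigma_M = A0 / (2 * np.pi * G)            # critical surface density, kg/m^2
    print(sigma_M * PC**2 / MSUN)             # ~137 Msun/pc^2

    def sigma_dyn(sigma_B):
        """Asymptotic dynamical surface density: ~Sigma_B for Sigma_B >> Sigma_M,
        ~sqrt(Sigma_M * Sigma_B) in the deep-MOND limit Sigma_B << Sigma_M."""
        sigma_B = np.asarray(sigma_B, dtype=float)
        return np.where(sigma_B > sigma_M, sigma_B, np.sqrt(sigma_M * sigma_B))
    ```

    The critical surface density comes out near ~137 M_sun/pc², the familiar MOND surface-density scale; the hard switch at Σ_B = Σ_M is of course only a crude stand-in for the smooth relation derived in the paper.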

    1. Yes, MOND predicts a specific relation between the surface density of baryons and the inferred dynamical surface density. This is an elegant way of phrasing a prediction that has been built into it all along. The reason I became interested in MOND is because I noticed empirically a correlation between surface brightness and the amount of dark matter inferred; MOND was the only theory to make such a prediction in advance of its observation.

      1. i can’t click like

        Milgrom

        [Submitted on 11 Feb 2024]

        just a week ago, but your interest was years ago,

        so the timing is unclear to me

          1. MOND predicts a specific relation between the surface density of baryons and the inferred dynamical surface density

            thanks

            does the specific relation between the surface density of baryons and the inferred dynamical surface density work for galaxy clusters?

            1. I would say that exactly this is the subject of this blog post. Clusters ruin everything – that is, for both LCDM and MOND. So no, it does not work – as Dr. McGaugh said, there is an offset.

              1. how similar are stars in a galaxy, which follow an acceleration scale,

                and galaxies in galaxy clusters,

                when you apply the calculated value?

                could some additional modification of the equation, for example with distance, work for galaxy clusters?

  15. Is this a legitimate detection of dark matter, or can some other mechanism explain this?

    HyeongHan, K., Jee, M. J., Cha, S., & Cho, H. (2024). Weak-lensing detection of intracluster filaments in the Coma cluster. Nature Astronomy, 1-7.

  16. This sounds like a low-significance detection of a weak-lensing signal associated with filaments in and around the Coma cluster. This is expected in both dark matter and MOND.

    1. An article that’s more in line with the subject of the previous post: a massive galaxy that formed its stars at z ∼ 11 (https://arxiv.org/abs/2308.05606). The summary does not hide the problem: “This observation may point to the presence of undetected populations of early galaxies and the possibility of significant gaps in our understanding of early stellar populations, galaxy formation and/or the nature of dark matter.”

      Yours sincerely, Jean

      1. Yes, exactly. There are galaxies at z ~ 3 (still very early in time) with the signatures of old (age appropriate) stellar populations that would have had to get started before z=10. This is the problem.

  17. Historically, regarding the rotation curves, why was the proposal of dark “matter” settled upon, instead of the idea of dark “mass-energy”?

    The evidence just implies more spacetime curvature, but does it have to be mass causing it? Couldn’t it just as easily be energy?

    Energy also curves spacetime, and wouldn’t the energy appear dark to us if it were of a high enough or low enough frequency?

    Unfortunately we now use the term Dark Energy for an entirely different phenomenon – but why did this happen?

    1. You are correct that mass and energy are equivalent in Einstein’s E=mc^2 sense. In effect, mass is a super-concentrated form of energy. Therein lies the difference. One needs mass to curve spacetime enough to flatten rotation curves (and all the other phenomena we attribute to dark matter). If you instead have energy, it cannot be sufficiently concentrated to have the required effect.

      Another way to think of it: mass moves slowly – non-relativistically. Energy moves at the speed of light. It would not stay bound to a galaxy because the speed of light greatly exceeds the escape velocity. That’s what makes a black hole black: it is concentrated enough to have an escape velocity that exceeds the speed of light. Galaxies are very, very far from being that dense.
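      To put a rough number on that argument (illustrative values, not specific to any particular galaxy): even a very massive galaxy has an escape velocity of a few hundred km/s, roughly a thousandth of the speed of light.

      ```python
      import numpy as np

      # Back-of-the-envelope check: a galaxy's escape velocity is far below c,
      # so anything moving at the speed of light cannot stay gravitationally bound.
      G = 6.674e-11           # m^3 kg^-1 s^-2
      C = 2.998e8             # m/s
      M = 1e12 * 1.989e30     # ~10^12 solar masses (illustrative galaxy mass)
      R = 100 * 3.086e19      # 100 kpc in metres

      v_esc = np.sqrt(2 * G * M / R)
      print(v_esc / 1e3, "km/s")   # ~300 km/s
      print(v_esc / C)             # ~0.001 of the speed of light
      ```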

      1. Well, if not gravitationally bound energy, or energy as an expression of the galactic baryons, then what about a kind of Machian energy that derives from the rest of the universe in relation to the galactic baryons?

        That’s not a precise idea, but it is just an example of a nonlocal source for an energy density that might curve spacetime equivalently to local dark matter.

  18. Dear Stacy,

    About two years ago, I think someone here on the blog rejected MOND because MOND violates conservation of momentum. You were so kind as to point out:
    Felten: https://articles.adsabs.harvard.edu/pdf/1984ApJ…286….3F

    More precisely: only for F=ma is Newtonian momentum conservation satisfied.
    This means that Newtonian momentum conservation is also violated in general relativity. There, the theory is rescued by redefining momentum, and the more general momentum then again satisfies a conservation law.
    In GR, momentum is viewed completely differently.

    This is missing in MOND.
    Here one tries to fit MOND into the known world of GR.
    In my opinion, this does not work well.

    For a mass and Newtonian attraction, there is the picture of the field lines
    and Gauss’s integral theorem: the number of field lines passing through a closed surface depends only on the mass enclosed.
    Field lines are generated by the mass and are not lost later.
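    In symbols, that field-line picture is just the integral (Gauss) form of Newtonian gravity:

    $$ \oint_{\partial V} \vec{g} \cdot d\vec{A} = -4\pi G\, M_{\rm enc}, $$

    i.e., the flux of the gravitational field through any closed surface counts only the mass enclosed, which is why field lines are neither created nor destroyed outside their sources.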

    We have no such picture for MOND.
    I have no idea what happens there.
    We have this huge number of precision measurements,
    but nobody can offer an intuitive picture.
    I am thinking about it.

    Best regards
    Stefan

    1. The link doesn’t work. But the only model I have in my head, if you insist on field lines, is some polarization of space, like in some electric or magnetic materials. a_sub_zero would then be some energy scale where the phase transition happens.

      1. I think space and time in the Newtonian sense are emergent phenomena.
        Comparable to the pressure, volume and temperature of a gas in kinetic gas theory.
        By the way: there is still no idea of relativistic space. In the past, this was called an ether. The word is banned nowadays. And nobody is looking for it anymore. Everything is hidden by mathematics. But mathematics doesn’t really provide a good explanation. The best saying I’ve read anywhere is:
        “For the moving twin, some times do not exist. That’s why he ages less…”

  19. could anyone comment on this?

    arXiv:2309.14270 (gr-qc) [Submitted on 25 Sep 2023 (v1), last revised 16 Feb 2024 (this version, v3)]

    MOND via Matrix Gravity
    Ivan G. Avramidi, Roberto Niardi

    MOND theory has arisen as a promising alternative to dark matter in explaining the collection of discrepancies that constitute the so-called missing mass problem. The MOND paradigm is briefly reviewed. It is shown that MOND theory can be incorporated in the framework of the recently proposed Matrix Gravity. In particular, we demonstrate that Matrix Gravity contains MOND as a particular case, which adds to the validity of Matrix Gravity and proves it is deserving of further inquiry.

    Comments: 31 pages, minor corrections
    Subjects: General Relativity and Quantum Cosmology (gr-qc); Astrophysics of Galaxies (astro-ph.GA)
    Cite as: arXiv:2309.14270 [gr-qc]

    1. Looks very much like Milgrom’s BIMOND to me, but taking the constraints (from lensing, LIGO, etc.) less into account.

  20. so how is MOND calculated for clusters of galaxies?

    I understand how MOND is calculated for stars in a single galaxy.

    are individual galaxies treated like stars in clusters of galaxies?

    do smaller galaxies orbit a single large galaxy in clusters of galaxies?

    do clusters of galaxies also follow the RAR and BTFR?

  21. Another fast-growing supermassive black hole (https://theconversation.com/the-brightest-object-in-the-universe-is-a-black-hole-that-eats-a-star-a-day-222612), with a mass of 15-20 billion solar masses, for which we are seeing light that left it over 12 billion years ago.

    An important comment from their work:

    If this is the brightest thing in the universe, why has it only been spotted now? In short, it’s because the universe is full of glowing black holes.

    The world’s telescopes produce so much data that astronomers use sophisticated machine learning tools to sift through it all. Machine learning, by its nature, tends to find things that are similar to what has been found before.

    This makes machine learning excellent at finding run-of-the-mill accretion discs around black holes – roughly a million have been detected so far – but not so good at spotting rare outliers like J0529-4351. In 2015, a Chinese team almost missed a remarkably fast-growing black hole picked out by an algorithm because it seemed too extreme to be real.

    The use of machine learning, while necessary for processing the extreme quantities of data that is now being produced in astronomy, also creates the risk of producing unplanned selection effects.

    1. Unplanned, unforeseen and unsolvable. It’s because AI is a program, an engine; it does not truly have intelligence – just pattern-matching skills so extremely good that they bamboozle us all.
