Required dark matter properties

I was on vacation last week. As soon as I got back, the first thing I did was fall off my bike onto a tree stump, breaking my wrist. I’ll be okay, but I won’t be typing a lot. This post is being dictated to software; I hope I don’t have to do too much editing. I let the software generate the image above based on the prompt “dark matter properties illustrated” and I don’t think we should hold our breath for AI to help us out with this.

There were some good questions on the last post that I didn’t get to address. I went back and tried to answer some of them. Siriusactuary asked about the properties required of dark matter for galaxies vs. large scale structure. That’s a very deep question that requires a long answer with some historical perspective. Please bear with me as I attempt a quasi-coherent, off-the-cuff narrative that doesn’t invite a lot of editing, which it surely will.

I thought about this long and hard when I first encountered the problem, which was almost thirty years ago now. So it is probably worth a short refresher.

We have been assuming all along, I think reasonably, that cosmological dark matter and galaxy dark matter are the same stuff, just different manifestations of the same problem. Perhaps they’re not, but there is a huge range of systems that show acceleration discrepancies, and it isn’t always trivial to split them into one camp or another. It seems common to talk about large and small scale problems, but I don’t think size is the right way to think about it. It’s more a difference between gravitationally bound systems that are in equilibrium and the dynamics of the expanding universe as an evolving entity that contains structures that develop within it.

The problem in bound systems is not just galaxy dynamics. It’s also clusters of galaxies. It’s also star clusters that don’t show a discrepancy. The problem extends over a dynamic range of at least a billion in baryonic mass. It involves all sorts of dynamical questions where we do sometimes need to invoke dark matter or MOND or whatever. The evidence in bound systems is inevitably that when we apply the law of gravity as we know it to the stuff we can see, the visible baryons, the dynamical mass doesn’t add up. We need something extra to explain the data.

The simple answer early on was that there was simply more mass there, i.e., dark matter. But that much is ambiguous. It could be that we infer the need for dark matter because the equations are inadequate and need to be generalized, i.e., something like MOND. But to start, at the beginning of the dark matter paradigm, there was no particular restriction on what the dark matter needed to be or what its properties needed to be. It could be baryonic, it could be non-baryonic. It could be black holes, brown dwarfs, all manner of things.

From a cosmological perspective, it became apparent in the early 1980s that we needed something extra – not just dark, but non-baryonic. By this time it was easy to believe because people like Vera Rubin and Albert Bosma had already established that we needed more than meets the eye in galaxies. So dark matter was no longer a radical hypothesis, which it had been in 1970. The paradigm kinda snowballed – it had been around as a possibility since the 1930s, but it was only in the 1970s that it became firmly established dynamically. Even then it was only a factor of two, which could have been normal, if hard to see, baryons like brown dwarfs. By the early 1980s it was clear we needed more like a factor of ten, and it had to be something new: the cosmological constraint was that the gravitating mass density is greater than the baryon density allowed by big bang nucleosynthesis. That means that there is a requirement on the nature of dark matter beyond there just being more mass.

The cosmic dark matter has to be something non-baryonic. That is to say, it has to be some new kind of beast, presumably some kind of particle that is not already in the standard model of particle physics. This was received with eagerness by particle physicists, who felt that their standard model was complete yet unsatisfactory, and that there should be something deeper and more to it. This was an indication in that direction. From a cosmological perspective, the key fact was that there was something more out there than met the eye. Gravitation gave a mass density that was higher than allowed in normal matter. Not only did you need dark matter, but you needed some kind of novel particle that’s not in the standard model of particle physics to be that dark matter.

The other cosmological imperative was to grow large scale structure. The initial condition that we see in the early universe is very smooth. That is the microwave background on the sky, with its very small temperature fluctuations, only one part in a hundred thousand. That’s the growth factor reached by redshift zero: structure has grown by a factor of a hundred thousand. Normal gravity will grow structure in proportion to the expansion of the universe, which amounts to a factor of about a thousand since the microwave background was imprinted.

So we have another big discrepancy. We can only grow structure by a factor of a thousand, but we observe that it has grown by a factor of a hundred thousand. So we need something to goose the process. That something can be dark matter, provided that it does not interact with photons directly. It can be a form of particle that does not interact via the electromagnetic force. It can interact through gravity and perhaps through the weak nuclear force, but not through the electromagnetic force.
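The arithmetic behind this discrepancy can be sketched in a few lines. This is a back-of-the-envelope illustration using the standard round numbers (a CMB redshift of roughly 1100, and linear growth in proportion to the scale factor), nothing specific beyond what the text above says:

```python
# Rough version of the growth argument above.
# Assumes linear perturbations grow in proportion to the scale factor
# (matter-dominated universe) and the CMB is imprinted at z ~ 1100.

delta_cmb = 1e-5                 # fluctuation amplitude at the microwave background
z_cmb = 1100                     # redshift of the microwave background
growth_available = 1 + z_cmb     # expansion factor since then: about a thousand
growth_needed = 1 / delta_cmb    # to reach order-unity structure today: a hundred thousand

print(growth_available)          # ~1e3
print(growth_needed)             # ~1e5
print(growth_needed / growth_available)  # the factor we have to make up: ~1e2
```

That factor of a hundred is what the non-baryonic dark matter is hired to supply: it can start clumping before the baryons decouple from the photons.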

Those are properties that are required of dark matter by cosmology. It has to be non-baryonic and not interact through electromagnetism. These properties are not necessary for galaxies. And that’s basically the picture that persists today. One additional constraint that we need from a cosmological perspective is that the dark matter needs to be slow-moving – dynamically cold so that structure can form. If you make it dynamically hot, like neutrinos that are born moving at very nearly the speed of light, those are not going to clump up and form structure even if they have a little mass.

So that was the origin of the cold dark matter paradigm. We needed some form of completely novel particle that had the right relic density – this is where the WIMP miracle comes in. That worked fine for galaxies at the time. All you needed for galaxies early on was extra mass. It was cosmology that gave us these extra indications of what the dark matter needs to be.

We’ve learned a lot more about galaxies since then. I remember in the early nineties when I was still a staunch proponent of cold dark matter being approached at conferences by eminent dynamicists who confided in hushed tones so that the cosmologists wouldn’t hear that they thought the dark matter had to be baryonic, not non-baryonic.

I had come to this from the cosmological perspective that I had just described above. The total mass density had to be a lot bigger than the baryonic mass density. Therefore the dark matter had to be non-baryonic. To say otherwise was crazy talk, which is why they were speaking about it in hushed tones. But here were these very eminent people who were very quietly suggesting to me that their work on galaxies suggested that the dark matter had to be made of baryons, not something non-baryonic. I asked why, and basically it boiled down to the fact that they could see clear connections between the dynamics and the baryons. It didn’t suffice just to have extra mass; the dark and luminous components seemed to know about each other*.

The data for galaxies showed that the stuff we could see, the distribution of stars and gas, was clearly and intimately related to the total distribution of mass, including the dark matter. This led to a number of ideas that do not sit well with the cold dark matter paradigm. One was HI scaling: basically, if you took the distribution of atomic gas and scaled it up by a factor of roughly 10, then that was a decent predictor of what the dark matter was doing. Given that, one could imagine that maybe the dark matter was some form of unseen baryons that follow the same distribution as the atomic gas. There was even an elaborate paradigm built up around very cold molecular gas to do this. That seemed problematic to me, because if you have cold molecular gas, it should clump up and form stars, and then you see it. Even if you didn’t see it in its cold form, you need a lot of it. Interestingly, the amount needed in galaxies alone does not violate the BBN baryon density. But you would violate it on a cosmic scale if that were the only form of dark matter. So then we need multiple forms of dark matter, which violates parsimony.

Another important and frequent point is the concept of maximum disk. This came up last time in the case of NGC 1277, where the dynamics of the inner regions of that galaxy are completely explained by the stars that you see. This is a very common occurrence in high surface brightness galaxies. In regions where the stars are dense, that’s all the mass that you need. It’s only when you get out to a much larger radius, where the accelerations become low, that you need something extra, the dark matter effect.

It was pretty clear and widely accepted that the inner regions of many bright galaxies were star dominated. You did not need much dark matter in the center, only at the edges. So you had this picture of a pseudoisothermal halo with a low density central core. But by the mid-nineties, a lot of simulations showed that cold dark matter halos should have cusps: they predicted there to be a lot of dark matter near the centers of galaxies.

This contradicted the picture that had been established. And so people got into big arguments as to whether or not high-surface brightness galaxies were indeed maximal. The people who actually worked on galaxies said Yes, we have established that they are maximal – we only need stars in the central regions; the dark matter only becomes necessary farther out. People who were coming at it from the cosmological perspective without having worked on individual galaxies saw the results of the simulations, saw that there’s always a little room to trade off between the stellar mass and the dark mass by adjusting the mass to light ratio of the stars, and said galaxies cannot be maximal.

I was perplexed by this contradiction. You had a strong line of evidence that galaxies were maximal in their centers. You had a completely different line of evidence, a top-down cosmological view of galaxies, that said galaxies should not and could not be maximal in their centers. Which of those interpretations you believed seemed to depend on which camp you came out of.

I came out of both camps. I was working on low surface brightness galaxies at the time and was hopeful that they would help to resolve the issue. Instead they made it worse, sticking us with a fine-tuning problem. I could not solve this fine-tuning problem. It caused me many headaches. It was only after I had suffered those headaches that I began to worry about the dark matter paradigm. And then by chance, I heard a talk by this guy Milgrom who, in a few lines on the board, derived as a prediction all of the things that I was finding problematic to interpret in terms of dark matter. Basically, a model with dark matter has to look like MOND to satisfy the data.

That’s just silly, isn’t it?

MOND made predictions. Those predictions came true. What am I supposed to report? That its predictions came true – therefore it’s wrong?

I had made my own prediction based on dark matter. It failed. Other people had different predictions based on dark matter. Those also did not come true. Milgrom was the only one to correctly predict ahead of time what low surface brightness galaxies would do.

If we insist on dark matter, what this means is that we need, for each and every galaxy, precisely the dark matter distribution that looks like MOND. I wrote the equation for the required effects of dark matter in all generality in McGaugh (2004). The improvements in the data over the subsequent decade enable this to be abbreviated to

gDM = gbar/(e^√(gbar/a0) − 1).

This is in McGaugh et al. (2016), which is a well known paper (being in the top percentile of citation rates). So this should be well known, but the implication seems not to be, so let’s talk it through. gDM is the force per unit mass provided by the dark matter halo of a galaxy. This is related to the mass distribution of the dark matter – its radial density profile – through the Poisson equation. The dark matter distribution is entirely stipulated by the mass distribution of the baryons, represented here by gbar. That’s the only variable on the right hand side, a0 being Milgrom’s acceleration constant. So the distribution of what you see specifies the distribution of what you can’t.
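As a sanity check on the limiting behavior, here is a minimal numerical sketch of this relation. The function name and the sample accelerations are mine, purely for illustration:

```python
import numpy as np

A0 = 1.2e-10  # Milgrom's acceleration constant, m/s^2

def g_dm(g_bar):
    """Dark matter acceleration required by the relation above,
    given the baryonic acceleration g_bar (both in m/s^2)."""
    g_bar = np.asarray(g_bar, dtype=float)
    return g_bar / np.expm1(np.sqrt(g_bar / A0))  # expm1(x) = e^x - 1

# High acceleration (g_bar >> a0): the required dark matter vanishes.
print(g_dm(100 * A0) / A0)            # tiny, negligible

# Low acceleration (g_bar << a0): g_DM -> sqrt(g_bar * a0), the deep-MOND limit.
gb = 1e-3 * A0
print(g_dm(gb) / np.sqrt(gb * A0))    # ~1
```

The two regimes fall out by construction: Newtonian behavior where the stars are dense, a MONDian boost where the acceleration is low.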

This is not what we expect for dark matter. It’s not what naturally happens in any reasonable model; what naturally happens is an NFW halo. That comes from dark matter-only simulations; it has literally nothing to do with gbar. So there is a big chasm to bridge right from the start: theory and observation are speaking different languages. Many dark matter models don’t specify gbar, let alone satisfy this constraint. Those that do only do so crudely – the baryons are hard to model. Still, dark matter is flexible; we have the freedom to make it work out to whatever distribution we need. But in the end, the best a dark matter model can hope to do is crudely mimic what MOND predicted in advance. If it doesn’t do that, it can be excluded. Even if it does do that, should we be impressed by a theory that only survives by mimicking its competitor?

The observed MONDian behavior makes no sense whatsoever in terms of the cosmological constraints in which the dark matter has to be non-baryonic and not interact directly with the baryons. The equation above implies that any dark matter must interact very closely with the baryons – a fact that is very much in the spirit of what earlier dynamicists had found, that the baryons and the dynamics are intimately connected. If you know the distribution of the baryons that you can see, you can predict what the distribution of the unseen stuff has to be.

And so that’s the property that galaxies require that is pretty much orthogonal to the cosmic requirements. There needs to be something about the nature of dark matter that always gives you MONDian behavior in galaxies. Being cold and non-interacting doesn’t do that. Instead, galaxy phenomenology suggests that there is a direct connection – some sort of direct interaction – between dark matter and baryons. That direct interaction is anathema to most ideas about dark matter, because if there’s a direct interaction between dark matter and baryons, it should be really easy to detect dark matter. They’re out there interacting all the time.

There have been a lot of half solutions. These include things like warm dark matter and self interacting dark matter and fuzzy dark matter. These are ideas that have been motivated by galaxy properties. But to my mind, they are the wrong properties. They are trying to create a central density core in the dark matter halo. That is at best a partial solution that ignores the detailed distribution that is written above. The inference of a core instead of a cusp in the dark matter profile is just a symptom. The underlying disease is that the data look like MOND.

MONDian phenomenology is a much higher standard to try to get a dark matter model to match than is a simple cored halo profile. We should be honest with ourselves that mimicking MOND is what we’re trying to achieve. Most workers do not acknowledge that, nor even seem aware that this is the underlying issue.

There are some ideas to try to build in the required MONDian behavior while also satisfying the desires of cosmology. One is Blanchet’s dipolar dark matter. He imagined a polarizable dark medium that reacts to the distribution of baryons so as to give the distribution of dark matter that gives MOND-like dynamics. Similarly, Khoury’s idea of superfluid dark matter does something related. It has a superfluid core in which you get MOND-like behavior. At larger scales it transitions to a non-superfluid mode, where it is just particle dark matter that reproduces the required behavior on cosmic scales.

I don’t find any of these models completely satisfactory. It’s clearly a hard thing to do. You’re trying to mash up two very different sets of requirements. With these exceptions, the galaxy-motivated requirement that there is some physical aspect of dark matter that somehow knows about the distribution of baryons and organizes itself appropriately is not being used to inform the construction of dark matter models. The people who do that work seem to be very knowledgeable about cosmological constraints, but their knowledge of galaxy dynamics seems to begin and end with the statement that rotation curves are flat and therefore we need dark matter. That sufficed 40 years ago, but we’ve learned a lot since then. It’s not good enough just to have extra mass. That doesn’t cut it.

So in summary, we have two very different requirements on the dark matter. From a cosmological perspective, we need it to be dynamically cold: something non-baryonic that does not interact with photons or easily with baryons.

From a galactic perspective, we need something that knows intimately about what the baryons are doing. And when one does one thing, the other does a corresponding thing that always adds up to looking like MOND. If it doesn’t add up to looking like MOND, then it’s wrong.

So that’s where we’re at right now. These two requirements are both imperative – and contradictory.


* There is a knee-jerk response to say “mass tells light where to go” that sounds wise but is actually stupid. This is a form of misdirection that gives the illusion of deep thought without the bother of actually engaging in it.

Is NGC 1277 a problem for MOND?

Alert reader Dan Baeckström recently asked about NGC 1277, as apparently some people have been making this out to be some sort of death knell for MOND.

My first reaction was NGC who? There are lots of galaxies in the New General Catalog (new in 1888, even then drawing heavily on earlier work by the Herschels). I’m well acquainted with many individual galaxies, and can recall many dozens by name, but I do not know every single thing in the NGC. So I looked it up.

NGC 1277 in the Perseus cluster. Photo credit: NASA, ESA, M. Beasley, & P. Kehusmaa

NGC 1277 is a lenticular galaxy. Early type. Lots of old stars. These types of galaxies tend to be baryon dominated in their centers. One might even describe them as having a dearth of dark matter. This is expected in MOND, as the stars are sufficiently concentrated that these objects are in the high acceleration regime near their centers. The modification only appears when the acceleration drops below a0 = 1.2 × 10^-10 m/s²; when accelerations are above this scale, everything is Newtonian – no modification, no need for dark matter.

So, is NGC 1277 special in some way? Why does this come up now?

There is a recent paper on NGC 1277 by Comerón et al. that seems to be the source of the claims of a death knell. The title is The massive relic galaxy NGC 1277 is dark matter deficient. That sounds normal for this type of galaxy, but I guess if you disliked MOND without understanding it, you might misinterpret that title to mean there was no mass discrepancy at all, hence a problem for MOND. I guess. I’m an expert on the subject; I don’t know where non-experts get their delusions.

The science paper by Comerón et al. is a nice analysis of reasonably high quality observations of the kinematics of this galaxy. Not seeing what the worry is. Here is their Fig. 19, which summarizes the enclosed mass distribution:

Three-dimensional cumulative mass profiles of NGC 1277 (Fig. 19 of Comerón et al.). Stars and the central black hole account for everything within the observed radius; dark matter (colored bands) is not yet needed.

The first thing I did was eyeball this plot and calculate the circular speed of a test particle at 10 kpc, near the edge of the plot. Newton taught us that V² = GM/R, and the enclosed mass there looks to be just shy of 2 × 10^11 solar masses, so V = 290 km/s. That’s big, but also normal for a massive galaxy like this. The corresponding centripetal acceleration V²/R is about 2a0. As expected, this galaxy is in the high acceleration regime, so MOND predicts Newtonian behavior. That means the stars suffice to explain the dynamics; no need for dark matter over this range of radii.
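For anyone who wants to repeat the eyeball check, here it is in Python with rounded SI constants; the mass and radius are the values read off the plot:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
KPC = 3.086e19     # kiloparsec, m
A0 = 1.2e-10       # Milgrom's constant, m/s^2

M = 2e11 * M_SUN   # enclosed mass, just shy of 2e11 solar masses
R = 10 * KPC       # radius near the edge of the plot

V = math.sqrt(G * M / R)  # circular speed from V^2 = GM/R
a = V**2 / R              # centripetal acceleration

print(V / 1e3)     # ~290 km/s
print(a / A0)      # ~2: high acceleration regime, hence Newtonian behavior
```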

The second thing I did was check to see what Comerón et al. said about it themselves. They specifically address the issue, saying

One might be tempted to use the fact that NGC 1277 lacks detectable dark matter to speculate about the (in)existence of Milgromian dynamics (also known as MOND; Milgrom 1983) or other alternatives to the ΛCDM paradigm. Given a centrally concentrated baryonic mass of M⋆ ≈ 1.6 × 10^11 M⊙ and an acceleration constant a0 = 1.24 × 10^-10 m s^-2 (McGaugh 2011), a radius R = 13 kpc should be explored to be able to probe the fully Milgromian regime. This is about twice the radius that we cover and therefore our data do not permit studying the Milgromian regime.

Comerón et al. (2023)

which is what I just said. These observations do not probe the MOND regime, and do not test the theory. So, in order to think this work poses a problem for MOND, you have to (i) not understand MOND and (ii) not bother to read the paper.
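Their quoted radius is easy to verify: the Newtonian acceleration GM/R² drops to a0 at R = √(GM/a0). A quick check with rounded constants, using the mass and a0 values they quote:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
KPC = 3.086e19     # kiloparsec, m
A0 = 1.24e-10      # acceleration constant used by Comerón et al., m/s^2

M = 1.6e11 * M_SUN          # centrally concentrated baryonic mass of NGC 1277
R = math.sqrt(G * M / A0)   # radius where G*M/R^2 falls to a0

print(R / KPC)     # ~13 kpc, about twice the radius the data cover
```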

I wish I could say this was unusual. Unfortunately, it is only a bit sub-par for the course. A lot of people seem to hate MOND. I sympathize with that; I was really angry the first time it came up in my data. But I got over it: anger is not conducive to a rational assessment of the evidence. A lot of people seem to let their knee-jerk dislike of the idea completely override their sense of objectivity. All too often, they don’t even bother to do minimal fact checking.

As Romanowsky et al. pointed out, the dearth of dark matter near the centers of early type galaxies is something of a problem for the dark matter paradigm. As always, this depends on what dark matter actually predicts. The most obvious expectation is that galaxies form in cuspy dark matter halos with a high concentration of dark matter towards the center. The infall of baryons acts to further concentrate the central dark matter. So the nominal expectation is that there should be plenty of dark matter near the centers of galaxies rather than none at all. That’s not what we see here, so nominally NGC 1277 presents more of a challenge for the dark matter paradigm than it does for MOND. It makes no sense to call foul on one theory without bothering to check if the other fares better. But we seem to be well past sense and well into hypocrisy.

Checking in on Troubles with Dark Matter

It is common to come across statements like “There is overwhelming astrophysical and cosmological evidence that most of the matter in our Universe is dark matter.” This is a gross oversimplification. The astronomical data that indicate the existence of acceleration discrepancies also test the ideas we come up with to explain them. I never considered MOND until I was persuaded by the data that there were serious problems with its interpretation in terms of dark matter.

The community seems to react to problems with the dark matter interpretation in one of several ways. Physicists often seem to simply ignore them, presuming that any problems are mere astronomical details that aren’t relevant to fundamental physics. Among more serious scientists, there is a tendency to bicker over solutions, settle on something (satisfactory or not), then forget that there was ever a problem.

Benoit Famaey and I wrote a long review for Living Reviews in Relativity about a decade ago. In it, we listed some of the problems that afflicted LCDM. It is instructive to review what those were, and examine what progress has been made. The following is based on section 4 of the review. I will skip over the discussion of coincidences, which remain an issue, to focus on specific astronomical problems.

Unobserved predictions

A problem for LCDM, and indeed any theory, is when it makes predictions that are not confirmed. Here is a list of challenges, stemming from observational reality deviating from the expectations of LCDM, that we identified in our review, together with an assessment of whether they remain a concern.
The bulk flow challenge
Peculiar velocities of galaxy clusters are predicted to be on the order of 200 km/s in the ΛCDM model: as massive, recently formed objects, they should be nearly at rest with respect to the frame of the cosmic microwave background. Instead, they are observed to have bulk flows of order 1000 km/s.

This appears to remain a problem, and is related to the high collision speeds of objects like the bullet cluster, which basically shouldn’t exist.

The high-z clusters challenge
Structure formation is reputed to be one of the greatest strengths of LCDM, but the observers’ experience has consistently been to find more structure in place earlier than expected. This goes back at least to the 1987 CfA redshift survey stick man figure, which may seem normal now but surprised the bejeepers out of us at the time. It also includes clusters of galaxies, which appear at higher redshift than they should. At the time, we pointed out XMMU J2235.3-2557, with a mass of ∼ 4 × 10^14 M⊙ at z = 1.4, as being very surprising.

More recently we have El Gordo, so this remains a problem.

The Local Void challenge
Peebles has been pointing out for a long time that voids are more empty than they should be, and do not contain the population of galaxies expected in LCDM. They’re too normal, too big, and gee it would help if structure formed faster. In our review, we pointed out that the “Local Void” hosts only 3 galaxies, which is much less than the expected ∼ 20 for a typical similar void in ΛCDM.

I am not seeing much in the literature in the way of updates, so I guess this one has been forgotten and remains a problem.

The missing satellites challenge
LCDM predicts that there are many subhalos in every galactic halo, and one would naturally expect each of these to host a dwarf satellite galaxy. This is manifestly not the case: while galaxies like the Milky Way do have dwarf satellites, they number in the dozens when there should be thousands of subhalos.

The trick with this test is mapping the predicted number of halos to the corresponding galaxies that inhabit them. If there is a nonlinear relation between mass and light, then there can be fewer (or more) dwarf galaxies than halos. People seem to have decided that this problem has been solved.

It is not clear to me how the solutions map to the (contemporaneous with our review) Too Big To Fail problem in which the most massive predicted subhaloes are incompatible with hosting any of the known Milky Way satellites. It isn’t a simple nonlinearity in mass-to-light; some biggish subhalos simply don’t host galaxies, apparently, while many smaller ones do. That doesn’t make sense in terms of the many mass-dependent mechanisms that are invoked to suppress dwarf galaxy formation. Nevertheless, we are assured that it all works out.

The satellites phase-space correlation challenge
This is also known as the planes of satellites problem. At the time of our review, it had recently been recognized that the satellite galaxies of the Milky Way are observed to correlate in phase-space, lying in a seemingly rotation-supported disk. This is pretty much the opposite of what one expects in LCDM, in which subhalos are on randomly oriented, radial orbits.

The problem has gotten worse with more planes now being known around Andromeda and Centaurus A and other galaxies. There have been a steady stream of papers asserting that this is not a problem, but the “solution” seems to be to declare planes to be “common” if their incidence in simulations is a few percent. That is, they seem to agree with the observers who point out that this is a problem, and simply declare it not to be a problem.

The cusp-core challenge
The cusp-core problem is that cold dark matter halos are predicted to have cuspy central regions in which the density of dark matter rises continuously towards their centers, while fitting a dark matter mass distribution to observed galaxies prefers cored halos with a roughly constant density within some finite radius. This has a long history. Observers traditionally used the pseudoisothermal halo profile (with a constant density core) to fit rotation curve data. This was the standard model for a decade before CDM simulations predicted the presence of a central cusp. The pseudoisothermal halo continues to provide a better description of the data. The initial reaction of the theoretical community was to blame the data for not conforming to their predictions: they came up with a series of lame excuses (beam smearing, slit misplacement) for why the data were wrong. Serial improvements in the quality of data showed that these ideas were wrong, and effort switched from reality denial to model modification.

People generally seem to think this problem is solved through the use of baryon feedback to erase the cusps from galaxy halos. I do not find these explanations satisfactory, as they require a just-so fine-tuning to get things right. More generally, this is just one aspect of the challenge presented by galaxy kinematic data. This is what happens if you insist on fitting dark matter halos to data that look like what MOND predicts. Lots of people seem to think that explaining the cusp-core problem solves everything, but this is just one piece of a more general problem, which is not restricted to the central regions. Ultimately, the question remains why MOND works at all in a universe run by dark matter.

I mention all this because it is the prototypical example of why one should take the claims of theorists to have solved a problem with a huge grain of salt. Here, the problem has been redefined into something more limited, then the limited problem has been solved in a seemingly-plausible yet unconvincing way, victory is declared, and the original, more difficult problem (MOND works when it should not) is forgotten or considered to be solved by extension.

The angular momentum challenge
During galaxy formation, the baryons sink to the centers of their dark matter halos. A persistent idea is that they spin up as they do so (like a figure skater pulling her arms in), ultimately establishing a rotationally supported equilibrium in which the galaxy disk is around ten or twenty times smaller than the dark matter halo that birthed it, depending on the initial spin of the halo. This is a seductively simple picture that still has many adherents despite never having really worked. In live simulations, in which baryonic and dark matter particles interact, there is a net transfer of angular momentum from the baryonic disk to the dark halo. This results in simulated disks being much too small.

This problem is solved by invoking just-so feedback again. Whether the feedback one needs to solve this problem is consistent with the feedback one needs to solve the cusp-core problem is unclear, in large part because different groups have different implementations of feedback that all do different things. At most one of them can be right. Given familiarity with the approximations involved, a more likely number is zero.

The pure disk challenge
Structure forms hierarchically in CDM: small galaxies merge into larger ones. This process is hostile to the existence of dynamically cold, rotating disks, preferring instead to construct dynamically hot, spheroidal galaxies. All the merging destroys disks. Yet spiral galaxies are ubiquitous, and many late type galaxies have no central bulge component at all. At some point it was recognized that the existence of quiescent disks didn’t make a whole lot of sense in LCDM. To form such things, one needs to let gas dissipate and settle into a plane without getting torqued and bombarded by lots of lumps falling onto it from random directions. Indeed, it proved difficult to form large, bulgeless, thin disk galaxies in simulations.

The solution seems to be just-so feedback again, though I don’t see how that can preclude the dynamical chaos caused by merging dark matter halos regardless of what the baryons do.

The stability challenge
One of the early indications of the need for spiral galaxies to be embedded in dark matter halos was the stability of disks. Thin, dynamically cold spiral disks are everywhere around us, yet Newton can’t hold them together by himself: simulated spirals self destruct on a short timescale (a few orbits). A dark matter halo precludes this from happening by counterbalancing the self-gravity of the disk. This is a somewhat fine-tuned situation: too little halo, and a disk goes unstable; too much and disk self-gravity is suppressed – and spiral arms and bars along with it.

I recognized this as a potential test early on. Dark matter halos tend to over-stabilize low surface density disks against the formation of bars and spirals. You need a lot of dark matter to explain the rotation curve, but not so much as to preclude spiral structure. These demands can be contradictory, and the tension I anticipated long ago has been realized in subsequent analyses.

The low surface brightness spiral F568-1 (left) and its rotation curve (right). The heavy line indicates the stellar disk mass required to sustain the observed spiral arms; the light line shows what is reasonable for a normal stellar population, for which the galaxy is consistent with the BTFR and RAR. We can’t have it both ways; this is the predicted contradiction from invoking dark matter to explain both disk stability and kinematics.

I’m not aware of this problem being addressed in the context of cold dark matter models, much less solved. The problem is very much present in modern hydrodynamical simulations, as illustrated by this figure from the enormous review by Banik & Zhao:

The pattern speeds of bars as observed and simulated. Real bars are fast (R = 1) while simulated bars are slow (R > 2) due to the excessive dynamical friction from cuspy dark matter halos. (Fig. 21 from Banik & Zhao 2022).

The missing baryons challenge
The cosmic fraction of baryons – the ratio of normal matter to dark matter – is well known (16 ± 1%). One might reasonably expect individual CDM halos to be in possession of this universal baryon fraction: the sum of the stars and gas in a galaxy should be 16% of the total, mostly dark mass. However, most objects fall well short of this mark, the only exception being the most massive clusters of galaxies. So where are all the baryons?

The answer seems to be that we don’t have to answer that. Initially, the problem was overcooling: low mass galaxies should turn more of their baryons into stars than is observed. Feedback was invoked to prevent that, and it seems to be widely accepted that feedback from those stars that do form heats much of the surrounding gas so it remains mixed in with the halo in some conveniently unobservable form, or that the feedback is so vigorous that it expels the excess baryons entirely. That the observed baryon fraction declines with declining mass is attributed to the shallower potential wells of smaller galaxies not being able to hang on to their baryons as well – they are more readily expelled. That sounds reasonable at a hand-waving level, but getting it right quantitatively presents a fine-tuning problem: the observed baryon fraction correlates strongly with mass. One would expect feedback to be rather stochastic and result in a lot of scatter, but any such scatter would propagate straight into the Tully-Fisher relation, which has practically none. This fine-tuning problem is addressed by ignoring it.

The more things change

So those are the things that concerned us a decade ago. Looking back on them, there has been some progress on some items and less on others. Being generous, I would say there has at least been progress on the missing satellite problem, cusp-core, angular momentum, and pure disks. There has been no perceptible progress on the other problems, some of which (high-z clusters, disk stability) have gotten worse.

This is all written in the context of dark matter, with only passing reference to MOND. How does MOND fare for these same issues? MOND is good at making things move fast; it naturally predicts the scale of the bulk flows. It also predicted early structure formation, and is good at sweeping the voids clean. It has nothing to say about missing satellites. There are no subhalos that might be populated with dwarfs in MOND, so the question doesn’t arise. It might provide an explanation for the planes of satellites, but I am underwhelmed by this idea (or any others that I’ve heard for this particular problem). MOND is the underlying cause of the cusp-core problem, which arises entirely from trying to fit dark matter halos to galaxies that obey MOND. MOND suffers no angular momentum problem; what you see is what you get. It is noteworthy that angular momentum is not an additional free parameter, as there is no dark component with an unspecified quantity of it; it is specified entirely by the observed distribution of baryons and their motions. Similarly, making pure disks is not a problem for MOND. One can have hierarchical structure formation, but it is not required to the degree that it wipes out nascent disks in the way it did in LCDM simulations before steps were taken to make them stop doing that. Disk stability in MOND stems from the longer range of the force law rather than piling on dark matter; it is comparable for high surface brightness galaxies in both theories, but readily distinguishable for low surface brightness galaxies. This test clearly prefers MOND. Finally, the missing baryon problem doesn’t really pertain in MOND. Objects just have the baryons they have; only in rich clusters of galaxies is there a residual missing baryon problem (albeit a serious one!).

At a conservative count, that is four distinct items that have nothing to do with rotation curves where MOND performs better than LCDM. But go ahead, tell me again how MOND only explains rotation curves and nothing else.


This was basically just section 4.2 of the review. Section 4.3 was about unexpected observations – observations that were surprising in the context of LCDM. I think this post has been long enough, so I won’t go there except to say that these unexpected things were either predicted a priori by MOND, or follow so naturally from it that they could have been if the question had been posed. So it’s not just that MOND explains some things better than dark matter; it’s that it correctly predicted in advance things that were not predicted by dark matter, and that are often not well-explained by it.

The situation remains incommensurate.

The MOND at 40 conference

I’m back from the meeting in St. Andrews, and am mostly recovered from the jet lag and the hiking (it was hot and sunny, we did not pack for that!) and the driving on single-track roads like Mr. Toad. The A835 north from Ullapool provides some spectacular mountain views, but the A837 through Rosehall is more a perilous carnival attraction than a well-planned means of conveyance.

As expected, the most contentious issue was that of wide binaries. The divide was stark: there were two talks finding nary a hint of MONDian signal, just old Newton, and two talks claiming a clear MONDian signal. Nothing was resolved in the sense of one side convincing the other it was right, but there was progress in terms of [mostly] amicable discussion, with some sensible suggestions for how to proceed. One suggestion was that a neutral party should provide all the groups with several sets of mock data, one Newtonian, one MONDian, and one something else, to see if they all recovered the right answers. That’s a good test in principle, but it is a hassle to do in practice, as it is highly nontrivial to produce realistic mock Gaia data, so no one was leaping at the opportunity to stick their hand in this particular bear trap.

Xavier Hernandez made the excellent point that one should check that one’s method recovers Newtonian behavior for close binaries before making any claims to require/exclude such behavior for wide binaries. Neither MOND nor dark matter predicts any deviation from Newtonian behavior for stars orbiting each other at accelerations well in excess of a0, of which there are copious examples, so these provide a touchstone on which all should agree. He also convinced me that it was a Good Idea to have radial velocities as well as proper motions. This limits the sample size, but it helps immensely to ensure that sample binaries are indeed bound pairs of stars. Doing this, he finds MOND-like behavior.

Previously, I linked to a talk by Indranil Banik, who found Newtonian behavior. This led to an exchange with Kyu-Hyun Chae, who has now posted an update to his own analysis in which he finds MONDian behavior. It is a clear signal, and if correct, could be the smoking gun for MOND. It wouldn’t be the first one; that honor probably goes to NGC 1560, and there have been plenty of other smoking guns since then. The trick seems to be finding something that cannot be explained with dark matter, and this could play that role, since dark matter shouldn’t be relevant to binary stars. But dark matter is pretty much the ultimate Rube Goldberg machine of science, so we’ll see what explanation people come up with, should they need to do so.

At present, the facts of the matter are still in dispute, so that’s the first thing to get straight.


Thanks to everyone I met at the conference who told me how useful this blog is. That’s good to know. Communication is inefficient at best, counterproductive at worst, and most often practically nonexistent. So it is good to hear that this does some small good.

Commentary on Wide Binaries

Last time, I commented on the developing situation with binary stars as a test of MOND. I neglected to enable comments for that post, so have done so now.

Indranil Banik has shared his perspective on wide binaries in a talk on the subject that is available on Youtube, included below.

Indranil and his collaborators are not seeing a MOND effect in wide binaries. Others have, as I discussed in the previous post. After the video posted above, Indranil comments on the work of Kyu-Hyun Chae:

Regarding the article by Chae (https://arxiv.org/abs/2305.04613), equation 7 of MNRAS 506, 2269–2295 (2021) shows that the relative velocity is limited such that the v_tilde parameter (ratio of relative velocity within the sky plane to the Newtonian circular velocity at the projected separation) is at most 1 for 5 M_Sun binaries and in general is sqrt(5 M_Sun/M) for a binary of total mass M. This means v_tilde only goes up to 2 for M = 1.25 M_Sun, but more generally it goes up to a higher value at lower mass. Since the main signal in MOND is a broader v_tilde distribution at lower acceleration and a lower mass reduces the acceleration, this can lead to an artificial signal whereby lower mass systems have a larger rms v_tilde. Now a simple rms statistic is not exactly what Chae did, but this does highlight the kind of problem that can arise. Indeed, the v_tilde distribution prepared by Chae for the article in its figure 25 does show a rather sharp decline in the v_tilde distribution – there is not much of an extended tail, even less than in the model! This is obviously not due to measurement errors and contaminating effects like chance alignments, which would broaden the tail further. Rather, it is due to the upper limit to v_tilde imposed from the sample selection. This just means the underlying sample used is not well suited to the wide binary test, since it was quite clear a priori that the main signal for MOND would be in the region of v_tilde = 1-1.5 or so. One possibility is to try and restrict the analysis to a narrower range of binary total mass to try and alleviate the above concern, in which case the upper limit to v_tilde would be perhaps above 2 for the full sample used. There is however another issue in that lower accelerations generally correspond to higher separations and thus lower orbital velocities, so the fractional uncertainty in the velocity is likely to be larger. Thus, the v_tilde distribution is likely to be broader at low accelerations.
This can be counteracted by having low errors across the board, but then the key quantity is the uncertainty on v_tilde. This aspect is not handled very rigorously – it is assumed that if the proper motions are accurate to better than 1%, then v_tilde will be sufficiently well known. But if the tangential velocity is about 20 km/s, a 1% error means an error of 200 m/s on the velocity of each star, so the relative velocity has an uncertainty of about 280 m/s. This is quite large compared to typical wide binary relative velocities, which are generally a few hundred m/s. Without doing a more detailed analysis, perhaps one thing to do would be to change this 1% requirement to 0.5% or 1.5% and see what happens. I am therefore not convinced that the MOND signal claimed by Chae is genuine.

I. Banik

Kyu-Hyun Chae responded to that, but apparently many people are not able to see his response on Youtube. I cannot. So I asked him about it, and he shares it here:

Since Indranil sent this concern to me in person, I’m replying here. No cut on v_tilde is used in my analysis because it is a gravity test. I did not use equation 7 of El-Badry et al. (MNRAS 506, 2269–2295 (2021)) to cut out high v_tilde data, so there are some (though relatively small number of) data points above equation (7). I removed chance alignment cases by requiring R < 0.01 (El-Badry et al. convincingly show that R can be used to effectively remove chance alignment cases). This is the main reason why there is no high velocity tail. I have already considered varying proper motion (PM) relative errors: there are three cases PM rel error < 0.01 (nominal case), <0.003 (smaller case), and <0.2 (larger case). The conclusion on gravity anomaly (MOND signal) is the same in all three cases although the fitted f_multi (multiplicity fraction) varies. We can have more discussion at the St Andrews June meeting. I’m sure it will take some time but you will be convinced that my results are correct.

K.-H. Chae

He also shares this figure:

This is how the science sausage is made. As yet, there is no consensus.

Wide Binary Weirdness

My last post about the Milky Way was intended to be a brief introduction to our home galaxy in order to motivate the topic of binary stars. There’s too much of interest to say about the Milky Way as a galaxy, so I never got past that. Even now I feel the urge to say more, like with this extended rotation curve that I included in my contribution to the proceedings of IAU 379.

The RAR-based model rotation curve of the Milky Way extrapolated to large radii (note the switch to a logarithmic scale at 20 kpc!) for comparison to the halo stars of Bird et al (2022) and the globular clusters of Watkins et al (2019). The location of the solar system is noted by the red circle.

But instead I want to talk about data for binary stars from the Gaia mission. Gaia has been mapping the positions and proper motions of stars in the local neighborhood with unprecedented accuracy. These can be used to measure distances via trigonometric parallax, and speeds across the sky. The latter once seemed impossible to obtain in numbers with much precision; thanks to Gaia such data now outnumber radial (line of sight) velocities of comparable accuracy from spectra. That is a mind-boggling statement to anyone who has worked in the field; for all of my career (and that of any living astronomer), radial velocities have vastly outnumbered comparably well-measured proper motions. Gaia has flipped that forever-reality upside down in a few short years. Its third data release was in June of 2022; this provides enough information to identify binary stars, and we’ve had enough time to start (and I do mean start) sorting through the data.
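As an elementary illustration of the parallax distances mentioned above (this is textbook astrometry, not anything specific to Gaia's pipeline):

```python
# Toy sketch: distance in parsecs is the reciprocal of the
# trigonometric parallax in arcseconds. Gaia reports parallaxes
# in milliarcseconds, hence the conversion.

def distance_pc(parallax_mas: float) -> float:
    """Distance in parsecs from a parallax in milliarcseconds."""
    return 1.0 / (parallax_mas / 1000.0)

# A star at 200 pc, the sample edge quoted below, has a 5 mas parallax.
print(distance_pc(5.0))   # ≈ 200 pc
```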

OK, why are binary stars interesting to the missing mass (really the acceleration discrepancy) problem? In principle, they allow us to distinguish between dark matter and modified gravity theories like MOND. If galactic mass discrepancies are caused by a diffuse distribution of dark matter, gravity is normal, and binary stars should orbit each other as Newton predicts, no matter their separation: the dark matter is too diffuse to have an impact on such comparatively tiny scales. If instead the force law changes at some critical scale, then the orbital speeds of widely separated binary pairs that exceed this scale should get a boost relative to the Newtonian case.

The test is easy to visualize for a single binary system. Imagine two stars orbiting one another. When they’re close, they orbit as Newton predicts. This is, after all, how we got Newtonian gravity – as an explanation for Kepler’s Laws of planetary motion. Ours is a lonely star, not a binary, but that makes no difference to gravity: Jupiter (or any other planet) is an adequate stand-in. Newton’s universal law of gravity (with tiny tweaks from Einstein) is valid as far out in the solar system as we’ve been able to probe. For scale, Pluto is about 40 AU out (where Earth, by definition, is 1 AU from the sun).

Let’s start with a pair of stars orbiting at a distance that is comfortably in the Newtonian regime, say with a separation of 40 AU. If we know the mass of the stars, we can calculate what their orbital speed will be. Now imagine gradually separating the stars so they are farther and farther apart. For any new separation s, we can predict what the new orbital speed will be. According to Newton, this will decline in a Keplerian fashion, v ∝ 1/√s. This will continue indefinitely if Newton remains forever the law of the land. If instead the force law changes at some critical scale sc, then we would expect to see a change when the separation exceeds that scale. Same binary pair, same mass, but relatively faster speed – a faster speed that on galaxy scales leads to the inference of dark matter. In essence, we want to check if binary stars also have flat rotation curves if we look far enough out.
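The Keplerian decline is easy to make concrete with a toy calculation for a circular two-body orbit; the masses and separations here are illustrative choices, not data from any survey:

```python
import math

# Toy sketch of the Keplerian decline v ∝ 1/√s for a circular
# relative orbit of two 1 M_sun stars (standard constants).
G = 6.674e-11        # m^3 kg^-1 s^-2
M_SUN = 1.989e30     # kg
AU = 1.496e11        # m

def circular_speed(m_total_kg: float, separation_m: float) -> float:
    """Relative orbital speed for a circular two-body orbit."""
    return math.sqrt(G * m_total_kg / separation_m)

v40 = circular_speed(2 * M_SUN, 40 * AU)     # comfortably Newtonian
v160 = circular_speed(2 * M_SUN, 160 * AU)   # four times farther out
# Quadrupling the separation halves the speed.
print(v40 / 1e3, v160 / 1e3)   # speeds in km/s; v40 ≈ 6.7 km/s
```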

We have long known that simply changing the force law at some length scale sc does not work. In MOND, the critical scale is an acceleration, a0. This will map to a different sc for binary stars of different masses. For the sun, the critical acceleration scale is reached at sc ≈ 7000 AU ≈ 0.034 parsecs (pc), about a tenth of a light-year. That’s a lot bigger than the solar system (40 AU) but rather smaller than the distance to the next star (1.3 pc = 4.25 light-years). So it is conceivable that there are wide binaries in the solar neighborhood for which this test can be made – pairs of stars with separations large enough to probe the MOND regime without being so far apart that they inevitably get broken up by random interactions with unrelated stars.
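The quoted scale is easy to verify: it is just the separation at which the Newtonian pull of one solar mass falls to a0. A quick sketch, taking the usual a0 = 1.2e-10 m/s²:

```python
import math

# Back-of-envelope check of the MOND transition separation for the sun:
# solve G*M/s^2 = a0 for s.
G = 6.674e-11        # m^3 kg^-1 s^-2
M_SUN = 1.989e30     # kg
A0 = 1.2e-10         # m/s^2, the MOND acceleration scale
AU = 1.496e11        # m
PC = 3.086e16        # m

s_c = math.sqrt(G * M_SUN / A0)
print(s_c / AU)   # ≈ 7000 AU
print(s_c / PC)   # ≈ 0.034 pc
```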

Gaia is great for identifying binaries, and space is big. There are thousands of wide binaries within 200 pc of the sun where Gaia can obtain excellent measurements. That’s not a big piece of the galaxy – it is a patch roughly the size of the red circle in the rotation curve plot above – but it is still a heck of a lot of stars. A signal should emerge, and a number of papers have now appeared that attempt this exercise. And ooooo-buddy, am I confused. Frequent readers will have noticed that it has been a long time between posts. There are lots of reasons for this, but a big one is that every time I think I understand what is going on here, another paper appears with a different result.

OK, first, what do we expect? Conventionally, binaries should show Keplerian behavior whatever their separation. Dark matter is not dense enough locally to have any perceptible impact. In MOND, one might expect an effect analogous to the flattening of rotation curves, hence higher velocities than predicted by Newton. And that’s correct, but it isn’t quite that simple.

In MOND, there is the External Field Effect (EFE) in which the acceleration from distant sources can matter to the motion of a local system. This violates the strong but not the weak Equivalence Principle. In MOND, all accelerative tugs matter, whereas conventionally only local effects matter.

This is important here, as we live in a relatively high acceleration neighborhood that is close to a0. The acceleration the sun feels towards the Galactic center is about 1.8 a0. This applies to all the stars in the solar neighborhood, so even if one finds a binary pair that is widely separated enough for the force of one star on another to be less than a0, they both feel the 1.8 a0 of the greater Galaxy. A lot of math intervenes, with the net effect being that the predicted boost over Newton is less than it would have been in the absence of this effect. There is still a boost, but its predicted amplitude is less than one might naively hope.
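To get a feel for how much the external field suppresses the boost, here is a deliberately crude one-dimensional estimate using the "simple" interpolating function, with the external field just added to the internal acceleration inside ν. This is a cartoon for intuition only: the real AQUAL calculation is a nonlinear boundary-value problem, and the numbers below are not the actual prediction.

```python
import math

# Crude 1D illustration of the External Field Effect. ν(y) is the
# "simple" MOND interpolating function (an assumed choice), evaluated
# either at the internal acceleration alone or at internal + external.

def nu(y: float) -> float:
    """Simple interpolating function: g_obs = ν(g_N/a0) * g_N."""
    return 0.5 + math.sqrt(0.25 + 1.0 / y)

g_int = 0.01    # internal Newtonian acceleration of a wide pair, in units of a0
g_ext = 1.8     # Galactic field at the sun, in units of a0 (from the text)

boost_isolated = nu(g_int)          # deep-MOND boost if the pair were isolated
boost_efe = nu(g_int + g_ext)       # crude estimate with the EFE included
print(boost_isolated, boost_efe)    # ≈ 10.5 vs ≈ 1.4
```

The point of the cartoon survives more careful treatment: with the Galaxy's field included, the predicted deviation from Newton shrinks from an order of magnitude to a few tens of percent.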

The location of the solar system along the radial acceleration relation is roughly (gbar, gobs) = (1.2, 1.8) a0. At this acceleration, the effects of MOND are just beginning to appear, and the external field of the Galaxy can affect local binary stars.

One of the first papers to address this is Hernandez et al (2022). They found a boost in speed that looks like MOND but is not MOND. Rather, it is consistent with the larger speed that is predicted by MOND in the absence of the EFE. This implies that the radial acceleration relation depicted above is absolute, and somehow more fundamental than MOND. This would require a new theory that is very similar to MOND but lacks the EFE, which seems necessary in other situations. Weird.

A thorough study has independently been made by Pittordis & Sutherland (2023). I heard a talk by them over Zoom that motivated the previous post to set the stage for this one. They identify a huge sample of over 73,000 wide binaries within 300 pc of the sun. Contrary to Hernandez et al., they find no boost at all. The motions of binaries appear to remain perfectly Keplerian. There is no hint of MOND-like effects. Different.

OK, so that is pretty strong evidence against MOND, as Indranil Banik was describing to me at the IAU meeting in Potsdam, which is why I knew to tune in for the talk by Pittordis. But before I could write this post, yet another paper appeared. This preprint by Kyu-Hyun Chae splits the difference. It finds a clear excess over the Newtonian expectation that is formally highly significant. It is also about right for what is expected in MOND with the EFE, in particular with the AQUAL flavor of MOND developed by Bekenstein & Milgrom (1984).

So we have one estimate that is MOND-like but too much for MOND, one estimate that is straight-laced Newton, and one estimate that is so MOND that it can start to discern flavors of MOND.

I really don’t know what to make of all this. The test is clearly a lot more complicated than I made it sound. One does not get to play God with a single binary pair; one instead has to infer from populations of binaries of different mass stars whether a statistical excess in orbital velocity occurs at wide separations. This is challenging for lots of reasons.

For example, we need to know the mass of each star in each binary. This can be gauged by the mass-luminosity relation – how bright a main sequence star is depends on its mass – but this must be calibrated by binary stars. OK, so, it should be safe to use close binaries that are nowhere near the MOND limit, but it can still be challenging to get this right for completely mundane, traditional astronomical reasons. It remains challenging to confidently infer the properties of impossibly distant physical objects that we can never hope to visit, much less subject to laboratory scrutiny.

Another complication is the orientation and eccentricity of orbits. The plane of the orbit of each binary pair will be inclined to our line of sight so that the velocity we measure is only a portion of the full velocity. We do not have any way to know what the inclination of any one wide binary is; it is hard enough to identify them and get a relative velocity on the plane of the sky. So we have to resort to statistical estimates. The same goes for the eccentricities of the orbits: not all orbits are circles; indeed, most are not. The orbital speed depends on where an object is along its elliptical orbit, as Kepler taught us. So yet again we must make some statistical inference about the distribution of eccentricities. These kinds of estimates are both doable and subject to going badly wrong.
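The statistical nature of the projection correction can be illustrated with a toy Monte Carlo, assuming circular orbits with isotropically random orientations (eccentricity, which matters in the real analyses, is ignored here):

```python
import math
import random

# Toy Monte Carlo: for isotropically oriented circular orbits, what
# fraction of the true relative speed survives in the sky plane?
random.seed(42)

def sky_plane_fraction() -> float:
    # Isotropic direction for the velocity vector: cos(theta) uniform
    # in [-1, 1]; the component in the sky plane is sin(theta).
    cos_t = random.uniform(-1.0, 1.0)
    return math.sqrt(1.0 - cos_t * cos_t)

fractions = [sky_plane_fraction() for _ in range(100_000)]
mean_frac = sum(fractions) / len(fractions)
print(mean_frac)   # on average about π/4 ≈ 0.785 of the true speed
```

Any single pair could be seen nearly face-on (fraction near 1) or nearly edge-on along the orbital velocity (fraction near 0), which is why the test must be made on distributions rather than individual binaries.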

The net effect is that we wind up looking at distributions of relative velocities, and trying to perceive whether there is an excess high-velocity tail over and above the Newtonian expectation. This is far enough from my expertise that I do not feel qualified to judge between the works cited above. It takes time to sort these things out, and hopefully we can all come to agreement on what it is that we’re seeing. Right now, we’re not all seeing eye-to-eye.

There is a whole session devoted to this topic at the upcoming meeting on MOND. The primary protagonists will be there, so hopefully some progress can be made. At least it should be entertaining.

A few words about the Milky Way


I recently traveled to my first international meeting since the Covid pandemic began. It was good to be out in the world again. It also served as an excellent reminder of the importance of in-person interactions. On-line interactions are not an adequate substitute. I’d like to be able to recount all that I learned there, but it is too much. This post will touch on one of the much-discussed topics, our own Milky Way Galaxy.

When I put on a MOND hat, there are a few observations that puzzle me. The most persistent of these include the residual mass discrepancy in clusters, the cosmic microwave background, and the vertical motions of stars in the Milky Way disk. Though much hyped, the case for galaxies lacking dark matter does not concern me much: the examples I’ve seen so far appear to be part of the normal churn of early results that are likely to regress toward the norm as the data improve. I’ve seen this movie literally hundreds of times. I’m more interested in understanding the forest than a few outlying trees.

The Milky Way is a normal galaxy – it is part of the forest. It is easy to get lost in the leaves when one has access to data for millions going on billions of individual stars. These add up to a normal spiral galaxy, and we know a lot about external spirals that can help inform our picture of our own home.

For example, by assuming that the Milky Way falls along the radial acceleration relation defined by other spiral galaxies, I was able to build a mass model of its surface density profile. The resulting mass distribution is considerably more detailed than the usual approach of assuming a smooth exponential disk, which would be a straight line in the right-hand plot below. With the level of detail becoming available from missions like the Gaia satellite, it is necessary to move beyond such approximations.

Left: Spiral structure in the Milky Way traced by regions of gas ionized by young stars (HII regions, in red) and by the birthplaces of giant molecular clouds (GMCs, in blue). Right: the azimuthally-averaged surface density profile of stars inferred from the rotation curve of the Milky Way using the Radial Acceleration Relation. The features inferred kinematically correspond to the spiral arms known from star counts, providing a local example of Renzo’s Rule.

This model was built before Gaia data became available, and is not informed by it. Rather, I took the terminal velocities measured by McClure-Griffiths & Dickey, which provide the estimate of the Milky Way rotation curve that is most directly comparable to what we measure in external spirals, and worked out the surface density profile using the radial acceleration relation. The resulting model possesses bumps and wiggles like those we see corresponding to spiral arms in external galaxies. And indeed, it turns out that the locations of these features correspond with known spiral arms. Those are independent observations: one is from the kinematics of interstellar gas, the other from traditional star counts.
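A minimal sketch of the inversion step, assuming the RAR fitting function of McGaugh, Lelli & Schombert (2016). The actual Milky Way model involves far more than this (the bulge/bar region, bumps and wiggles, numerical derivatives), so treat this as the skeleton of the idea:

```python
import math

A0 = 1.2e-10  # m/s^2

def rar(g_bar: float) -> float:
    """RAR fitting function: g_obs = g_bar / (1 - exp(-sqrt(g_bar/a0)))."""
    return g_bar / (1.0 - math.exp(-math.sqrt(g_bar / A0)))

def invert_rar(g_obs: float) -> float:
    """Solve rar(g_bar) = g_obs for g_bar by bisection.

    rar() is monotonic and always exceeds g_bar, so [0, g_obs] brackets
    the root.
    """
    lo, hi = 1e-15, g_obs
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if rar(mid) < g_obs:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Round-trip check at roughly the solar-circle value g_obs ≈ 1.8 a0:
g_bar = invert_rar(1.8 * A0)
print(g_bar / A0)   # ≈ 1.2, consistent with (g_bar, g_obs) ≈ (1.2, 1.8) a0
```

Each inferred g_bar then maps to a baryonic surface density at that radius, which is how the bumps and wiggles in the kinematic data become bumps and wiggles in the mass profile.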

The model turns out to have a few further virtues. It matches the enclosed mass profile of the inner bulge/bar region of the Galaxy without any attempt at a specific fit. It reconciles the rotation curve measured with stars using Gaia data with that measured using gas in the interstellar medium – a subtle difference that was nevertheless highly significant. It successfully predicts that the rotation curve beyond the solar radius would not be perfectly flat, but rather decline at a specific rate – and exactly that rate was subsequently measured using Gaia. These are the sort of results that incline one to believe that the underlying physics has to be MOND. Inferring maps of the mass distribution with this level of detail is simply not possible using a dark matter model.

The rotation curve of the Milky Way as observed in interstellar gas (light grey) and as fit to the radial acceleration relation (blue line). Only the region from 3 to 8 kpc has been fit; the rest follows. This matches well stellar observations from the inner, barred region of the Milky Way (dark grey squares: Portail et al. 2017) and the gradual decline of the outer rotation curve (black squares: Eilers et al. 2019) once corrected for the presence of bumps and wiggles due to spiral arms. These require taking numerical derivatives for use in the Jeans equation; the red squares show the conventional result obtained when neglecting this effect by assuming a smooth exponential surface density profile. See McGaugh (2008 [when the method was introduced and the bulge/bar model for the inner region was built], 2016 [the main fitting paper], 2018 [an update to the distance to the Galactic center], 2019 [including bumps & wiggles in the Gaia analysis]).

Great, right? It is. It also makes a further prediction: we can use the mass model to predict the vertical motions of stars perpendicular to the Milky Way’s disk.

Most of the kinetic energy of stars orbiting in the solar neighborhood is invested in circular motion: the vast majority of stars are orbiting in the same direction in the same plane at nearly the same speed. There is some scatter, of course, but radial motions due to orbital eccentricities represent a small portion of the kinetic energy budget. As stars go round and round, they also bob up and down, oscillating perpendicular to the plane of the disk. The energy invested in these vertical motions is also small, which is why the disk of the Milky Way is thin.

View of the Milky Way in the infrared provided by the COBE satellite. The dust lanes that afflict optical light are less severe at these wavelengths, revealing that the stellar disk of the Milky Way is thin but for the peanut-shaped bulge/bar at the center.

Knowing the surface density profile of the Milky Way disk, we can predict the vertical motions. In the context of dark matter, most of the restoring force that keeps stars near the central plane is provided by the stars themselves – the dark matter halo is quasi-spherical, and doesn’t contribute much to the restoring force of the disk. In MOND, the stars and gas are all there is. So the prediction is straightforward (if technically fraught) in both paradigms. Here is a comparison of both predictions with data from Bovy & Rix (2013).

The dynamical surface density implied by vertical motions (data from Bovy & Rix 2013). The dark blue line is the prediction of the model surface density described above – assuming Newtonian gravity. The light blue line is the naive prediction of MOND.

Looks great again, right? The dark blue line goes right through the data with zero fitting. The only exception is in the radial range 5.5 to 6.4 kpc, which turns out to be where the stars probing the vertical motion are maximally different from the gas informing the prediction: we’re looking at different Galactic longitudes, right where there is or is not a spiral arm, so perhaps we should get a different answer in this range. Theory gives us the right answer, no muss, no fuss.

Except, hang on – the line that fits is the Newtonian prediction. The prediction of MOND overshoots the data. It gets the shape right, but the naive MOND prediction is for more vertical motion than we see.

By the “naive” MOND prediction, I mean that we assume that MOND gives the same boost in the vertical direction as it does in the radial direction. This is the obvious first thing to try, but it is not necessarily what happens in all possible MOND theories. Indeed, there are some flavors of modified inertia in which it should not. However, one would expect some boost, and in these data there appears to be none. We get the right answer with just Newton and stars. There’s not even room for much dark matter.
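To make the notion of a "boost" concrete, here is a minimal sketch using the so-called simple interpolating function. This is one common choice, not something MOND uniquely specifies, and the solar-neighborhood acceleration used below is an assumed illustrative value, not a fit to data.

```python
import numpy as np

A0 = 1.2e-10  # m/s^2, the canonical MOND acceleration scale


def mond_boost(g_newton, a0=A0):
    """Boost factor nu = g / g_Newton from the 'simple' interpolating
    function, nu(y) = 1/2 + sqrt(1/4 + 1/y) with y = g_Newton / a0.
    One common choice among several; MOND does not uniquely fix nu."""
    y = g_newton / a0
    return 0.5 + np.sqrt(0.25 + 1.0 / y)


# Deep Newtonian regime (high acceleration): essentially no boost.
print(mond_boost(100 * A0))   # close to 1

# Around a0 (roughly solar-neighborhood accelerations, illustrative):
# a modest but non-negligible boost.
print(mond_boost(1.8 * A0))

# Deep MOND regime: the boost grows as 1/sqrt(y).
print(mond_boost(0.01 * A0))  # about 10.5
```

The naive prediction discussed above amounts to applying the same boost factor to the vertical restoring force as to the radial one; the puzzle is that the vertical data seem to want a factor of one.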

I hope Gaia helps us sort this out. I worry that it will provide so much information that we risk missing the big picture for all the leaves.

This leaves us in a weird predicament. The radial force is extraordinarily well-described by MOND, which reveals details that we could never hope to access if all we know about is dark matter. But if we spot Newtonian gravity this non-Newtonian information from the radial motion, it predicts the correct vertical motion. It’s like we have MOND in one direction and Newton in another.

This makes no sense, so it is one of the things that worries me most about MOND. It is not encouraging for dark matter either – we don’t get to spot ourselves MOND in the radial direction then pretend that dark matter did it. At present, it feels like we are up the proverbial creek without a paddle.

Can’t be explained by science!


This clickbait title is inspired by the clickbait title of a recent story about high redshift galaxies observed by JWST. To speak in the same vernacular:

LOL!

What they mean, as I’ve discussed many times here, is that it is difficult to explain these observations in LCDM. LCDM does not encompass all of science. Science* predicted exactly this.

This story is one variation on the work of Labbe et al. that has been making the rounds since it appeared in Nature in late February. The concern is that these high redshift galaxies are big and bright. They got too big too soon.

Six high redshift galaxies from the JWST CEERS survey, as reported by Labbe et al. (2023). Not much to look at, but bear in mind that these objects are pushing the edge of the observable universe. By that standard, they are both bright and disarmingly obvious.

The work of Labbe et al. was one of the works informing the first concerns to emerge from JWST. Concerns were also raised about the credibility of those data. Are these galaxies really as massive as claimed, and at such high redshift? Let’s compare before and after publication:

Stellar masses and redshifts of galaxies from Labbe et al. The pink squares are the initial estimates that appeared in their first preprint in July 2022. The black squares with error bars are from the version published in February 2023. The shaded regions represent where galaxies are too massive too early for LCDM. The lighter region is where very few galaxies were expected to exist; the darker region is a hard no.

The results here are mixed. On the one hand, we were right to be concerned about the initial analysis. This was based in part on a ground-based calibration of the telescope before it was launched. That’s not the same as performance on the sky, which is usually a bit worse than in the lab. JWST breaks that mold, as it is actually performing better than expected. That means the bright-looking galaxies aren’t quite as intrinsically bright as was initially thought.

The correct calibration reduces both the masses and the redshifts of these galaxies. The change isn’t subtle: galaxies are less massive (the mass scale is logarithmic!) and at lower redshift than initially thought. Amusingly, only one galaxy is above redshift 9 when the early talking point was big galaxies at z = 10. (There are other credible candidates for that.) Nevertheless, the objects are clearly there, and bright (i.e., massive). They are also early. We like to obsess about redshift, but there is an inverse relation between redshift and time, so there is not much difference in clock time between z = 7 and 10. Redshift 10 is just under 500 million years after the big bang; redshift 7 just under 750 million years. Those are both in the first billion years out of a current age of over thirteen billion years. The universe was still in its infancy for both.
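The inverse relation between redshift and clock time is easy to check numerically. Here is a quick sketch assuming a flat LCDM background with Planck-like parameters (the specific H0 and density values are assumptions for illustration, not taken from the papers discussed here):

```python
import numpy as np
from scipy.integrate import quad

# Planck-like flat LCDM parameters (assumed for illustration).
H0 = 67.4                       # Hubble constant, km/s/Mpc
OMEGA_M, OMEGA_L = 0.315, 0.685
KMSMPC_TO_PER_GYR = 1.0227e-3   # 1 km/s/Mpc expressed in 1/Gyr


def age_at_z(z):
    """Cosmic time in Gyr from the big bang to redshift z:
    t(z) = integral from z to infinity of dz' / [(1+z') H(z')]."""
    h0 = H0 * KMSMPC_TO_PER_GYR
    integrand = lambda zp: 1.0 / (
        (1 + zp) * h0 * np.sqrt(OMEGA_M * (1 + zp) ** 3 + OMEGA_L)
    )
    t, _ = quad(integrand, z, np.inf)
    return t


print(f"z = 10: {age_at_z(10):.2f} Gyr")  # just under 0.5 Gyr
print(f"z =  7: {age_at_z(7):.2f} Gyr")   # roughly 0.75 Gyr
print(f"z =  0: {age_at_z(0):.1f} Gyr")   # ~13.8 Gyr, the current age
```

The quarter-billion-year gap between z = 10 and z = 7 is small change against thirteen-plus billion years of cosmic history, which is the point: both epochs sit in the universe's infancy.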

Regardless of your perspective on cosmic time scales, the observed galaxies remain well into LCDM’s danger zone, even with the revised calibration. They are no longer fully in the no-go zone, so I’m sure we’ll see lots of papers explaining how the danger zone isn’t so dangerous after all, and that we should have expected it all along. That’s why it matters more what we predict before an observation than after the answer is known.


*I emphasize science here because one of the reactions I get when I point out that this was predicted is some variation on “That doesn’t count! [because I don’t understand the way it was done.]” And yet, the predictions made and published in advance of the observations keep coming true. It’s almost as if there might be something to this so-called scientific method.

On the one hand, I understand the visceral negative reaction. It is the same reaction I had when MOND first reared its ugly head in my own data for low surface brightness galaxies. This is apparently a psychological phase through which we must pass. On the other hand, the community seems stuck in this rut: it is high time to get past it. I’ve been trying to educate a reluctant audience for over a quarter century now. I know how it pains them because I shared that pain. I got over it. If you’re a scientist still struggling to do so, that’s on you.

There are some things we have to figure out for ourselves. If you don’t believe me, fine, but then get on with doing it yourself instead of burying your head in the sand. The first thing you have to do is give MOND a chance. When I allowed that possibility, I suddenly found myself working less hard than when I was desperately trying to save dark matter. If you come to the problem sure MOND is wrong+, you’ll always get the answer you want.

+I’ve been meaning to write a post (again) about the very real problems MOND suffers in clusters of galaxies. This is an important concern. It is also just one of hundreds of things to consider in the balance. We seem willing to give LCDM infinite mulligans while any problem MOND encounters is immediately seen as fatal. If we hold them to the same standard, both are falsified. If all we care about is explanatory power, LCDM always has that covered. If we care more about successful a priori predictions, MOND is less falsified than LCDM.

There is an important debate to be had on these issues, but we’re not having it. Instead, I frequently encounter people whose first response to any mention of MOND is to cite the bullet cluster in order to shut down discussion. They are unwilling to accept that there is a debate to be had, and are inevitably surprised to learn that LCDM has trouble explaining the bullet cluster too, let alone other clusters. It’s almost as if they are just looking for an excuse to not have to engage in serious thought that might challenge their belief system.

Ask and receive


I want to start by thanking those of you who have contributed to maintaining this site. This is not a money making venture, but it does help offset the cost of operations.

The title is not related to this, but rather to a flood of papers addressing the questions posed in recent posts. I was asking last time “take it where?” because it is hard to know what cosmology under UT will look like. In particular, how does structure formation work? We need a relativistic theory to progress further than we already have.

There are some papers that partially address this question. Very recently, there have been a whole slew of them. That’s good! It is also a bit overwhelming – I cannot keep up! Here I note a few recent papers that touch on structure formation in MOND. This is an incomplete list, and I haven’t had the opportunity to absorb much of it.

First, there is a paper by Milgrom with his relativistic BIMOND theory. It shows some possibility of subtle departures from FLRW along the lines of what I was describing with UT. Intriguingly, it explicitly shows that the assumptions we made to address structure formation with plain MOND should indeed hold. This is important because a frequent excuse employed to avoid acknowledging MOND’s predictions is that they don’t count if there is no relativistic theory. This is more a form of solution aversion than a serious scientific complaint, but people sure lean hard into it. So go read Milgrom’s papers.

Another paper I was looking forward to but didn’t know was in the offing is a rather general treatment of structure formation in relativistic extensions of MOND. There does seem to be some promise for assessing what could work in theories like AeST, and how it relates to earlier work. As a general treatment, there are a lot of options to sort through. Doing so will take a lot of effort by a lot of people over a considerable span of time.

There is also work on gravitational waves, and a variation dubbed a khronometric theory. I, well, I know what both of them are talking about to some extent, and yet some of what they say is presently incomprehensible to me. Clearly I have a lot still to learn. That’s a good problem to have.

I have been thinking for a while now that what we need is a period of a theoretical wild west. People need to try ideas, work through their consequences, and see what works and what does not. Ultimately, most ideas will fail, as there can only be one correct depiction of reality (I sure hope). It will take a lot of work and angst and bickering before we get there: this is perhaps only the beginning of what has already been a long journey for those of us who have been paying attention.

New and stirring things are belittled because if they are not belittled, the humiliating question arises, ‘Why then are you not taking part in them?’

H. G. Wells

Take it where?


I had written most of the post below the line before an exchange with a senior colleague who accused me of asking us to abandon General Relativity (GR). Anyone who read the last post knows that this is the opposite of true. So how does this happen?

Much of the field is mired in bad ideas that seemed like good ideas in the 1980s. There has been some progress, but the idea that MOND is an abandonment of GR is a misconception I recognize from that time. It arose because the initial MOND hypothesis suggested modifying the law of inertia without showing a clear path to how this might be consistent with GR. GR was built on the Equivalence Principle (EP), the equivalence1 of gravitational charge with inertial mass. The original MOND hypothesis directly contradicted that, so it was a fair concern in 1983. It was not by 19842. I was still an undergraduate then, so I don’t know the sociology, but I get the impression that most of the community wrote MOND off at this point and never gave it further thought.

I guess this is why I still encounter people with this attitude, that someone is trying to rob them of GR. It feels like we’re always starting at square one, like there has been zero progress in forty years. I hope it isn’t that bad, but I admit my patience is wearing thin.

I’m trying to help you. Don’t waste your entire career chasing phantoms.

What MOND does ask us to abandon is the Strong Equivalence Principle. Not the Weak EP, nor even the Einstein EP. Just the Strong EP. That’s a much more limited ask than abandoning all of GR. Indeed, all flavors of EP are subject to experimental test. The Weak EP has been repeatedly validated, but there is nothing about MOND that implies platinum would fall differently from titanium. Experimental tests of the Strong EP are less favorable.

I understand that MOND seems impossible. It also keeps having its predictions come true. This combination is what makes it important. The history of science is chock full of ideas that were initially rejected as impossible or absurd, going all the way back to heliocentrism. The greater the cognitive dissonance, the more important the result.


Continuing the previous discussion of UT, where do we go from here? If we accept that maybe we have all these problems in cosmology because we’re piling on auxiliary hypotheses to continue to be able to approximate UT with FLRW, what now?

I don’t know.

It’s hard to accept that we don’t understand something we thought we understood. Scientists hate revisiting issues that seem settled. Feels like a waste of time. It also feels like a waste of time continuing to add epicycles to a zombie theory, be it LCDM or MOND or the phoenix universe or tired light or whatever fantasy reality you favor. So, painful as it may be, one has to find a little humility to step back and take account of what we know empirically independent of the interpretive veneer of theory.

As I’ve said before, I think we do know that the universe is expanding and passed through an early hot phase that bequeathed us the primordial abundances of the light elements (BBN) and the relic radiation field that we observe as the cosmic microwave background (CMB). There’s a lot more to it than that, and I’m not going to attempt to recite it all here.

Still, to give one pertinent example, BBN only works if the expansion rate is as expected during the epoch of radiation domination. So whatever is going on has to converge to that early on. This is hardly surprising for UT since it was stipulated to contain GR in the relevant limit, but we don’t actually know how it does so until we work out what UT is – a tall order that we can’t expect to accomplish overnight, or even over the course of many decades without a critical mass of scientists thinking about it (and not being vilified by other scientists for doing so).

Another example is that the cosmological principle – that the universe is homogeneous and isotropic – is observed to be true in the CMB. The temperature is the same all over the sky to one part in 100,000. That’s isotropy. The temperature is tightly coupled to the density, so if the temperature is the same everywhere, so is the density. That’s homogeneity. So both of the assumptions made by the cosmological principle are corroborated by observations of the CMB.

The cosmological principle is extremely useful for solving the equations of GR as applied to the whole universe. If the universe has a uniform density on average, then the solution is straightforward (though it is rather tedious to work through to the Friedmann equation). If the universe is not homogeneous and isotropic, then it becomes a nightmare to solve the equations. One needs to know where everything was for all of time.
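For concreteness, this is what the straightforward solution looks like: under homogeneity and isotropy, the Einstein field equations collapse to the Friedmann equation for the scale factor a(t),

```latex
% Friedmann equation for a homogeneous, isotropic universe:
H^2 \equiv \left(\frac{\dot{a}}{a}\right)^2
  = \frac{8\pi G}{3}\,\rho \;-\; \frac{k c^2}{a^2} \;+\; \frac{\Lambda c^2}{3},
```

a single ordinary differential equation in time. Drop the cosmological principle and there is no such reduction: the full field equations must be solved with the actual distribution of matter everywhere, for all of time.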

Starting from the uniform condition of the CMB, it is straightforward to show that the assumption of homogeneity and isotropy should persist on large scales up to the present day. “Small” things like galaxies go nonlinear and collapse, but huge volumes containing billions of galaxies should remain in the linear regime and these small-scale variations average out. One cubic Gigaparsec will have the same average density as the next as the next, so the cosmological principle continues to hold today.

Anyone spot the rub? I said homogeneity and isotropy should persist. This statement assumes GR. Perhaps it doesn’t hold in UT?

This aspect of cosmology is so deeply embedded in everything that we do in the field that it was only recently that I realized it might not hold absolutely – and I’ve been actively contemplating such a possibility for a long time. Shouldn’t have taken me so long. Felten (1984) realized right away that a MONDian universe would depart from isotropy by late times. I read that paper long ago but didn’t grasp the significance of that statement. I did absorb that in the absence of a cosmological constant (which no one believed in at the time), the universe would inevitably recollapse, regardless of what the density was. This seems like an elegant solution to the flatness/coincidence problem that obsessed cosmologists at the time. There is no special value of the mass density that provides an over/under line demarcating eternal expansion from eventual recollapse, so there is no coincidence problem. All naive MOND cosmologies share the same ultimate fate, so it doesn’t matter what we observe for the mass density.

MOND departs from isotropy for the same reason it forms structure fast: it is inherently non-linear. As well as predicting that big galaxies would form by z=10, Sanders (1998) correctly anticipated the size of the largest structures collapsing today (things like the local supercluster Laniakea) and the scale of homogeneity (a few hundred Mpc if there is a cosmological constant). Pretty much everyone who looked into it came to similar conclusions.

But MOND and cosmology, as we know it in the absence of UT, are incompatible. Where LCDM encompasses both cosmology and the dynamics of bound systems (dark matter halos3), MOND addresses the dynamics of low acceleration systems (the most common examples being individual galaxies) but says nothing about cosmology. So how do we proceed?

For starters, we have to admit our ignorance. From there, one has to assume some expanding background – that much is well established – and ask what happens to particles responding to a MONDian force-law in this background, starting from the very nearly uniform initial condition indicated by the CMB. From that simple starting point, it turns out one can get a long way without knowing the details of the cosmic expansion history or the metric that so obsess cosmologists. These are interesting things, to be sure, but they are aspects of UT we don’t know and can manage without to some finite extent.

For one, the thermal history of the universe is pretty much the same with or without dark matter, with or without a cosmological constant. Without dark matter, structure can’t get going until after thermal decoupling (when the matter is free to diverge thermally from the temperature of the background radiation). After that happens, around z = 200, the baryons suddenly find themselves in the low acceleration regime, newly free to respond to the nonlinear force of MOND, and structure starts forming fast, with the consequences previously elaborated.

But what about the expansion history? The geometry? The big questions of cosmology?

Again, I don’t know. MOND is a dynamical theory that extends Newton. It doesn’t address these questions. Hence the need for UT.

I’ve encountered people who refuse to acknowledge4 that MOND gets predictions like z=10 galaxies right without a proper theory for cosmology. That attitude puts the cart before the horse. One doesn’t look for UT unless well motivated. That one is able to correctly predict 25 years in advance something that comes as a huge surprise to cosmologists today is the motivation. Indeed, the degree of surprise and the longevity of the prediction amplify the motivation: if this doesn’t get your attention, what possibly could?

There is no guarantee that our first attempt at UT (or our second or third or fourth) will work out. It is possible that in the search for UT, one comes up with a theory that fails to do what was successfully predicted by the more primitive theory. That just lets you know you’ve taken a wrong turn. It does not mean that a correct UT doesn’t exist, or that the initial prediction was some impossible fluke.

One candidate theory for UT is bimetric MOND. This appears to justify the assumptions made by Sanders’s early work, and provide a basis for a relativistic theory that leads to rapid structure formation. Whether it can also fit the acoustic power spectrum of the CMB as well as LCDM and AeST has yet to be seen. These things take time and effort. What they really need is a critical mass of people working on the problem – a community that enjoys the support of other scientists and funding institutions like NSF. Until we have that5, progress will remain grudgingly slow.


1The equivalence of gravitational charge and inertial mass means that the m in F = GMm/d² is identically the same as the m in F = ma. Modified gravity changes the former; modified inertia the latter.

2Bekenstein & Milgrom (1984) showed how a modification of Newtonian gravity could avoid the non-conservation issues suffered by the original hypothesis of modified inertia. They also outlined a path towards a generally covariant theory that Bekenstein pursued for the rest of his life. That he never managed to obtain a completely satisfactory version is often cited as evidence that it can’t be done, since he was widely acknowledged as one of the smartest people in the field. One wonders why he persisted if, as these detractors would have us believe, the smart thing to do was not even try.

3The data for galaxies do not look like the dark matter halos predicted by LCDM.

4I have entirely lost patience with this attitude. If a phenomenon is correctly predicted in advance in the literature, we are obliged as scientists to take it seriously+. Pretending that it is not meaningful in the absence of UT is just an avoidance strategy: an excuse to ignore inconvenient facts.

+I’ve heard eminent scientists describe MOND’s predictive ability as “magic.” This also seems like an avoidance strategy. I, for one, do not believe in magic. That it works as well as it does – that it works at all – must be telling us something about the natural world, not the supernatural.

5There does exist a large and active community of astroparticle physicists trying to come up with theories for what the dark matter could be. That’s good: that’s what needs to happen, and we should exhaust all possibilities. We should do the same for new dynamical theories.