Commentary on Wide Binaries

Last time, I commented on the developing situation with binary stars as a test of MOND. I neglected to enable comments for that post, so have done so now.

Indranil Banik has shared his perspective on wide binaries in a talk on the subject that is available on YouTube, included below.

Indranil and his collaborators are not seeing a MOND effect in wide binaries. Others have, as I discussed in the previous post. After the video posted above, Indranil comments on the work of Kyu-Hyun Chae:

Regarding the article by Chae (https://arxiv.org/abs/2305.04613), equation 7 of MNRAS 506, 2269–2295 (2021) shows that the relative velocity is limited such that the v_tilde parameter (ratio of relative velocity within the sky plane to the Newtonian circular velocity at the projected separation) is at most 1 for 5 M_Sun binaries and in general is sqrt(5 M_Sun/M) for a binary of total mass M. This means v_tilde only goes up to 2 for M = 1.25 M_Sun, but more generally it goes up to a higher value at lower mass. Since the main signal in MOND is a broader v_tilde distribution at lower acceleration and a lower mass reduces the acceleration, this can lead to an artificial signal whereby lower mass systems have a larger rms v_tilde. Now a simple rms statistic is not exactly what Chae did, but this does highlight the kind of problem that can arise. Indeed, the v_tilde distribution prepared by Chae for the article in its figure 25 does show a rather sharp decline in the v_tilde distribution – there is not much of an extended tail, even less than in the model! This is obviously not due to measurement errors and contaminating effects like chance alignments, which would broaden the tail further. Rather, it is due to the upper limit to v_tilde imposed from the sample selection. This just means the underlying sample used is not well suited to the wide binary test, since it was quite clear a priori that the main signal for MOND would be in the region of v_tilde = 1-1.5 or so. One possibility is to try and restrict the analysis to a narrower range of binary total mass to try and alleviate the above concern, in which case the upper limit to v_tilde would be perhaps above 2 for the full sample used. There is however another issue in that lower accelerations generally correspond to higher separations and thus lower orbital velocities, so the fractional uncertainty in the velocity is likely to be larger. Thus, the v_tilde distribution is likely to be broader at low accelerations. This can be counteracted by having low errors across the board, but then the key quantity is the uncertainty on v_tilde. This aspect is not handled very rigorously – it is assumed that if the proper motions are accurate to better than 1%, then v_tilde will be sufficiently well known. But if the tangential velocity is about 20 km/s, a 1% error means an error of 200 m/s on the velocity of each star, so the relative velocity has an uncertainty of about 280 m/s. This is quite large compared to typical wide binary relative velocities, which are generally a few hundred m/s. Without doing a more detailed analysis, perhaps one thing to do would be to change this 1% requirement to 0.5% or 1.5% and see what happens. I am therefore not convinced that the MOND signal claimed by Chae is genuine.

I. Banik
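
For those following along, the error propagation in Indranil's comment is easy to reproduce. A minimal sketch of the arithmetic (not of either group's actual pipeline), using his round numbers:

```python
import numpy as np

# A 1% proper-motion error on a star with a ~20 km/s tangential velocity,
# applied to both members of a binary (values taken from the comment above).
v_tan = 20_000.0                # m/s, assumed typical tangential velocity
sigma_star = 0.01 * v_tan       # 200 m/s error per star
sigma_rel = np.hypot(sigma_star, sigma_star)  # add the two in quadrature
print(f"{sigma_rel:.0f} m/s")   # ~283 m/s, comparable to typical wide binary
                                # relative velocities of a few hundred m/s
```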

Kyu-Hyun Chae responded to that, but apparently many people are not able to see his response on YouTube. I cannot. So I asked him about it, and he shares it here:

Since Indranil sent this concern to me in person, I’m replying here. No cut on v_tilde is used in my analysis because it is a gravity test. I did not use equation 7 of El-Badry et al. (MNRAS 506, 2269–2295 (2021)) to cut out high v_tilde data, so there are some (though a relatively small number of) data points above equation (7). I removed chance alignment cases by requiring R < 0.01 (El-Badry et al. convincingly show that R can be used to effectively remove chance alignment cases). This is the main reason why there is no high velocity tail. I have already considered varying proper motion (PM) relative errors: there are three cases, PM rel error < 0.01 (nominal case), < 0.003 (smaller case), and < 0.2 (larger case). The conclusion on gravity anomaly (MOND signal) is the same in all three cases although the fitted f_multi (multiplicity fraction) varies. We can have more discussion at the St Andrews June meeting. I’m sure it will take some time but you will be convinced that my results are correct.

K.-H. Chae

He also shares this figure:

This is how the science sausage is made. As yet, there is no consensus.

Wide Binary Weirdness

My last post about the Milky Way was intended to be a brief introduction to our home galaxy in order to motivate the topic of binary stars. There’s too much of interest to say about the Milky Way as a galaxy, so I never got past that. Even now I feel the urge to say more, like with this extended rotation curve that I included in my contribution to the proceedings of IAU 379.

The RAR-based model rotation curve of the Milky Way extrapolated to large radii (note the switch to a logarithmic scale at 20 kpc!) for comparison to the halo stars of Bird et al. (2022) and the globular clusters of Watkins et al. (2019). The location of the solar system is noted by the red circle.

But instead I want to talk about data for binary stars from the Gaia mission. Gaia has been mapping the positions and proper motions of stars in the local neighborhood with unprecedented accuracy. These can be used to measure distances via trigonometric parallax, and speeds along the sky. The latter once seemed impossible to obtain in numbers with much precision; thanks to Gaia such data now outnumber radial (line of sight) velocities of comparable accuracy from spectra. That is a mind-boggling statement to anyone who has worked in the field; for all of my career (and that of any living astronomer), radial velocities have vastly outnumbered comparably well-measured proper motions. Gaia has flipped that forever-reality upside down in a few short years. Its third data release was in June of 2022; this provides enough information to identify binary stars, and we’ve had enough time to start (and I do mean start) sorting through the data.

OK, why are binary stars interesting to the missing mass (really the acceleration discrepancy) problem? In principle, they allow us to distinguish between dark matter and modified gravity theories like MOND. If galactic mass discrepancies are caused by a diffuse distribution of dark matter, gravity is normal, and binary stars should orbit each other as Newton predicts, no matter their separation: the dark matter is too diffuse to have an impact on such comparatively tiny scales. If instead the force law changes at some critical scale, then the orbital speeds of widely separated binary pairs that exceed this scale should get a boost relative to the Newtonian case.

The test is easy to visualize for a single binary system. Imagine two stars orbiting one another. When they’re close, they orbit as Newton predicts. This is, after all, how we got Newtonian gravity – as an explanation for Kepler’s Laws of planetary motion. Ours is a lonely star, not a binary, but that makes no difference to gravity: Jupiter (or any other planet) is an adequate stand-in. Newton’s universal law of gravity (with tiny tweaks from Einstein) is valid as far out in the solar system as we’ve been able to probe. For scale, Pluto is about 40 AU out (where Earth, by definition, is 1 AU from the sun).

Let’s start with a pair of stars orbiting at a distance that is comfortably in the Newtonian regime, say with a separation of 40 AU. If we know the mass of the stars, we can calculate what their orbital speed will be. Now imagine gradually separating the stars so they are farther and farther apart. For any new separation s, we can predict what the new orbital speed will be. According to Newton, this will decline in a Keplerian fashion, v ~ 1/√s. This will continue indefinitely if Newton remains forever the law of the land. If instead the force law changes at some critical scale sc, then we would expect to see a change when the separation exceeds that scale. Same binary pair, same mass, but relatively faster speed – a faster speed that on galaxy scales leads to the inference of dark matter. In essence, we want to check if binary stars also have flat rotation curves if we look far enough out.
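
A toy calculation makes the contrast concrete. This is a sketch with assumed round numbers (two solar-mass stars, the standard a0 = 1.2e-10 m/s²), not anyone’s published analysis:

```python
import numpy as np

G = 6.674e-11      # m^3 kg^-1 s^-2
MSUN = 1.989e30    # kg
AU = 1.496e11      # m
A0 = 1.2e-10       # m/s^2, MOND acceleration scale (assumed value)

M = 2 * MSUN       # an illustrative binary: two solar-mass stars

# Deep-MOND asymptote (ignoring the external field): the binary analog
# of a flat rotation curve, v^4 = G*M*a0
v_flat = (G * M * A0) ** 0.25
print(f"deep-MOND asymptote: {v_flat:.0f} m/s")

for s_au in (40, 400, 4000, 40000):
    v_newton = np.sqrt(G * M / (s_au * AU))   # Keplerian decline, v ~ 1/sqrt(s)
    print(f"{s_au:>6} AU: Newtonian v = {v_newton:.0f} m/s")
```

The Newtonian speed keeps falling with separation; in the simplest MOND picture it would instead level off at a few hundred m/s.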

We have long known that simply changing the force law at some length scale sc does not work. In MOND, the critical scale is an acceleration, a0. This will map to a different sc for binary stars of different masses. For the sun, the critical acceleration scale is reached at sc ≈ 7000 AU ≈ 0.034 parsecs (pc), about a tenth of a light-year. That’s a lot bigger than the solar system (40 AU) but rather smaller than the distance to the next star (1.3 pc = 4.25 light-years). So it is conceivable that there are wide binaries in the solar neighborhood for which this test can be made – pairs of stars with separations large enough to probe the MOND regime without being so far apart that they inevitably get broken up by random interactions with unrelated stars.
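
The critical separation quoted above follows from setting the Newtonian acceleration of a solar-mass star equal to a0; a quick check with standard constants:

```python
import numpy as np

G = 6.674e-11    # m^3 kg^-1 s^-2
MSUN = 1.989e30  # kg
AU = 1.496e11    # m
PC = 3.086e16    # m
A0 = 1.2e-10     # m/s^2

# Radius at which G*M/r^2 drops to a0 for a point mass M
r_c = np.sqrt(G * MSUN / A0)
print(f"{r_c / AU:.0f} AU = {r_c / PC:.3f} pc")   # ~7000 AU, ~0.034 pc
```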

Gaia is great for identifying binaries, and space is big. There are thousands of wide binaries within 200 pc of the sun where Gaia can obtain excellent measurements. That’s not a big piece of the galaxy – it is a patch roughly the size of the red circle in the rotation curve plot above – but it is still a heck of a lot of stars. A signal should emerge, and a number of papers have now appeared that attempt this exercise. And ooooo-buddy, am I confused. Frequent readers will have noticed that it has been a long time between posts. There are lots of reasons for this, but a big one is that every time I think I understand what is going on here, another paper appears with a different result.

OK, first, what do we expect? Conventionally, binaries should show Keplerian behavior whatever their separation. Dark matter is not dense enough locally to have any perceptible impact. In MOND, one might expect an effect analogous to the flattening of rotation curves, hence higher velocities than predicted by Newton. And that’s correct, but it isn’t quite that simple.

In MOND, there is the External Field Effect (EFE) in which the acceleration from distant sources can matter to the motion of a local system. This violates the strong but not the weak Equivalence Principle. In MOND, all accelerative tugs matter, whereas conventionally only local effects matter.

This is important here, as we live in a relatively high acceleration neighborhood that is close to a0. The acceleration the sun feels towards the Galactic center is about 1.8 a0. This applies to all the stars in the solar neighborhood, so even if one finds a binary pair that is widely separated enough for the force of one star on another to be less than a0, they both feel the 1.8 a0 of the greater Galaxy. A lot of math intervenes, with the net effect being that the predicted boost over Newton is less than it would have been in the absence of this effect. There is still a boost, but its predicted amplitude is less than one might naively hope.
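
To put rough numbers on the EFE argument, here is a sketch using illustrative solar-circle values (V ≈ 233 km/s at R ≈ 8.1 kpc; these are assumptions for the example, not a fit):

```python
G = 6.674e-11     # m^3 kg^-1 s^-2
MSUN = 1.989e30   # kg
AU = 1.496e11     # m
KPC = 3.086e19    # m
A0 = 1.2e-10      # m/s^2

# External field: the Sun's centripetal acceleration about the Galactic center
g_ext = (233e3) ** 2 / (8.1 * KPC)
print(f"g_ext = {g_ext / A0:.1f} a0")    # ~1.8 a0

# Internal field: pull of a 1 M_sun companion at a 20,000 AU separation
g_int = G * MSUN / (20_000 * AU) ** 2
print(f"g_int = {g_int / A0:.2f} a0")    # ~0.12 a0, far below the external field
```

Even though the internal acceleration of such a pair is well below a0, the binary is embedded in an external field of ~1.8 a0, which suppresses the MOND boost.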

The location of the solar system along the radial acceleration relation is roughly (gbar, gobs) = (1.2, 1.8) a0. At this acceleration, the effects of MOND are just beginning to appear, and the external field of the Galaxy can affect local binary stars.

One of the first papers to address this is Hernandez et al. (2022). They found a boost in speed that looks like MOND but is not MOND. Rather, it is consistent with the larger speed that is predicted by MOND in the absence of the EFE. This implies that the radial acceleration relation depicted above is absolute, and somehow more fundamental than MOND. This would require a new theory that is very similar to MOND but lacks the EFE, which seems necessary in other situations. Weird.

A thorough study has independently been made by Pittordis & Sutherland (2023). I heard a talk by them over Zoom that motivated the previous post to set the stage for this one. They identify a huge sample of over 73,000 wide binaries within 300 pc of the sun. Contrary to Hernandez et al., they find no boost at all. The motions of binaries appear to remain perfectly Keplerian. There is no hint of MOND-like effects. Different.

OK, so that is pretty strong evidence against MOND, as Indranil Banik was describing to me at the IAU meeting in Potsdam, which is why I knew to tune in for the talk by Pittordis. But before I could write this post, yet another paper appeared. This preprint by Kyu-Hyun Chae splits the difference. It finds a clear excess over the Newtonian expectation that is formally highly significant. It is also about right for what is expected in MOND with the EFE, in particular with the AQUAL flavor of MOND developed by Bekenstein & Milgrom (1984).

So we have one estimate that is MOND-like but too much for MOND, one estimate that is straight-laced Newton, and one estimate that is so MOND that it can start to discern flavors of MOND.

I really don’t know what to make of all this. The test is clearly a lot more complicated than I made it sound. One does not get to play God with a single binary pair; one instead has to infer from populations of binaries of different mass stars whether a statistical excess in orbital velocity occurs at wide separations. This is challenging for lots of reasons.

For example, we need to know the mass of each star in each binary. This can be gauged by the mass-luminosity relation – how bright a main sequence star is depends on its mass – but this must be calibrated by binary stars. OK, so, it should be safe to use close binaries that are nowhere near the MOND limit, but it can still be challenging to get this right for completely mundane, traditional astronomical reasons. It remains challenging to confidently infer the properties of impossibly distant physical objects that we can never hope to visit, much less subject to laboratory scrutiny.

Another complication is the orientation and eccentricity of orbits. The plane of the orbit of each binary pair will be inclined to our line of sight so that the velocity we measure is only a portion of the full velocity. We do not have any way to know what the inclination of any one wide binary is; it is hard enough to identify them and get a relative velocity on the plane of the sky. So we have to resort to statistical estimates. The same goes for the eccentricities of the orbits: not all orbits are circles; indeed, most are not. The orbital speed depends on where an object is along its elliptical orbit, as Kepler taught us. So yet again we must make some statistical inference about the distribution of eccentricities. These kinds of estimates are both doable and subject to going badly wrong.
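
A tiny Monte Carlo conveys the flavor of the inclination problem. This toy treats circular orbits with isotropically oriented velocity vectors; eccentricity would add further scatter:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Isotropic unit vectors standing in for orbital velocity directions
v = rng.normal(size=(n, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)

# We only measure the component in the plane of the sky (perpendicular to z)
v_sky = np.hypot(v[:, 0], v[:, 1])
print(f"mean projection factor: {v_sky.mean():.3f}")   # -> pi/4 ~ 0.785
```

On average we see only about 79% of the true speed, and any individual system can be projected to nearly zero, so the analysis must be statistical.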

The net effect is that we wind up looking at distributions of relative velocities, and trying to perceive whether there is an excess high-velocity tail over and above the Newtonian expectation. This is far enough from my expertise that I do not feel qualified to judge between the works cited above. It takes time to sort these things out, and hopefully we can all come to agreement on what it is that we’re seeing. Right now, we’re not all seeing eye-to-eye.

There is a whole session devoted to this topic at the upcoming meeting on MOND. The primary protagonists will be there, so hopefully some progress can be made. At least it should be entertaining.

A few words about the Milky Way

I recently traveled to my first international meeting since the Covid pandemic began. It was good to be out in the world again. It also served as an excellent reminder of the importance of in-person interactions. On-line interactions are not an adequate substitute. I’d like to be able to recount all that I learned there, but it is too much. This post will touch on one of the much-discussed topics, our own Milky Way Galaxy.

When I put on a MOND hat, there are a few observations that puzzle me. The most persistent of these include the residual mass discrepancy in clusters, the cosmic microwave background, and the vertical motions of stars in the Milky Way disk. Though much hyped, the case for galaxies lacking dark matter does not concern me much: the examples I’ve seen so far appear to be part of the normal churn of early results that are likely to regress toward the norm as the data improve. I’ve seen this movie literally hundreds of times. I’m more interested in understanding the forest than a few outlying trees.

The Milky Way is a normal galaxy – it is part of the forest. It is easy to get lost in the leaves when one has access to data for millions going on billions of individual stars. These add up to a normal spiral galaxy, and we know a lot about external spirals that can help inform our picture of our own home.

For example, by assuming that the Milky Way falls along the radial acceleration relation defined by other spiral galaxies, I was able to build a mass model of its surface density profile. The resulting mass distribution is considerably more detailed than the usual approach of assuming a smooth exponential disk, which would be a straight line in the right-hand plot below. With the level of detail becoming available from missions like the Gaia satellite, it is necessary to move beyond such approximations.

Left: Spiral structure in the Milky Way traced by regions of gas ionized by young stars (HII regions, in red) and by the birthplaces of giant molecular clouds (GMCs, in blue). Right: the azimuthally-averaged surface density profile of stars inferred from the rotation curve of the Milky Way using the Radial Acceleration Relation. The features inferred kinematically correspond to the spiral arms known from star counts, providing a local example of Renzo’s Rule.

This model was built before Gaia data became available, and is not informed by it. Rather, I took the terminal velocities measured by McClure-Griffiths & Dickey, which provide the estimate of the Milky Way rotation curve that is most directly comparable to what we measure in external spirals, and worked out the surface density profile using the radial acceleration relation. The resulting model possesses bumps and wiggles like those we see corresponding to spiral arms in external galaxies. And indeed, it turns out that the locations of these features correspond with known spiral arms. Those are independent observations: one is from the kinematics of interstellar gas, the other from traditional star counts.
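
A minimal sketch of that inversion, assuming the radial acceleration relation fitting function of McGaugh, Lelli, & Schombert (2016) and illustrative solar-circle numbers (the real analysis uses the full terminal-velocity curve):

```python
import numpy as np
from scipy.optimize import brentq

A0 = 1.2e-10   # m/s^2, the acceleration scale in the RAR fit

def g_obs_from_gbar(gbar):
    # RAR fitting function: g_obs = g_bar / (1 - exp(-sqrt(g_bar/a0)))
    return gbar / (1.0 - np.exp(-np.sqrt(gbar / A0)))

def gbar_from_gobs(gobs):
    # Numerically invert the RAR to find the baryonic acceleration,
    # from which a surface density profile can be built
    return brentq(lambda g: g_obs_from_gbar(g) - gobs, 1e-15, 1e-7)

# Example: observed centripetal acceleration at the solar circle,
# g_obs = V^2/R with V ~ 233 km/s at R ~ 8.1 kpc (assumed values)
g_obs = (233e3) ** 2 / (8.1 * 3.086e19)
print(f"g_bar = {gbar_from_gobs(g_obs) / A0:.1f} a0")   # ~1.2 a0
```

Repeating this at each radius turns an observed rotation curve into a baryonic acceleration profile, and thence a surface density profile, bumps and wiggles included.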

The model turns out to have a few further virtues. It matches the enclosed mass profile of the inner bulge/bar region of the Galaxy without any attempt at a specific fit. It reconciles the rotation curve measured with stars using Gaia data with that measured using gas in the interstellar medium – a subtle difference that was nevertheless highly significant. It successfully predicts that the rotation curve beyond the solar radius would not be perfectly flat, but rather decline at a specific rate – and exactly that rate was subsequently measured using Gaia. These are the sort of results that incline one to believe that the underlying physics has to be MOND. Inferring maps of the mass distribution with this level of detail is simply not possible using a dark matter model.

The rotation curve of the Milky Way as observed in interstellar gas (light grey) and as fit to the radial acceleration relation (blue line). Only the region from 3 to 8 kpc has been fit; the rest follows. This matches well the stellar observations from the inner, barred region of the Milky Way (dark grey squares: Portail et al. 2017) and the gradual decline of the outer rotation curve (black squares: Eilers et al. 2019) once corrected for the presence of bumps and wiggles due to spiral arms. These require taking numerical derivatives for use in the Jeans equation; the red squares show the conventional result obtained when neglecting this effect by assuming a smooth exponential surface density profile. See McGaugh (2008 [when the method was introduced and the bulge/bar model for the inner region was built], 2016 [the main fitting paper], 2018 [an update to the distance to the Galactic center], 2019 [including bumps & wiggles in the Gaia analysis]).

Great, right? It is. It also makes a further prediction: we can use the mass model to predict the vertical motions of stars perpendicular to the Milky Way’s disk.

Most of the kinetic energy of stars orbiting in the solar neighborhood is invested in circular motion: the vast majority of stars are orbiting in the same direction in the same plane at nearly the same speed. There is some scatter, of course, but radial motions due to orbital eccentricities represent a small portion of the kinetic energy budget. As stars go round and round, they also bob up and down, oscillating perpendicular to the plane of the disk. The energy invested in these vertical motions is also small, which is why the disk of the Milky Way is thin.

View of the Milky Way in the infrared provided by the COBE satellite. The dust lanes that afflict optical light are less severe at these wavelengths, revealing that the stellar disk of the Milky Way is thin but for the peanut-shaped bulge/bar at the center.

Knowing the surface density profile of the Milky Way disk, we can predict the vertical motions. In the context of dark matter, most of the restoring force that keeps stars near the central plane is provided by the stars themselves – the dark matter halo is quasi-spherical, and doesn’t contribute much to the restoring force of the disk. In MOND, the stars and gas are all there is. So the prediction is straightforward (if technically fraught) in both paradigms. Here is a comparison of both predictions with data from Bovy & Rix (2013).

The dynamical surface density implied by vertical motions (data from Bovy & Rix 2013). The dark blue line is the prediction of the model surface density described above – assuming Newtonian gravity. The light blue line is the naive prediction of MOND.
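
For reference, the quantity being tested is essentially the dynamical surface density that a thin disk implies for the measured vertical force; schematically,

\[
\Sigma_{\rm dyn}(R) \simeq \frac{\lvert K_z(R,z)\rvert}{2\pi G},
\]

where K_z is the vertical force per unit mass at some height z above the plane. A MONDian boost to the vertical force would inflate Σ_dyn above the actual baryonic surface density.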

Looks great again, right? The dark blue line goes right through the data with zero fitting. The only exception is in the radial range 5.5 to 6.4 kpc, which turns out to be where the stars probing the vertical motion are maximally different from the gas informing the prediction: we’re looking at different Galactic longitudes, right where there is or is not a spiral arm, so perhaps we should get a different answer in this range. Theory gives us the right answer, no muss, no fuss.

Except, hang on – the line that fits is the Newtonian prediction. The prediction of MOND overshoots the data. It gets the shape right, but the naive MOND prediction is for more vertical motion than we see.

By the “naive” MOND prediction, I mean that we assume that MOND gives the same boost in the vertical direction as it does in the radial direction. This is the obvious first thing to try, but it is not necessarily what happens in all possible MOND theories. Indeed, there are some flavors of modified inertia in which it should not. However, one would expect some boost, and in these data there appears to be none. We get the right answer with just Newton and stars. There’s not even room for much dark matter.

I hope Gaia helps us sort this out. I worry that it will provide so much information that we risk missing the big picture for all the leaves.

This leaves us in a weird predicament. The radial force is extraordinarily well-described by MOND, which reveals details that we could never hope to access if all we know about is dark matter. But if we spot Newtonian gravity this non-Newtonian information from the radial motion, it predicts the correct vertical motion. It’s like we have MOND in one direction and Newton in another.

This makes no sense, so is one of the things that worries me most about MOND. It is not encouraging for dark matter either – we don’t get to spot ourselves MOND in the radial direction then pretend that dark matter did it. At present, it feels like we are up the proverbial creek without a paddle.

Can’t be explained by science!

This clickbait title is inspired by the clickbait title of a recent story about high redshift galaxies observed by JWST. To speak in the same vernacular:

LOL!

What they mean, as I’ve discussed many times here, is that it is difficult to explain these observations in LCDM. LCDM does not encompass all of science. Science* predicted exactly this.

This story is one variation on the work of Labbe et al. that has been making the rounds since it appeared in Nature in late February. The concern is that these high redshift galaxies are big and bright. They got too big too soon.

Six high redshift galaxies from the JWST CEERS survey, as reported by Labbe et al. (2023). Not much to look at, but bear in mind that these objects are pushing the edge of the observable universe. By that standard, they are both bright and disarmingly obvious.

The work of Labbe et al. was one of the works informing the first concerns to emerge from JWST. Concerns were also raised about the credibility of those data. Are these galaxies really as massive as claimed, and at such high redshift? Let’s compare before and after publication:

Stellar masses and redshifts of galaxies from Labbe et al. The pink squares are the initial estimates that appeared in their first preprint in July 2022. The black squares with error bars are from the version published in February 2023. The shaded regions represent where galaxies are too massive too early for LCDM. The lighter region is where very few galaxies were expected to exist; the darker region is a hard no.

The results here are mixed. On the one hand, we were right to be concerned about the initial analysis. This was based in part on a ground-based calibration of the telescope before it was launched. That’s not the same as performance on the sky, which is usually a bit worse than in the lab. JWST breaks that mold, as it is actually performing better than expected. That means the bright-looking galaxies aren’t quite as intrinsically bright as was initially thought.

The correct calibration reduces both the masses and the redshifts of these galaxies. The change isn’t subtle: galaxies are less massive (the mass scale is logarithmic!) and at lower redshift than initially thought. Amusingly, only one galaxy is above redshift 9 when the early talking point was big galaxies at z = 10. (There are other credible candidates for that.) Nevertheless, the objects are clearly there, and bright (i.e., massive). They are also early. We like to obsess about redshift, but there is an inverse relation between redshift and time, so there is not much difference in clock time between z = 7 and 10. Redshift 10 is just under 500 million years after the big bang; redshift 7 just under 750 million years. Those are both in the first billion years out of a current age of over thirteen billion years. The universe was still in its infancy for both.
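
The clock-time statement is easy to check with a standard cosmology calculator, for instance with astropy (assuming Planck 2018 parameters):

```python
from astropy.cosmology import Planck18

# Age of the universe at each redshift; expect roughly 750 and 470 Myr
for z in (7, 10):
    print(z, Planck18.age(z).to('Myr'))
```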

Regardless of your perspective on cosmic time scales, the observed galaxies remain well into LCDM’s danger zone, even with the revised calibration. They are no longer fully in the no-go zone, so I’m sure we’ll see lots of papers explaining how the danger zone isn’t so dangerous after all, and that we should have expected it all along. That’s why it matters more what we predict before an observation than after the answer is known.


*I emphasize science here because one of the reactions I get when I point out that this was predicted is some variation on “That doesn’t count! [because I don’t understand the way it was done.]” And yet, the predictions made and published in advance of the observations keep coming true. It’s almost as if there might be something to this so-called scientific method.

On the one hand, I understand the visceral negative reaction. It is the same reaction I had when MOND first reared its ugly head in my own data for low surface brightness galaxies. This is apparently a psychological phase through which we must pass. On the other hand, the community seems stuck in this rut: it is high time to get past it. I’ve been trying to educate a reluctant audience for over a quarter century now. I know how it pains them because I shared that pain. I got over it. If you’re a scientist still struggling to do so, that’s on you.

There are some things we have to figure out for ourselves. If you don’t believe me, fine, but then get on with doing it yourself instead of burying your head in the sand. The first thing you have to do is give MOND a chance. When I allowed that possibility, I suddenly found myself working less hard than when I was desperately trying to save dark matter. If you come to the problem sure MOND is wrong+, you’ll always get the answer you want.

+I’ve been meaning to write a post (again) about the very real problems MOND suffers in clusters of galaxies. This is an important concern. It is also just one of hundreds of things to consider in the balance. We seem willing to give LCDM infinite mulligans while any problem MOND encounters is immediately seen as fatal. If we hold them to the same standard, both are falsified. If all we care about is explanatory power, LCDM always has that covered. If we care more about successful a priori predictions, MOND is less falsified than LCDM.

There is an important debate to be had on these issues, but we’re not having it. Instead, I frequently encounter people whose first response to any mention of MOND is to cite the bullet cluster in order to shut down discussion. They are unwilling to accept that there is a debate to be had, and are inevitably surprised to learn that LCDM has trouble explaining the bullet cluster too, let alone other clusters. It’s almost as if they are just looking for an excuse to not have to engage in serious thought that might challenge their belief system.

Ask and receive

I want to start by thanking those of you who have contributed to maintaining this site. This is not a money making venture, but it does help offset the cost of operations.

The title is not related to this, but rather to a flood of papers addressing the questions posed in recent posts. I was asking last time “take it where?” because it is hard to know what cosmology under UT will look like. In particular, how does structure formation work? We need a relativistic theory to progress further than we already have.

There are some papers that partially address this question. Very recently, there have been a whole slew of them. That’s good! It is also a bit overwhelming – I cannot keep up! Here I note a few recent papers that touch on structure formation in MOND. This is an incomplete list, and I haven’t had the opportunity to absorb much of it.

First, there is a paper by Milgrom with his relativistic BIMOND theory. It shows some possibility of subtle departures from FLRW along the lines of what I was describing with UT. Intriguingly, it explicitly shows that the assumptions we made to address structure formation with plain MOND should indeed hold. This is important because a frequent excuse employed to avoid acknowledging MOND’s predictions is that they don’t count if there is no relativistic theory. This is more a form of solution aversion than a serious scientific complaint, but people sure lean hard into it. So go read Milgrom’s papers.

Another paper I was looking forward to but didn’t know was in the offing is a rather general treatment of structure formation in relativistic extensions of MOND. There does seem to be some promise for assessing what could work in theories like AeST, and how it relates to earlier work. As a general treatment, there are a lot of options to sort through. Doing so will take a lot of effort by a lot of people over a considerable span of time.

There is also work on gravitational waves, and a variation dubbed a khronometric theory. I, well, I know what both of them are talking about to some extent, and yet some of what they say is presently incomprehensible to me. Clearly I have a lot still to learn. That’s a good problem to have.

I have been thinking for a while now that what we need is a period of a theoretical wild west. People need to try ideas, work through their consequences, and see what works and what does not. Ultimately, most ideas will fail, as there can only be one correct depiction of reality (I sure hope). It will take a lot of work and angst and bickering before we get there: this is perhaps only the beginning of what has already been a long journey for those of us who have been paying attention.

New and stirring things are belittled because if they are not belittled, the humiliating question arises, ‘Why then are you not taking part in them?’

H. G. Wells

Take it where?

I had written most of the post below the line before an exchange with a senior colleague who accused me of asking us to abandon General Relativity (GR). Anyone who read the last post knows that this is the opposite of true. So how does this happen?

Much of the field is mired in bad ideas that seemed like good ideas in the 1980s. There has been some progress, but the idea that MOND is an abandonment of GR I recognize as a misconception from that time. It arose because the initial MOND hypothesis suggested modifying the law of inertia without showing a clear path to how this might be consistent with GR. GR was built on the Equivalence Principle (EP), the equivalence¹ of gravitational charge with inertial mass. The original MOND hypothesis directly contradicted that, so it was a fair concern in 1983. It was not by 1984². I was still an undergraduate then, so I don’t know the sociology, but I get the impression that most of the community wrote MOND off at this point and never gave it further thought.

I guess this is why I still encounter people with this attitude, that someone is trying to rob them of GR. It feels like we’re always starting at square one, like there has been zero progress in forty years. I hope it isn’t that bad, but I admit my patience is wearing thin.

I’m trying to help you. Don’t waste your entire career chasing phantoms.

What MOND does ask us to abandon is the Strong Equivalence Principle. Not the Weak EP, nor even the Einstein EP. Just the Strong EP. That’s a much more limited ask than abandoning all of GR. Indeed, all flavors of EP are subject to experimental test. The Weak EP has been repeatedly validated, but there is nothing about MOND that implies platinum would fall differently from titanium. Experimental tests of the Strong EP are less favorable.

I understand that MOND seems impossible. It also keeps having its predictions come true. This combination is what makes it important. The history of science is chock full of ideas that were initially rejected as impossible or absurd, going all the way back to heliocentrism. The greater the cognitive dissonance, the more important the result.


Continuing the previous discussion of UT, where do we go from here? If we accept that maybe we have all these problems in cosmology because we’re piling on auxiliary hypotheses to continue to be able to approximate UT with FLRW, what now?

I don’t know.

It’s hard to accept that we don’t understand something we thought we understood. Scientists hate revisiting issues that seem settled. Feels like a waste of time. It also feels like a waste of time continuing to add epicycles to a zombie theory, be it LCDM or MOND or the phoenix universe or tired light or whatever fantasy reality you favor. So, painful as it may be, one has to find a little humility to step back and take account of what we know empirically independent of the interpretive veneer of theory.

As I’ve said before, I think we do know that the universe is expanding and passed through an early hot phase that bequeathed us the primordial abundances of the light elements (BBN) and the relic radiation field that we observe as the cosmic microwave background (CMB). There’s a lot more to it than that, and I’m not going to attempt to recite it all here.

Still, to give one pertinent example, BBN only works if the expansion rate is as expected during the epoch of radiation domination. So whatever is going on has to converge to that early on. This is hardly surprising for UT since it was stipulated to contain GR in the relevant limit, but we don’t actually know how it does so until we work out what UT is – a tall order that we can’t expect to accomplish overnight, or even over the course of many decades without a critical mass of scientists thinking about it (and not being vilified by other scientists for doing so).

Another example is that the cosmological principle – that the universe is homogeneous and isotropic – is observed to be true in the CMB. The temperature is the same all over the sky to one part in 100,000. That’s isotropy. The temperature is tightly coupled to the density, so if the temperature is the same everywhere, so is the density. That’s homogeneity. So both of the assumptions made by the cosmological principle are corroborated by observations of the CMB.

The cosmological principle is extremely useful for solving the equations of GR as applied to the whole universe. If the universe has a uniform density on average, then the solution is straightforward (though it is rather tedious to work through to the Friedmann equation). If the universe is not homogeneous and isotropic, then it becomes a nightmare to solve the equations. One needs to know where everything was for all of time.
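
To make that concrete: with homogeneity and isotropy, Einstein’s equations collapse to the Friedmann equation for a single function, the scale factor a(t),

\[
\left(\frac{\dot a}{a}\right)^2 = \frac{8\pi G}{3}\rho - \frac{k c^2}{a^2} + \frac{\Lambda c^2}{3},
\]

an ordinary differential equation, rather than the intractable problem of tracking an inhomogeneous metric everywhere for all time.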

Starting from the uniform condition of the CMB, it is straightforward to show that the assumption of homogeneity and isotropy should persist on large scales up to the present day. “Small” things like galaxies go nonlinear and collapse, but huge volumes containing billions of galaxies should remain in the linear regime and these small-scale variations average out. One cubic Gigaparsec will have the same average density as the next as the next, so the cosmological principle continues to hold today.

Anyone spot the rub? I said homogeneity and isotropy should persist. This statement assumes GR. Perhaps it doesn’t hold in UT?

This aspect of cosmology is so deeply embedded in everything that we do in the field that it was only recently that I realized it might not hold absolutely – and I’ve been actively contemplating such a possibility for a long time. Shouldn’t have taken me so long. Felten (1984) realized right away that a MONDian universe would depart from isotropy by late times. I read that paper long ago but didn’t grasp the significance of that statement. I did absorb that in the absence of a cosmological constant (which no one believed in at the time), the universe would inevitably recollapse, regardless of what the density was. This seems like an elegant solution to the flatness/coincidence problem that obsessed cosmologists at the time. There is no special value of the mass density that provides an over/under line demarcating eternal expansion from eventual recollapse, so there is no coincidence problem. All naive MOND cosmologies share the same ultimate fate, so it doesn’t matter what we observe for the mass density.

MOND departs from isotropy for the same reason it forms structure fast: it is inherently non-linear. As well as predicting that big galaxies would form by z=10, Sanders (1998) correctly anticipated the size of the largest structures collapsing today (things like the local supercluster Laniakea) and the scale of homogeneity (a few hundred Mpc if there is a cosmological constant). Pretty much everyone who looked into it came to similar conclusions.

But MOND and cosmology, as we know it in the absence of UT, are incompatible. Where LCDM encompasses both cosmology and the dynamics of bound systems (dark matter halos³), MOND addresses the dynamics of low acceleration systems (the most common examples being individual galaxies) but says nothing about cosmology. So how do we proceed?

For starters, we have to admit our ignorance. From there, one has to assume some expanding background – that much is well established – and ask what happens to particles responding to a MONDian force-law in this background, starting from the very nearly uniform initial condition indicated by the CMB. From that simple starting point, it turns out one can get a long way without knowing the details of the cosmic expansion history or the metric that so obsess cosmologists. These are interesting things, to be sure, but they are aspects of UT we don’t know and can manage without to some finite extent.

For one, the thermal history of the universe is pretty much the same with or without dark matter, with or without a cosmological constant. Without dark matter, structure can’t get going until after thermal decoupling (when the matter is free to diverge thermally from the temperature of the background radiation). After that happens, around z = 200, the baryons suddenly find themselves in the low acceleration regime, newly free to respond to the nonlinear force of MOND, and structure starts forming fast, with the consequences previously elaborated.

But what about the expansion history? The geometry? The big questions of cosmology?

Again, I don’t know. MOND is a dynamical theory that extends Newton. It doesn’t address these questions. Hence the need for UT.

I’ve encountered people who refuse to acknowledge⁴ that MOND gets predictions like z=10 galaxies right without a proper theory for cosmology. That attitude puts the cart before the horse. One doesn’t look for UT unless well motivated. That one is able to correctly predict 25 years in advance something that comes as a huge surprise to cosmologists today is the motivation. Indeed, the degree of surprise and the longevity of the prediction amplify the motivation: if this doesn’t get your attention, what possibly could?

There is no guarantee that our first attempt at UT (or our second or third or fourth) will work out. It is possible that in the search for UT, one comes up with a theory that fails to do what was successfully predicted by the more primitive theory. That just lets you know you’ve taken a wrong turn. It does not mean that a correct UT doesn’t exist, or that the initial prediction was some impossible fluke.

One candidate theory for UT is bimetric MOND. This appears to justify the assumptions made by Sanders’s early work, and provide a basis for a relativistic theory that leads to rapid structure formation. Whether it can also fit the acoustic power spectrum of the CMB as well as LCDM and AeST has yet to be seen. These things take time and effort. What they really need is a critical mass of people working on the problem – a community that enjoys the support of other scientists and funding institutions like NSF. Until we have that⁵, progress will remain grudgingly slow.


¹The equivalence of gravitational charge and inertial mass means that the m in F = GMm/d² is identically the same as the m in F = ma. Modified gravity changes the former; modified inertia the latter.

²Bekenstein & Milgrom (1984) showed how a modification of Newtonian gravity could avoid the non-conservation issues suffered by the original hypothesis of modified inertia. They also outlined a path towards a generally covariant theory that Bekenstein pursued for the rest of his life. That he never managed to obtain a completely satisfactory version is often cited as evidence that it can’t be done, since he was widely acknowledged as one of the smartest people in the field. One wonders why he persisted if, as these detractors would have us believe, the smart thing to do was not even to try.

³The data for galaxies do not look like the dark matter halos predicted by LCDM.

⁴I have entirely lost patience with this attitude. If a phenomenon is correctly predicted in advance in the literature, we are obliged as scientists to take it seriously+. Pretending that it is not meaningful in the absence of UT is just an avoidance strategy: an excuse to ignore inconvenient facts.

+I’ve heard eminent scientists describe MOND’s predictive ability as “magic.” This also seems like an avoidance strategy. I, for one, do not believe in magic. That it works as well as it does – that it works at all – must be telling us something about the natural world, not the supernatural.

⁵There does exist a large and active community of astroparticle physicists trying to come up with theories for what the dark matter could be. That’s good: that’s what needs to happen, and we should exhaust all possibilities. We should do the same for new dynamical theories.

Imagine if you can

Imagine if you are able that General Relativity (GR) is correct yet incomplete. Just as GR contains Newtonian gravity in the appropriate limit, imagine that GR itself is a limit of some still more general theory that we don’t yet know about. Let’s call it Underlying Theory (UT) for short. This is essentially the working hypothesis of quantum gravity, but here I want to consider a more general case in which the effects of UT are not limited to the tiny netherworld of the Planck scale. Perhaps UT has observable consequences on very large scales, or a scale that is not length-based at all. What would that look like, given that we only know GR?

For starters, it might mean that the conventional Friedmann-Robertson-Walker (FRW) cosmology derived from GR is only a first approximation to the cosmology of the unknown deeper theory UT. In the first observational tests, FRW will look great, as the two are practically indistinguishable. As the data improve though, awkward problems might begin to crop up. What and where we don’t know, so our first inclination will not be to infer the existence of UT, but rather to patch up FRW with auxiliary hypotheses. Since the working presumption here is that GR is a correct limit, FRW will continue to be a good approximation, and early departures will seem modest: they would not be interpreted as signs of UT.

What do we expect for cosmology anyway? A theory is only as good as its stated predictions. After Hubble established in the 1920s that galaxies external to the Milky Way existed and that the universe was expanding, it became clear that this was entirely natural in GR. Indeed, what was not natural was a static universe, the desire for which had led Einstein to introduce the cosmological constant (his “greatest blunder”).

A wide variety of geometries and expansion histories are possible with FRW. But there is one obvious case that stands out, that of Einstein-de Sitter (EdS, 1932). EdS has a matter density Ωm exactly equal to unity, balancing on the divide between a universe that expands forever (Ωm < 1) and one that eventually recollapses (Ωm > 1). The particular case Ωm = 1 is the only natural scale in the theory. It is also the only FRW model with a flat geometry, in the sense that initially parallel beams of light remain parallel indefinitely. These properties make it special in a way that obsessed cosmologists for many decades. (In retrospect, this obsession has the same flavor as the obsession the Ancients had with heavenly motions being perfect circles*.) A natural cosmology would therefore be one in which Ωm = 1 in normal matter (baryons).

By the 1970s, it was clear that there was no way you could have Ωm = 1 in baryons. There just wasn’t enough normal matter, either observed directly, or allowed by Big Bang Nucleosynthesis. Despite the appeal of Ωm = 1, it looked like we lived in an open universe with Ωm < 1.

This did not sit well with many theorists, who obsessed over the flatness problem. The mass density parameter evolves if it is not identically equal to one, so it was really strange that we should live anywhere close to Ωm = 1, even Ωm = 0.1, if the universe was going to spend eternity asymptoting to Ωm → 0. It was a compelling argument, enough to make most of us accept (in the early 1980s) the Inflationary model of the early universe, as Inflation gives a natural mechanism to drive Ωm → 1. The bulk of this mass could not be normal matter, but by then flat rotation curves had been discovered, along with a ton of other evidence that a lot of matter was dark. A third element that came in around the same time was another compelling idea, supersymmetry, which gave a natural mechanism by which the unseen mass could be non-baryonic. The confluence of these revelations gave us the standard cold dark matter (SCDM) cosmological model. It was EdS with Ωm = 1 mostly in dark matter. We didn’t know what the dark matter was, but we had a good idea (WIMPs), and it just seemed like a matter of tracking them down.
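
To see why the argument felt compelling: in a matter-dominated FRW universe (a sketch neglecting radiation and Λ), the deviation of Ω from unity grows with the scale factor,

\[
\Omega^{-1}(a) - 1 = \left(\Omega_0^{-1} - 1\right) a ,
\]

so for Ωm ≈ 0.1 today, Ω had to be tuned to within about one percent of unity at recombination (a ≈ 10⁻³), and ever more finely at earlier epochs.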

SCDM was absolutely Known for about a decade, pushing two depending on how you count. We were very reluctant to give it up. But over the course of the 1990s, it became clear [again] that Ωm < 1. What was different was a willingness, even a desperation, to accept and rehabilitate Einstein’s cosmological constant. This seemed to solve all cosmological problems, providing a viable concordance cosmology that satisfied all then-available data, salvaged Inflation and a flat geometry (Ωm + ΩΛ = 1, albeit at the expense of the coincidence problem, which is worse in LCDM than it is in open models), and made predictions that came true for the accelerated expansion rate and the location of the first peak of the acoustic power spectrum. This was a major revelation that led to Nobel prizes and still resonates today in the form of papers trying to suss out the nature of this so-called dark energy.

What if the issue is even more fundamental? Taking a long view, subsuming many essential details, we’ve gone from a natural cosmology (EdS) to a less natural one (an open universe with a low density in baryons) to SCDM (EdS with lots of non-baryonic dark matter) to LCDM. Maybe these are just successive approximations we’ve been obliged to make in order for FLRW** to mimic UT? How would we know?

One clue might be if the concordance region closed. Here is a comparison of a compilation of constraints assembled by students in my graduate cosmology course in 2002 (plus 2003 WMAP) with 2018 Planck parameters:

The shaded regions were excluded by the sum of the data available in 2003. The question I wondered then was whether the small remaining white space was indeed the correct answer, or merely the least improbable region left before the whole picture was ruled out. Had we painted ourselves into a corner?

If we take these results and the more recent Planck fits at face value, yes: nothing is left, the window has closed. However, other things change over time as well. For example, I’d grant a higher upper limit to Ωm than is illustrated above. The rotation curve line represents an upper limit that no longer pertains if dark matter halos are greatly modified by feedback. We were trying to avoid invoking that deus ex machina then, but there’s no helping it now.

Still, you can see in this diagram what we now call the Hubble tension. To solve that within the conventional FLRW framework, we have to come up with some new free parameter. There are lots of ideas that invoke new physics.

Maybe the new physics is UT? Maybe we have to keep tweaking FLRW because cosmology has reached a precision such that FLRW is no longer completely adequate as an approximation to UT? But if we are willing to add new parameters via “new physics” made up to address each new problem (dark matter, dark energy, something new and extra for the Hubble tension) so we can keep tweaking it indefinitely, how would we ever recognize that all we’re doing is approximating UT? If only there were different data that suggested new physics in an independent way.

Attitude matters. If we think both LCDM and the existence of dark matter are proven beyond a reasonable doubt, as clearly many physicists do, then any problem that arises is just a bit of trivia to sort out. Despite the current attention being given to the Hubble tension, I’d wager that most of the people not writing papers about it are presuming that the problem will go away: traditional measures of the Hubble constant will converge towards the Planck value. That might happen (or appear to happen through the magic of confirmation bias), and I would expect that myself if I hadn’t worked on H0 directly. It’s a lot easier to dismiss such things when you haven’t been involved enough to know how hard they are to dismiss***.

That last sentence pretty much sums up the community’s attitude towards MOND. That led me to pose the question of the year earlier. I have not heard any answers, just excuses to not have to answer. Still, these issues are presumably not unrelated. That MOND has so many predictions – even in cosmology – come true is itself an indication of UT. From that perspective, it is not surprising that we have to keep tweaking FLRW. Indeed, from this perspective, parameters like ΩCDM are chimeras lacking in physical meaning. They’re just whatever they need to be to fit whatever subset of the data is under consideration. That independent observations pretty much point to the same value is far more compelling evidence in favor of LCDM than the accuracy of a fit to any single piece of information (like the CMB) where ΩCDM can be tuned to fit pretty much any plausible power spectrum. But is the stuff real? I make no apologies for holding science to a higher standard than those who consider a fit to the CMB data to be a detection.

It has taken a long time for cosmology to get this far. One should take a comparably long view of these developments, but we generally do not. Dark matter was already received wisdom when I was new to the field, unquestionably so. Dark energy was new in the ’90s but has long since been established as received wisdom. So if we now have to tweak it a little to fix this seemingly tiny tension in the Hubble constant, that seems incremental, not threatening to the pre-existing received wisdom. From the longer view, it looks like just another derailment in an excruciatingly slow-moving train wreck.

So I ask again: what would falsify FLRW cosmology? How do we know when to think outside this box, and not just garnish its edges?


*The obsession with circular motion continued through Copernicus, who placed the sun at the center of motion rather than the earth, but continued to employ epicycles. It wasn’t until over a half century later that Kepler finally broke with this particular obsession. In retrospect, we recognize circular motion as a very special case of the many possibilities available with elliptical orbits, just as EdS is only one possible cosmology with a flat geometry once we admit the possibility of a cosmological constant.

**FLRW = Friedmann-Lemaître-Robertson-Walker. I intentionally excluded Lemaître from the early historical discussion because he (and the cosmological constant) were mostly excluded from considerations at that time. Mostly.

Someone with a longer memory than my own is Jim Peebles. I happened to bump into him while walking across campus while in Princeton for a meeting in early 2019. (He was finally awarded a Nobel prize later that year; it should have been in association with the original discovery of the CMB). On that occasion, he (unprompted) noted an analogy between the negative attitude towards the cosmological constant that was prevalent in the community pre-1990s to that for MOND now. NOT that he was in any way endorsing MOND; he was just noting that the sociology had the same texture, and could conceivably change on a similar timescale.

***Note that I am not dismissing the Planck results or any other data; I am suggesting the opposite: the data have become so good that it is impossible to continue to approximate UT with tweaks to FLRW (hence “new physics”). I’m additionally pointing out that important new physics has been staring us in the face for a long time.

Early Galaxy Formation and the Hubble Constant Tension

Cosmology is challenged at present by two apparently unrelated problems: the apparent formation of large galaxies at unexpectedly high redshift observed by JWST, and the tension between the value of the Hubble constant obtained by traditional methods and that found in multi-parameter fits to the acoustic power spectrum of the cosmic microwave background (CMB).

Maybe they’re not unrelated?

The Hubble Tension

Early results in precision cosmology from WMAP obtained estimates of the Hubble constant h = 0.73 ± 0.03 [I adopt the convention h = H0/(100 km s⁻¹ Mpc⁻¹) so as not to have to write the units every time.] This was in good agreement with contemporaneous local estimates from the Hubble Space Telescope Key Project to Measure the Hubble Constant: h = 0.72 ± 0.08. This is what Hubble was built to do. It did it, and the vast majority of us were satisfied* at the time that it had succeeded in doing so.

Since that time, a tension has emerged as accuracy has improved. Precise local measures** give h = 0.73 ± 0.01 while fits to the Planck CMB data give h = 0.6736 ± 0.0054. This is around the 5 sigma threshold for believing there is a real difference. Our own results exclude h < 0.705 at 95% confidence. A value as low as 67 is right out.
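For concreteness, here is the arithmetic behind that 5 sigma statement – a minimal sketch in Python, using only the numbers quoted above and combining the two uncertainties in quadrature:

```python
import math

# Values quoted above: local distance ladder vs. the Planck CMB fit
h_local, sig_local = 0.73, 0.01
h_cmb, sig_cmb = 0.6736, 0.0054

# Difference in units of the quadrature-combined uncertainty
tension = (h_local - h_cmb) / math.hypot(sig_local, sig_cmb)
print(f"tension = {tension:.1f} sigma")  # ~5.0
```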

Given the history of the distance scale, it is tempting to suppose that local measures are at fault. This seems to be the prevailing presumption, and it is just a matter of figuring out what went wrong this time. Of course, things can go wrong with the CMB too, so this way of thinking raises the ever-present danger of confirmation bias, a perennial scourge in cosmology. Looking at the history of H0 determinations, it is not local estimates of H0 but rather those from CMB fits that have diverged from the concordance region.

The cosmic mass density parameter and Hubble constant. These covary in CMB fits along the line Ωmh3 = 0.09633 ± 0.00029 (red). Also shown are best-fit values from CMB experiments over time, as labeled (WMAP3 is the earliest shown; Planck2018 the most recent). These all fall along the line of constant Ωmh3, but have diverged over time from concordance with local data. There are many examples of local constraints; for illustration I show examples from Cole et al. (2005), Mohayaee & Tully (2005), Tully et al. (2016), and Riess et al. (2001). The divergence has occurred as finer angular scales have been observed in the CMB power spectrum and correspondingly higher multipoles ℓ have been incorporated into fits.


The divergence between local and CMB-determined H0 has occurred as finer angular scales have been observed in the CMB power spectrum and correspondingly higher multipoles ℓ have been incorporated into fits. That suggests that the issue resides in the high-ℓ part of the CMB data*** rather than in some systematic in the local determinations. Indeed, if one restricts the analysis of the Planck (“TT”) data to ℓ < 801, one obtains h = 0.70 ± 0.02 (see their Fig. 22), consistent with earlier CMB estimates as well as with local ones.

Photons must traverse the entire universe to reach us from the surface of last scattering. Along the way, they are subject to 21 cm absorption by neutral hydrogen, Thomson scattering by free electrons after reionization, blue and redshifting from traversing gravitational potentials in an expanding universe (the late ISW effect, aka the Rees-Sciama effect), and deflection by gravitational lensing. Lensing is a subtle effect that blurs the surface of last scattering and adds a source of fluctuations not intrinsic to it. The amount of lensing can be calculated from the growth rate of structure; anomalously fast galaxy formation would induce extra power at high ℓ.

Early Galaxy Formation

JWST observations evince the early emergence of massive galaxies at z ≈ 10. This came as a great surprise theoretically, but the empirical result extends previous observations that galaxies grew too big too fast. Taking the data at face value, more structure appears to exist in the early universe than anticipated in the standard calculation. This would cause excess lensing and an anomalous source of power on fine scales. This would be a real, physical anomaly (new physics), not some mistake in the processing of CMB data (which may of course happen, just as with any other sort of data). Here are the Planck data:

Unbinned Planck data with the best-fit power spectrum (red line) and a model (blue line) with h = 0.73 and Ωm adjusted to maintain constant Ωmh3. The ratio of the models is shown at bottom, that with h = 0.67 divided by the model with h = 0.73. The difference is real; h = 0.67 gives the better fit****. The ratio illustrates the subtle need for slightly greater power with increasing ℓ than provided by the model with h = 0.73. Perhaps this high-ℓ power has a contribution from anomalous gravitational lensing that skews the fit and drives the Hubble tension.

If excess lensing by early massive galaxies occurs but goes unrecognized, fits to the CMB data would be subtly skewed. There would be more power at high ℓ than there should be. Fitting this extra power would drive up Ωm and other relevant parameters*****. In response, it would be necessary to reduce h to maintain a constant Ωmh3. This would explain the temporal evolution of the best fit values, so I posit that this effect may be driving the Hubble tension.
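A minimal sketch of that mechanism, using the Ωmh3 = 0.09633 degeneracy line from the figure above (the Ωm values below are illustrative inputs, not fit results):

```python
# CMB fits covary along Omega_m * h**3 = 0.09633 (the red line above)
OMH3 = 0.09633

def h_required(omega_m):
    """h needed to stay on the Omega_m h^3 degeneracy line."""
    return (OMH3 / omega_m) ** (1.0 / 3.0)

for omega_m in (0.26, 0.30, 0.315):  # illustrative values
    print(f"Omega_m = {omega_m:.3f} -> h = {h_required(omega_m):.3f}")
```

Pushing Ωm up from about 0.26 to 0.315 drags h down from roughly 0.72 to 0.67, which is the sense of the drift in the best-fit values described above.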

The early formation of massive galaxies would represent a real, physical anomaly. This is unexpected in ΛCDM but not unanticipated. Sanders (1998) explicitly predicted the formation of massive galaxies by z = 10. Excess gravitational lensing by these early galaxies is a natural consequence of his prediction. Other things follow as well: early reionization, an enhanced ISW/Rees-Sciama effect, and high redshift 21 cm absorption. In short, everything that is puzzling about the early universe from the ΛCDM perspective was anticipated and often explicitly predicted in advance.

The new physics driving the prediction of Sanders (1998) is MOND. This is the same driver of anomalies in galaxy dynamics, and perhaps now also of the Hubble tension. These predictive successes must be telling us something, and highlight the need for a deeper theory. Whether this finally breaks ΛCDM or we find yet another unsatisfactory out is up to others to decide.


*Indeed, the ± 0.08 rather undersells the accuracy of the result. I quote that because the Key Project team gave it as their bottom line. However, if you read the paper, you see statements like h = 0.71 ± 0.02 (random) ± 0.06 (systematic). The first is the statistical error of the experiment, while the latter is an estimate of how badly it might go wrong (e.g., susceptibility to a recalibration of the Cepheid scale). With the benefit of hindsight, we can say now that the Cepheid calibration has not changed that much: they did indeed get it right to something more like ± 0.02 than ± 0.08.

**An intermediate value is given by Freedman (2021): h = 0.698 ± 0.006, which gives the appearance of a tension between Cepheid and TRGB calibrations. However, no such tension is seen between Cepheid and TRGB calibrators of the baryonic Tully-Fisher relation, which gives h = 0.751 ± 0.023. This suggests that the tension is not between the Cepheid and TRGB methods so much as between applications of the TRGB method by different groups.

***I recall being at a conference when the Planck data were fresh where people were visibly puzzled at the divergence of their fit from the local concordance region. It was obvious to everyone that this had come about when the high ℓ data were incorporated. We had no idea why, and people were reluctant to contradict the Authority of the CMB fit, but it didn’t sit right. Since that time, the Planck result has been normalized to the point where I hear its specific determination of cosmic parameters used interchangeably with ΛCDM. And indeed, the best fit is best for good reason; determinations that are in conflict with Planck are either wrong or indicate new physics.

****The sharp eye will also notice a slight offset in the absolute scale. This is fungible with the optical depth due to reionization, which acts as a light fog covering the whole sky: higher optical depth τ depresses the observed amplitude of the CMB. The need to fit the absolute scale as well as the tip in the shape of the power spectrum would explain another temporal evolution in the best-fit CMB parameters, that of declining optical depth from WMAP and early (2013) Planck (τ = 0.09) to 2018 Planck (τ = 0.0544).
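To put a number on that fungibility: at multipoles well above the reionization bump, the observed power is suppressed by the standard factor exp(-2τ), so the quoted change in τ corresponds to a few percent in apparent amplitude. A quick check:

```python
import math

# Observed CMB power at high ell is damped by exp(-2 * tau) after reionization
tau_2013, tau_2018 = 0.09, 0.0544  # values quoted above

ratio = math.exp(-2 * tau_2018) / math.exp(-2 * tau_2013)
print(f"apparent amplitude changes by {100 * (ratio - 1):.1f}%")  # ~+7%
```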

*****The amplitude of the power spectrum σ8 would also be affected. Perhaps unsurprisingly, there is also a tension between local and CMB determinations of this parameter. All parameters must be fit simultaneously, so how it comes out in the wash depends on the details of the history of the nonlinear growth of structure. Such a calculation is beyond the scope of this note. Indeed, I hope someone else takes up the challenge, as I tire of solving all the problems only to have them ignored. Better if everyone else comes to grip with this for themselves.

Let’s just ignore it

Something that Sabine Hossenfelder noted recently on Twitter resonated with me:

This is a very real problem in academia, and I don’t doubt that it is a common feature of many human endeavors. Part of it is just that people don’t know enough to know what they don’t know. That is to say, so much has been written that it can be hard to find the right reference to put any given fever dream promptly to never-ending sleep. However, that’s not the real problem.

The problem is exactly what Sabine says it is. People keep pushing ideas that have been debunked. Why let facts get in the way of a fancy idea?

There are lots of examples of this in my own experience. Indeed, I’ve encountered it so often that I’ve concluded that there is no result so obvious that some bozo won’t conclude exactly the opposite.

I spent a lot of my early career working in the context of non-baryonic dark matter. For a long time, I was enthusiastic about it, but I’ve become skeptical. I continue to work on it, just in case. But I soured on it for good reasons, reasons I have explained repeatedly in exhaustive detail. Some people appreciate this level of detail, but most do not. This is the sort of thing Sabine is talking about. People don’t engage seriously with these problems.

Maybe I’m wrong to be skeptical of dark matter? I could accept that – one cannot investigate this wide universe in which we find ourselves without sometimes coming to the wrong conclusions. Has it been demonstrated that the concerns I raised were wrong? No. Rather than grapple with the problems raised, people have simply ignored them – or worse, assert that they aren’t problems at all without demonstrating anything of the sort. Heck, I’ve even seen people take lists of problems and spin them as virtues.

To give one very quick example, consider the physical interpretation of the Tully-Fisher relation. This has varied over time, and there are many flavors. But usually it is supposed that the luminosity is set by the stellar mass, and the rotation speed by the dark matter mass. If we (reasonably) presume that the stellar mass is proportional to the dark mass, voilà – Tully-Fisher. This all sounds perfectly plausible, so most people don’t think any harder about it. No problem at all.

Well, one small problem: this explanation does not work. The velocity is not uniquely set by the dark matter halo. In the range of radii accessible to measurement, the contribution of the baryonic mass is non-negligible in high surface brightness galaxies. If that sounds a little technical, it is. One has to cope at this level to play in the sandbox.

Once we appreciate that we cannot just ignore the baryons, explaining Tully-Fisher becomes a lot harder – in particular, the absence of surface brightness residuals. Higher surface brightness galaxies should rotate faster at a given mass, but they don’t. The easy way to fix this is to suppose that the baryonic mass is indeed negligible, but this leads straight to a contradiction with the diversity of rotation curves following from the central density relation. The kinematics know about the shape of the baryonic mass distribution, not just its total. Solving all these problems simultaneously becomes a game of cosmic whack-a-mole: fixing one aspect of the problem makes another worse. All too often, people are so focused on one aspect of a problem that they don’t realize that their fix comes at the expense of something else. It’s like knocking a hole in one side of a boat to obtain material to patch a hole in the other side of the same boat.

Except they are sure. Problem solved! is what people want to hear, so that’s what they hear. Nobody bothers to double check whether the “right” answer is indeed right when it agrees with their preconceptions. And there is always someone willing to make that assertion.

What we have here is a failure to communicate

Kuhn noted that as paradigms reach their breaking point, there is a divergence of opinions between scientists about what the important evidence is, or what even counts as evidence. This has come to pass in the debate over whether dark matter or modified gravity is a better interpretation of the acceleration discrepancy problem. It sometimes feels like we’re speaking about different topics in a different language. That’s why I split the diagram version of the dark matter tree as I did:

Evidence indicating acceleration discrepancies in the universe and various flavors of hypothesized solutions.

Astroparticle physicists seem to be well-informed about the cosmological evidence (top) and favor solutions in the particle sector (left). As more of these people entered the field in the ’00s and began attending conferences where we overlapped, I recognized gaping holes in their knowledge about the dynamical evidence (bottom) and related hypotheses (right). This was part of my motivation to develop an evidence-based course1 on dark matter, to try to fill in the gaps in essential knowledge that were obviously being missed in the typical graduate physics curriculum. Though popular on my campus, not everyone in the field has the opportunity to take this course. It seems that the chasm has continued to grow, though not for lack of attempts at communication.

Part of the problem is a phase difference: many of the questions that concern astroparticle physicists (structure formation is a big one) were addressed 20 years ago in MOND. There is also a difference in texture: dark matter rarely predicts things but always explains them, even if it doesn’t. MOND often nails some predictions but leaves other things unexplained – just a complete blank. So they’re asking questions that are either way behind the curve or as-yet unanswerable. Progress rarely follows a smooth progression in linear time.

I have become aware of a common construction among many advocates of dark matter to criticize “MOND people.” First, I don’t know what a “MOND person” is. I am a scientist who works on a number of topics, among them both dark matter and MOND. I imagine the latter makes me a “MOND person,” though I still don’t really know what that means. It seems to be a generic straw man. Users of this term consistently paint such a luridly ridiculous picture of what MOND people do or do not do that I don’t recognize it as a legitimate depiction of myself or of any of the people I’ve met who work on MOND. I am left to wonder, who are these “MOND people”? They sound very bad. Are there any here in the room with us?

I am under no illusions as to what these people likely say when I am out of earshot. Someone recently pointed me to a comment on Peter Woit’s blog that I would not have come across on my own. I am specifically named. Here is a screenshot:

From a reply to a post of Peter Woit on December 8, 2022. I omit the part about right-handed neutrinos as irrelevant to the discussion here.

This concisely pinpoints where the field2 is at, both right and wrong. Let’s break it down.

let me just remind everyone that the primary reason to believe in the phenomenon of cold dark matter is the very high precision with which we measure the CMB power spectrum, especially modes beyond the second acoustic peak

This is correct, but it is not the original reason to believe in CDM. The history of the subject matters, as we already believed in CDM quite firmly before any modes of the acoustic power spectrum of the CMB were measured. The original reasons to believe in cold dark matter were (1) that the measured, gravitating mass density exceeds the mass density of baryons as indicated by BBN, so there is stuff out there with mass that is not normal matter, and (2) large scale structure has grown by a factor of 10^5 from the very smooth initial condition indicated initially by the nondetection of fluctuations in the CMB, while normal matter (with normal gravity) can only get us a factor of 10^3 (there were upper limits excluding this before there was a detection). Structure formation additionally imposes the requirement that whatever the dark matter is moves slowly (hence “cold”) and does not interact via electromagnetism in order to evade making too big an impact on the fluctuations in the CMB (hence the need, again, for something non-baryonic).

When cold dark matter became accepted as the dominant paradigm, fluctuations in the CMB had not yet been measured. The absence of observable fluctuations at the larger level required for purely baryonic structure formation sufficed to indicate the need for CDM. This, together with Ωm > Ωb from BBN (which seemed the better of the two arguments at the time), was enough to convince me, along with most everyone else who was interested in the problem, that the answer had3 to be CDM.

This all happened before the first fluctuations were observed by COBE in 1992. By that time, we already believed firmly in CDM. The COBE observations caused initial confusion and great consternation – it was too much! We actually had a prediction from then-standard SCDM, and it had predicted an even lower level of fluctuations than what COBE observed. This did not cause us (including me) to doubt CDM (though there was one suggestion that it might be due to self-interacting dark matter); it seemed a mere puzzle to accommodate, not an anomaly. And accommodate it we did: the power in the large scale fluctuations observed by COBE is part of how we got LCDM, albeit only a modest part. A lot of younger scientists seem to have been taught that the power spectrum is some incredibly successful prediction of CDM when in fact it has surprised us at nearly every turn.

As I’ve related here before, it wasn’t until the end of the century that CMB observations became precise enough to provide a test that might distinguish between CDM and MOND. That test initially came out in favor of MOND – or at least in favor of the absence of dark matter: No-CDM, which I had suggested as a proxy for MOND. Cosmologists and dark matter advocates consistently omit this part of the history of the subject.

I had hoped that cosmologists would experience the same surprise and doubt and reevaluation that I had experienced when MOND cropped up in my own data, once it cropped up in theirs. Instead, they went into denial, ignoring the successful prediction of the first-to-second peak amplitude ratio, or, worse, making up stories that it hadn’t happened. Indeed, the amplitude of the second peak was so surprising that the first paper to measure it omitted mention of it entirely. Just didn’t talk about it, let alone admit that “Gee, this crazy prediction came true!” as I had with MOND in LSB galaxies. Consequently, I decided that it was better to spend my time working on topics where progress could be made. This is why most of my work on the CMB predates “modes beyond the second peak” just as our strong belief in CDM also predated that evidence. Indeed, communal belief in CDM was undimmed when the modes defining the second peak were observed, despite the No-CDM proxy for MOND being the only hypothesis to correctly predict it quantitatively a priori.

That said, I agree with clayton’s assessment that

CDM thinks [the second and third peak] should be about the same

That this is the best evidence now is both correct and a much weaker argument than it is made out to be. It sounds really strong, because a formal fit to the CMB data requires a dark matter component at extremely high confidence – something approaching 100 sigma. This analysis assumes that dark matter exists. It does not contemplate that something else might cause the same effect, so all it really does, yet again, is demonstrate that General Relativity cannot explain cosmology when restricted to the material entities we concretely know to exist.

Given the timing, the third peak was not a strong element of my original prediction, as we did not yet have either a first or second peak. We hadn’t yet clearly observed peaks at all, so what I was doing was pretty far-sighted, but I wasn’t thinking that far ahead. However, the natural prediction for the No-CDM picture I was considering was indeed that the third peak should be lower than the second, as I’ve discussed before.

The No-CDM model (blue line) that correctly predicted the amplitude of the second peak fails to predict that of the third. Data from the Planck satellite; model line from McGaugh (2004); figure from McGaugh (2015).

In contrast, in CDM, the acoustic power spectrum of the CMB can do a wide variety of things:

Acoustic power spectra calculated for the CMB for a variety of cosmic parameters. From Dodelson & Hu (2002).

Given the diversity of possibilities illustrated here, there was never any doubt that a model could be fit to the data, provided that oscillations were observed as expected in any of the theories under consideration here. Consequently, I do not find fits to the data, though excellent, to be anywhere near as impressive as commonly portrayed. What does impress me is consistency with independent data.

What impresses me even more are a priori predictions. These are the gold standard of the scientific method. That’s why I worked my younger self’s tail off to make a prediction for the second peak before the data came out. In order to make a clean test, you need to know what both theories predict, so I did this for both LCDM and No-CDM. Here are the peak ratios predicted before there were data to constrain them, together with the data that came after:

The ratio of the first-to-second (left) and second-to-third peak (right) amplitude ratio in LCDM (red) and No-CDM (blue) as predicted by Ostriker & Steinhardt (1995) and McGaugh (1999). Subsequent data as labeled.

The left hand panel shows the predicted amplitude ratio of the first-to-second peak, A1:2. This is the primary quantity that I predicted for both paradigms. There is a clear distinction between the predicted bands. I was not unique in my prediction for LCDM; the same thing can be seen in other contemporaneous models. All contemporaneous models. I was the only one who was not surprised by the data when they came in, as I was the only one who had considered the model that got the prediction right: No-CDM.
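For anyone who wants to repeat this kind of comparison, extracting the ratios from a binned power spectrum takes only a few lines. A minimal sketch (the input array is hypothetical and assumed smooth enough that local maxima are the acoustic peaks; scipy is assumed to be available):

```python
import numpy as np
from scipy.signal import find_peaks

def peak_amplitude_ratios(Dl):
    """Return A1:2 and A2:3 from a binned spectrum D_ell.

    Dl: 1-d array of bandpowers; noisy data should be smoothed first
    so that local maxima correspond to the acoustic peaks.
    """
    idx, _ = find_peaks(Dl, distance=10)  # indices of local maxima
    if len(idx) < 3:
        raise ValueError("need at least three detected peaks")
    p1, p2, p3 = Dl[idx[:3]]
    return p1 / p2, p2 / p3

# Usage with hypothetical binned bandpowers:
# A12, A23 = peak_amplitude_ratios(Dl_binned)
```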

The same No-CDM model fails to correctly predict the second-to-third peak ratio, A2:3. It is, in fact, way off, while LCDM is consistent with A2:3, just as Clayton says. This is a strong argument against No-CDM, because No-CDM makes a clear and unequivocal prediction that it gets wrong. Clayton calls this

a stone-cold, qualitative, crystal clear prediction of CDM

which is true. It is also qualitative, so I call it weak sauce. LCDM could be made to fit a very large range of A2:3, but it had already got A1:2 wrong. We had to adjust the baryon density outside the allowed range in order to make it consistent with the CMB data. The generous upper limit that LCDM might conceivably have predicted in advance of the CMB data was A1:2 < 2.06, which is still clearly less than observed. For the first years of the century, the attitude was that BBN had been close, but not quite right – preference being given to the value needed to fit the CMB. Nowadays, BBN and the CMB are said to be in great concordance, but this is only true if one restricts oneself to deuterium measurements obtained after the “right” answer was known from the CMB. Prior to that, practically all of the measurements for the important isotopes of the light elements – deuterium, helium, and lithium – concurred that the baryon density Ωbh2 < 0.02, with the consensus value being Ωbh2 = 0.0125 ± 0.0005. This is barely half the value subsequently required to fit the CMB (Ωbh2 = 0.0224 ± 0.0001). But what’s a factor of two among cosmologists? (In this case, 4 sigma.)

Taking the data at face value, the original prediction of LCDM was falsified by the second peak. But, no problem, we can move the goal posts, in this case by increasing the baryon density. The successful prediction of the third peak only comes after the goal posts have been moved to accommodate the second peak. Citing only the comparable size of third peak to the second while not acknowledging that the second was too small elides the critical fact that No-CDM got something right, a priori, that LCDM did not. No-CDM failed only after LCDM had already failed. The difference is that I acknowledge its failure while cosmologists elide this inconvenient detail. Perhaps the second peak amplitude is a fluke, but it was a unique prediction that was exactly nailed and remains true in all subsequent data. That’s a pretty remarkable fluke4.

LCDM wins ugly here by virtue of its flexibility. It has greater freedom to fit the data – any of the models in the figure of Dodelson & Hu will do. In contrast, No-CDM is the single blue line in my figure above, and nothing else. Plausible variations in the baryon density make hardly any difference: A1:2 has to have the value that was subsequently observed, and no other. It passed that test with flying colors. It flunked the subsequent test posed by A2:3. For LCDM this isn’t even a test, it is an exercise in fitting the data with a model that has enough parameters5 to do so.

There were a number of years at the beginning of the century during which the No-CDM prediction for the A1:2 was repeatedly confirmed by multiple independent experiments, but before the third peak was convincingly detected. During this time, cosmologists exhibited the same attitude that Clayton displays here: the answer has to be CDM! This warrants mention because the evidence Clayton cites did not yet exist. Clearly the as-yet unobserved third peak was not the deciding factor.

In those days, when No-CDM was the only correct a priori prediction, I would point out to cosmologists that it had got A1:2 right when I got the chance (which was rarely: I was invited to plenty of conferences in those days, but none on the CMB). The typical reaction was outright denial6, though sometimes it warranted a dismissive “That’s not a MOND prediction.” The latter is a fair criticism. No-CDM is just General Relativity without CDM. It represented MOND as a proxy under the ansatz that MOND effects had not yet manifested in a way that affected the CMB. I expected that this ansatz would fail at some point, and discussed some of the ways that this should happen. One that’s relevant today is that galaxies form early in MOND, so reionization happens early, and the amplitude of gravitational lensing effects is amplified. There is evidence for both of these now. What I did not anticipate was a departure from a damping spectrum around ℓ = 600 (between the second and third peaks). That’s a clear deviation from the prediction, which falsifies the ansatz but not MOND itself. After all, they were correct in noting that this wasn’t a MOND prediction per se, just a proxy. MOND, like Newtonian dynamics before it, is relativity adjacent, but not itself a relativistic theory. Neither can explain the CMB on its own. If you find that an unsatisfactory answer, imagine how I feel.

The same people who complained then that No-CDM wasn’t a real MOND prediction now want to hold MOND to the No-CDM predicted power spectrum and nothing else. First it was “the second peak isn’t a real MOND prediction!” Then, when the third peak was observed, it became “no way MOND can do this!” This isn’t just hypocritical, it is bad science. The obvious way to proceed would be to build on the theory that had the greater, if incomplete, predictive success. Instead, the reaction has consistently been to cherry-pick the subset of facts that precludes the need for serious rethinking.

This brings us to sociology, so let’s examine some more of what Clayton has to say:

Any talk I’ve ever seen by McGaugh (or more exotic modified gravity people like Verlinde) elides this fact, and they evade the questions when I put my hand up to ask. I have invited McGaugh to a conference before specifically to discuss this point, and he just doesn’t want to.

Now you’re getting personal.

There is so much to unpack here, I hardly know where to start. By saying I “elide this fact” about the qualitative equality of the second and third peaks, Clayton is basically accusing me of lying by omission. This is pretty rich coming from a community that consistently elides the history I relate above, and never addresses the question raised by MOND’s predictive power.

Intellectual honesty is very important to me – being honest that MOND predicted what I saw in low surface brightness galaxies where my own prediction was wrong is what got me into this mess in the first place. It would have been vastly more convenient to pretend that I never heard of MOND (at first I hadn’t7) and act like that never happened. That would be a lie of omission. It would be a large lie, a lie that denies an important aspect of how the world works (what we’re supposed to uncover through science), the sort of lie that cleric Paul Gerhardt may have had in mind when he said

When a man lies, he murders some part of the world.

Paul Gerhardt

Clayton is, in essence, accusing me of exactly that by failing to mention the CMB in talks he has seen. That might be true – I give a lot of talks. He hasn’t been to most of them, and I usually talk about things I’ve done more recently than 2004. I’ve commented explicitly on this complaint before

There’s only so much you can address in a half hour talk. [This is a recurring problem. No matter what I say, there always seems to be someone who asks “why didn’t you address X?” where X is usually that person’s pet topic. Usually I could do so, but not in the time allotted.]

– so you may appreciate my exasperation at being accused of dishonesty by someone whose complaint is so predictable that I’ve complained before about people who make this complaint. I’m only human – I can’t cover all subjects for all audiences every time all the time. Moreover, I do tend to choose to discuss subjects that may be news to an audience, not simply reprise the greatest hits they want to hear. Clayton obviously knows about the third peak; he doesn’t need to hear about it from me. This is the scientific equivalent of shouting Freebird! at a concert.

It isn’t like I haven’t talked about it. I have been rigorously honest about the CMB, and certainly have not omitted mention of the third peak. Here is a comment from February 2003 when the third peak was only tentatively detected:

Page et al. (2003) do not offer a WMAP measurement of the third peak. They do quote a compilation of other experiments by Wang et al. (2003). Taking this number at face value, the second to third peak amplitude ratio is A2:3 = 1.03 +/- 0.20. The LCDM expectation value for this quantity was 1.1, while the No-CDM expectation was 1.9. By this measure, LCDM is clearly preferable, in contradiction to the better measured first-to-second peak ratio.

Or here, in March 2006:

the Boomerang data and the last credible point in the 3-year WMAP data both have power that is clearly in excess of the no-CDM prediction. The most natural interpretation of this observation is forcing by a mass component that does not interact with photons, such as non-baryonic cold dark matter.

There are lots like this, including my review for CJP and this talk given at KITP where I had been asked to explicitly take the side of MOND in a debate format for an audience of largely particle physicists. The CMB, including the third peak, appears on the fourth slide, which is right up front, not being elided at all. In the first slide, I tried to encapsulate the attitudes of both sides:

I did the same at a meeting in Stony Brook where I got a weird vibe from the audience; they seemed to think I was lying about the history of the second peak that I recount above. It will be hard to agree on an interpretation if we can’t agree on documented historical facts.

More recently, this image appears on slide 9 of this lecture from the cosmology course I just taught (Fall 2022):

I recognize this slide from talks I’ve given over the past five-plus years; this class is the most recent place I’ve used it, not the first. On some occasions I wrote “The 3rd peak is the best evidence for CDM.” I do not recall every talk in which I used it; many of them were likely colloquia for physics departments, where one has more time to cover things than in a typical conference talk. Regardless, these apparently were not the talks that Clayton attended. Rather than it being the case that I never address this subject, the more conservative interpretation of the experience he relates would be that I happened not to address it in the small subset of talks that he happened to attend.

But do go off, dude: tell everyone how I never address this issue and evade questions about it.

I have been extraordinarily patient with this sort of thing, but I confess to a great deal of exasperation at the perpetual whataboutism that many scientists engage in. It is used reflexively to shut down discussion of alternatives: dark matter has to be right for this reason (here the CMB); nothing else matters (galaxy dynamics), so we should forbid discussion of MOND. Even if dark matter proves to be correct, the CMB is being used as an excuse to not address the question of the century: why does MOND get so many predictions right? Any scientist with a decent physical intuition who takes the time to rub two brain cells together in contemplation of this question will realize that there is something important going on that simply invoking dark matter does not address.

In fairness to McGaugh, he pointed out some very interesting features of galactic DM distributions that do deserve answers. But it turns out that there are a plurality of possibilities, from complex DM physics (self interactions) to unmodelable SM physics (stellar feedback, galaxy-galaxy interactions). There are no such alternatives to CDM to explain the CMB power spectrum.

Thanks. This is nice, and why I say it would be easier to just pretend to never have heard of MOND. Indeed, this succinctly describes the trajectory I was on before I became aware of MOND. I would prefer to be recognized for my own work – of which there is plenty – than an association with a theory that is not my own – an association that is born of honestly reporting a surprising observation. I find my reception to be more favorable if I just talk about the data, but what is the point of taking data if we don’t test the hypotheses?

I have gone to great extremes to consider all the possibilities. There is not a plurality of viable possibilities; most of these things do not work. The specific ideas that are cited here are known not to work. SIDM appears to work because it has more free parameters than are required to describe the data. This is a common failing of dark matter models that simply fit some functional form to observed rotation curves. They can be made to fit the data, but they cannot be used to predict the way MOND can.

Feedback is even worse. Never mind the details of specific feedback models, and think about what is being said here: the observations are to be explained by “unmodelable [standard model] physics.” This is a way of saying that dark matter claims to explain the phenomena while declining to make a prediction. Don’t worry – it’ll work out! How can that be considered better than or even equivalent to MOND when many of the problems we invoke feedback to solve are caused by the predictions of MOND coming true? We’re just invoking unmodelable physics as a deus ex machina to make dark matter models look like something they are not. Are physicists straight-up asserting that it is better to have a theory that is unmodelable than one that makes predictions that come true?

Returning to the CMB, are there no “alternatives to CDM to explain the CMB power spectrum”? I certainly do not know how to explain the third peak with the No-CDM ansatz. For that we need a relativistic theory, like Bekenstein’s TeVeS. This initially seemed promising, as it solved the long-standing problem of gravitational lensing in MOND. However, it quickly became clear that it did not work for the CMB. Nevertheless, I learned from this that there could be more to the CMB oscillations than allowed by the simple No-CDM ansatz. The scalar field (an entity theorists love to introduce) in TeVeS-like theories could play a role analogous to cold dark matter in the oscillation equations. That means that what I thought was a killer argument against MOND – the exact same argument Clayton is making – is not as absolute as I had thought.

Writing down a new relativistic theory is not trivial. It is not what I do. I am an observational astronomer. I only play at theory when I can’t get telescope time.

Comic from the Far Side by Gary Larson.

So in the mid-00’s, I decided to let theorists do theory and started the first steps in what would ultimately become the SPARC database (it took a decade and a lot of effort by Jim Schombert and Federico Lelli in addition to myself). On the theoretical side, it also took a long time to make progress because it is a hard problem. Thanks to work by Skordis & Zlosnik on a theory they [now] call AeST8, it is possible to fit the acoustic power spectrum of the CMB:

CMB power spectrum observed by Planck fit by AeST (Skordis & Zlosnik 2021).

This fit is indistinguishable from that of LCDM.

I consider this to be a demonstration, not necessarily the last word on the correct theory, but hopefully an iteration towards one. The point here is that it is possible to fit the CMB. That’s all that matters for our current discussion: contrary to the steady insistence of cosmologists over the past 15 years, CDM is not the only way to fit the CMB. There may be other possibilities that we have yet to figure out. Perhaps even a plurality of possibilities. This is hard work and to make progress we need a critical mass of people contributing to the effort, not shouting rubbish from the peanut gallery.

As I’ve done before, I like to take the language used in favor of dark matter, and see if it also fits when I put on a MOND hat:

As a galaxy dynamicist, let me just remind everyone that the primary reason to believe in MOND as a physical theory and not some curious dark matter phenomenology is the very high precision with which MOND predicts, a priori, the dynamics of low-acceleration systems, especially low surface brightness galaxies whose kinematics were practically unknown at the time of its inception. There is a stone-cold, quantitative, crystal clear prediction of MOND that the kinematics of galaxies follows uniquely from their observed baryon distributions. This is something CDM profoundly and irremediably gets wrong: it predicts that the dark matter halo should have a central cusp9 that is not observed, and makes no prediction at all for the baryon distribution, let alone does it account for the detailed correspondence between bumps and wiggles in the baryon distribution and those in rotation curves. This is observed over and over again in hundreds upon hundreds of galaxies, each of which has its own unique mass distribution so that each and every individual case provides a distinct, independent test of the hypothesized force law. In contrast, CDM does not even attempt a comparable prediction: rather than enabling the real-world application to predict that this specific galaxy will have this particular rotation curve, it can only refer to the statistical properties of galaxy-like objects formed in numerical simulations that resemble real galaxies only in the abstract, and can never be used to directly predict the kinematics of a real galaxy in advance of the observation – an ability that has been demonstrated repeatedly by MOND. The simple fact that the simple formula of MOND is so repeatably correct in mapping what we see to what we get is to me the most convincing way to see that we need a grander theory that contains MOND and exactly MOND in the low acceleration limit, irrespective of the physical mechanism by which this is achieved.

That is stronger language than I would ordinarily permit myself. I do so entirely to show the danger of being so darn sure. I actually agree with clayton’s perspective in his quote; I’m just showing what it looks like if we adopt the same attitude with a different perspective. The problems pointed out for each theory are genuine, and the supposed solutions are not obviously viable (in either case). Sometimes I feel like we’re up the proverbial creek without a paddle. I do not know what the right answer is, and you should be skeptical of anyone who is sure that he does. Being sure is the sure road to stagnation.


1It may surprise some advocates of dark matter that I barely touch on MOND in this course, only getting to it at the end of the semester, if at all. It really is evidence-based, with a focus on the dynamical evidence as there is a lot more to this than seems to be appreciated by most physicists*. We also teach a course on cosmology, where students get the material that physicists seem to be more familiar with.

*I once had a colleague who was in a physics department ask how to deal with opposition to developing a course on galaxy dynamics. Apparently, some of the physicists there thought it was not a rigorous subject worthy of an entire semester course – an attitude that is all too common. I suggested that she pointedly drop the textbook of Binney & Tremaine on their desks. She reported back that this technique proved effective.

2I do not know who clayton is; that screen name does not suffice as an identifier. He claims to have been in contact with me at some point, which is certainly possible: I talk to a lot of people about these issues. He is welcome to contact me again, though he may wish to consider opening with an apology.

3One of the hardest realizations I ever had as a scientist was that both of the reasons (1) and (2) that I believed to absolutely require CDM assumed that gravity was normal. If one drops that assumption, as one must to contemplate MOND, then these reasons don’t require CDM so much as they highlight that something is very wrong with the universe. That something could be MOND instead of CDM, both of which are in the category of who ordered that?

4In the early days (late ’90s) when I first started asking why MOND gets any predictions right, one of the people I asked was Joe Silk. He dismissed the rotation curve fits of MOND as a fluke. There were 80 galaxies that had been fit at the time, which seemed like a lot of flukes. I mention this because one of the persistent myths of the subject is that MOND is somehow guaranteed to magically fit rotation curves. Erwin de Blok and I explicitly showed that this was not true in a 1998 paper.

5I sometimes hear cosmologists speak in awe of the thousands of observed CMB modes that are fit by half a dozen LCDM parameters. This is impressive, but we’re fitting a damped and driven oscillation – those thousands of modes are not all physically independent. Moreover, as can be seen in the figure from Dodelson & Hu, some free parameters provide more flexibility than others: there is plenty of flexibility in a model with dark matter to fit the CMB data. Only with the Planck data do minor tensions arise, the reaction to which is generally to add more free parameters, like decoupling the primordial helium abundance from that of deuterium, which is anathema to standard BBN so is sometimes portrayed as exciting, potentially new physics.
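To illustrate (a toy caricature, not a Boltzmann calculation): a handful of numbers suffice to generate a smooth, damped oscillation that can be sampled at thousands of multipoles, so those thousands of samples are far from independent constraints.

```python
import numpy as np

def toy_spectrum(ell, amp, tilt, ell_osc, phase, contrast, ell_damp):
    """Six parameters -> a damped oscillation sampled at any multipole.
    A caricature of an acoustic spectrum, not a physical calculation."""
    envelope = amp * ell**tilt * np.exp(-((ell / ell_damp) ** 2))  # damping tail
    return envelope * (1.0 + contrast * np.cos(ell / ell_osc + phase))

ell = np.arange(2, 2500)  # thousands of "modes"
Dl = toy_spectrum(ell, amp=1.0, tilt=0.1, ell_osc=95.0, phase=0.0,
                  contrast=0.7, ell_damp=1300.0)
print(f"{Dl.size} bandpowers generated from 6 parameters")
```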

For some reason, I never hear the same people speak in equal awe of the hundreds of galaxy rotation curves that can be fit by MOND with a universal acceleration scale and a single physical free parameter, the mass-to-light ratio. Such fits are over-constrained, and every single galaxy is an independent test. Indeed, MOND can predict rotation curves parameter-free in cases where gas dominates so that the stellar mass-to-light ratio is irrelevant.
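To show what a one-parameter, over-constrained prediction looks like in practice, here is a minimal sketch. It uses the functional form of the radial acceleration relation from McGaugh, Lelli & Schombert (2016) with a0 = 1.2 × 10^-10 m/s^2; for simplicity it scales all the baryons by the single mass-to-light factor, whereas a real fit would scale only the stellar component:

```python
import numpy as np

A0 = 1.2e-10  # m/s^2, the MOND acceleration scale

def v_predicted(r, v_baryon, ml=1.0):
    """Rotation speed predicted from the baryons alone (SI units).

    r: radii in m; v_baryon: Newtonian rotation speed of the baryons in m/s;
    ml: mass-to-light scaling, the single free parameter of such fits.
    Uses g_obs = g_bar / (1 - exp(-sqrt(g_bar / a0))), which reduces to
    Newton for g_bar >> a0 and to sqrt(g_bar * a0) for g_bar << a0.
    """
    g_bar = ml * v_baryon**2 / r  # Newtonian acceleration from the baryons
    g_obs = g_bar / (1.0 - np.exp(-np.sqrt(g_bar / A0)))
    return np.sqrt(g_obs * r)
```

In a gas-dominated galaxy the ml dependence becomes negligible and the predicted curve has no free parameters at all, which is the sense of the last sentence above.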

How should we weigh the relative merit of these very different lines of evidence?

6On a number of memorable occasions, people shouted “No you didn’t!” On a smaller number of those occasions (exactly two), they bothered to look up the prediction in the literature and then wrote to apologize and agree that I had indeed predicted that.

7If you read this paper, part of what you will see is me being confused about how low surface brightness galaxies could adhere so tightly to the Tully-Fisher relation. They should not. In retrospect, one can see that this was a MOND prediction coming true, but at the time I didn’t know about that; all I could see was that the result made no sense in the conventional dark matter picture.

Some while after we published that paper, Bob Sanders, who was at the same institute as my collaborators, related to me that Milgrom had written to him and asked “Do you know these guys?”

8Initially they had called it RelMOND, or just RMOND. AeST stands for Aether-Scalar-Tensor, and is clearly a step along the lines that Bekenstein made with TeVeS.

In addition to fitting the CMB, AeST retains the virtues of TeVeS in terms of providing a lensing signal consistent with the kinematics. However, it is not obvious that it works in detail – Tobias Mistele has a brand new paper testing it, and it doesn’t look good at extremely low accelerations. With that caveat, it significantly outperforms extant dark matter models.

There is an oft-repeated fallacy that comes up any time a MOND-related theory has a problem: “MOND doesn’t work therefore it has to be dark matter.” This only ever seems to hold when you don’t bother to check what dark matter predicts. In this case, we should but don’t detect the edge of dark matter halos at higher accelerations than where AeST runs into trouble.

9Another question I’ve posed for over a quarter century now is what would falsify CDM? The first person to give a straight answer to this question was Simon White, who said that cusps in dark matter halos were an ironclad prediction; they had to be there. Many years later, it is clear that they are not, but does anyone still believe this is an ironclad prediction? If it is, then CDM is already falsified. If it is not, then what would be? It seems like the paradigm can fit any surprising result, no matter how unlikely a priori. This is not a strength, it is a weakness. We can, and do, add epicycle upon epicycle to save the phenomenon. This has been my concern for CDM for a long time now: not that it gets some predictions wrong, but that it can apparently never get a prediction so wrong that we can’t patch it up, so we can never come to doubt it if it happens to be wrong.