In the previous post, I related some of the history of the Radial Acceleration Relation (henceforth RAR). Here I’ll discuss some of my efforts to understand it. I’ve spent more time trying to do this in terms of dark matter than pretty much anything else, but I have not published most of those efforts. As I related briefly in this review, that’s because most of the models I’ve considered are obviously wrong. That I have refrained from publishing manifestly incorrect explanations of the RAR has not precluded others from doing so.

A theory is only as good as its prior. If a theory makes a clear prediction, preferably ahead of time, then we can test it. If it has not done so ahead of time, that’s still OK, if we can work out what it would have predicted without being guided by the data. A good historical example of this is the explanation of the excess perihelion precession of Mercury provided by General Relativity. The anomaly had been known for decades, but the right answer falls out of the theory without input from the data. A more recent example is our prediction of the velocity dispersions of the dwarf satellites of Andromeda. Some cases were genuine a priori predictions, but even in the cases that weren’t, the prediction is what it is irrespective of the measurement.

Dark matter-based explanations of the RAR do not fall in either category. They have always chased the data and been informed by it. This has been going on for so long that new practitioners have entered the field unaware of the extent to which the simulations they inherited had already been informed by the data. They legitimately seem to think that there has been no fine-tuning of the models because they weren’t personally present for every turn of the knob.

So let’s set the way-back machine. I became concerned about fine-tuning problems in the context of galaxy dynamics when I was trying to explain the Tully-Fisher relation of low surface brightness galaxies in the mid-1990s. This was before I was more than dimly aware that MOND existed, much less had taken it seriously. Many of us were making earnest efforts to build proper galaxy formation theories at the time (e.g., Mo, McGaugh, & Bothun 1994; Dalcanton, Spergel, & Summers 1997; Mo, Mao, & White 1998 [MMW]; McGaugh & de Blok 1998), though of course these were themselves informed by observations to date. My own paper had started as an effort to exploit the new things we had discovered about low surface brightness galaxies to broaden our conventional theory of galaxy formation, but over the course of several years, turned into a falsification of some of the ideas I had considered in my 1992 thesis. Dalcanton’s model evolved from one that predicted a shift in Tully-Fisher (as mine had) to one that did not (after the data said no). It may never be possible to completely separate theoretical prediction from concurrent data, but it is possible to ask what a theory plausibly predicts. What is the LCDM prior for the RAR?

In order to do this, we need to predict both the distribution of the baryons (which sets gbar) and that of the dark matter (which sets gobs-gbar). Unfortunately, nobody seems to really agree on what LCDM predicts for galaxies. There seems to be a general consensus that dark matter halos should start out with the NFW form, but opinions vary widely about whether and how this is modified during galaxy formation. The baryonic side of the issue is simply seen as a problem.
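For concreteness, here is a minimal Python sketch of the two ingredients, assuming a razor-thin exponential disk for the baryons and an unmodified NFW halo for the dark matter. The functional forms are the standard ones; the example galaxy parameters are purely illustrative and not taken from any particular model discussed here.

```python
import numpy as np
from scipy.special import i0, i1, k0, k1

G    = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
MSUN = 1.989e30    # solar mass in kg
KPC  = 3.086e19    # kiloparsec in m

def gbar_exponential_disk(r, Mdisk, Rd):
    """Baryonic acceleration of a razor-thin exponential disk (Freeman 1970).
    r and Rd in meters, Mdisk in kg; returns gbar in m/s^2."""
    Sigma0 = Mdisk / (2.0 * np.pi * Rd**2)     # central surface density
    y = r / (2.0 * Rd)
    v2 = 4.0 * np.pi * G * Sigma0 * Rd * y**2 * (i0(y)*k0(y) - i1(y)*k1(y))
    return v2 / r

def gdm_nfw(r, M200, c, r200):
    """Dark matter acceleration of an (unmodified) NFW halo of mass M200,
    virial radius r200, and concentration c."""
    x = r / (r200 / c)
    menc = (np.log(1.0 + x) - x/(1.0 + x)) / (np.log(1.0 + c) - c/(1.0 + c))
    return G * M200 * menc / r**2

# One illustrative galaxy: the point is simply that gobs = gbar + gDM,
# so both pieces must be predicted in order to predict the RAR.
r    = np.logspace(-0.5, 1.5, 50) * KPC                          # 0.3 - 30 kpc
gbar = gbar_exponential_disk(r, 5e10 * MSUN, 3.0 * KPC)
gobs = gbar + gdm_nfw(r, 1e12 * MSUN, c=10.0, r200=200.0 * KPC)
```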

That there is no clear prediction is in itself a problem. I distinctly remember expressing my concerns to Martin Rees while I was still a postdoc. He said not to worry; galaxies were such non-linear entities that we shouldn’t be surprised by anything they do. This verbal invocation of a blanket dodge for any conceivable observation did not inspire confidence. Since then, I’ve heard that excuse repeated by others. I have lost count of the number of more serious, genuine, yet completely distinct LCDM predictions I have seen, heard, or made myself. Many dozens, at a minimum; perhaps hundreds at this point. Some seem like they might work but don’t, while others don’t even cross the threshold of predicting both axes of the RAR. There is no coherent picture that adds up to an agreed set of falsifiable predictions. Individual models can be excluded, but not the underlying theory.

To give one example, let’s consider the specific model of MMW. I make this choice here for two reasons. One, it is a credible effort by serious workers and has become a touchstone in the field, to the point that a sizeable plurality of practitioners might recognize it as a plausible prior – i.e., the closest thing we can hope to get to a legitimate, testable prior. Two, I recently came across one of my many unpublished attempts to explain the RAR which happens to make use of it. Unix says that the last time I touched these files was nearly 22 years ago, in 2000. The postscript generated then is illegible now, so I have to update the plot:

The prediction of MMW (lines) compared to data (points). Each colored line represents a model galaxy of a given mass. Different lines of the same color represent models with different disk scale lengths, as galaxies of the same mass exist over a range of sizes. Models are only depicted over the range of radii typically observed in real galaxies.

At first glance, this might look OK. The trend is at least in the right direction. This is not a success so much as it is an inevitable consequence of the fact that the observed acceleration includes the contribution of the baryons. The area below the dashed line is excluded, as it is impossible to have gobs < gbar. Moreover, since gobs = gbar+gDM, some correlation in this plane is inevitable. Quite a lot, if baryons dominate, as they always seem to do at high accelerations. Not that these models explain the high acceleration part of the RAR, but I’ll leave that detail for later. For now, note that this is a log-log plot: a miss that looks small to the eye translates into a large quantitative error. Individual model galaxies sometimes fall too high, sometimes too low: the model predicts considerably more scatter than is observed. The RAR is not predicted to be a narrow relation, but one with lots of scatter and large intrinsic deviations from the mean. That’s the natural prediction of MMW-type models.

I have explored many flavors of [L]CDM models. They generically predict more scatter in the RAR than is observed. This is the natural expectation, and some fine-tuning has to be done to reduce the scatter to the observed level. The inevitable need for fine-tuning is why I became concerned for the dark matter paradigm, even before I became aware that MOND predicted exactly this. It is also why the observed RAR was considered to be against orthodoxy at the time: everybody’s prior was for a large scatter. It wasn’t just me.

In order to build a model, one has to make some assumptions. The obvious assumption to make, at the time, was a constant ratio of dark matter to baryons. Indeed, for many years, the working assumption was that this was about 10:1, maybe 20:1. This type of assumption is built into the models of MMW, who thought that they worked provided “(i) the masses of disks are a few percent of those of their haloes”. The (i) is there because it is literally their first point, and the assumption that everybody made. We were terrified of dropping this natural assumption, as the obvious danger is that it becomes a rolling fudge factor, assuming any value that is convenient for explaining any given observation.
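To see what that natural assumption looks like in practice, here is a rough sketch (building on the functions above) of an MMW-style family with a constant disk fraction of a few percent and a spread of disk sizes at fixed mass. The mass-size scalings are deliberately crude stand-ins rather than the actual MMW prescriptions; the point is only that tracks of the same mass but different size land at different gobs for the same gbar, which is where the large predicted scatter comes from.

```python
def constant_fraction_tracks(md=0.04, lgM200=(11.0, 12.0, 13.0),
                             size_factors=(0.5, 1.0, 2.0)):
    """MMW-style family: disk mass is a fixed few percent (md) of halo mass.
    The r200(M200) and Rd(r200) scalings are crude, illustrative choices."""
    tracks = []
    for lgM in lgM200:
        M200 = 10**lgM * MSUN
        r200 = 200.0 * KPC * (M200 / (1e12 * MSUN))**(1.0 / 3.0)
        Rd_fiducial = 0.02 * r200
        for s in size_factors:              # same mass, different disk sizes
            Rd = s * Rd_fiducial
            r  = np.logspace(-0.5, 1.2, 40) * Rd
            gb = gbar_exponential_disk(r, md * M200, Rd)
            go = gb + gdm_nfw(r, M200, c=10.0, r200=r200)
            tracks.append((gb, go))
    return tracks
```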

Unfortunately, it had already become clear by this time from the data that a constant ratio of dark to luminous matter could not work. The earliest I said this on the record is 1996. [That was before LCDM had supplanted SCDM as the most favored cosmology. From that perspective, the low baryon fractions of galaxies seemed natural; it was clusters of galaxies that were weird.] I pointed out the likely failure of (i) to Mo when I first saw a draft of MMW (we had been office mates in Cambridge). I’ve written various papers about it since. The point here is that, from the perspective of the kinematic data, the ratio of dark to luminous mass has to vary. It cannot be a constant as we had all assumed. But it has to vary in a way that doesn’t introduce scatter into relations like the RAR or the Baryonic Tully-Fisher relation, so we have to fine-tune this rolling fudge factor so that it varies with mass but always obtains the same value at the same mass.

A constant ratio of dark to luminous mass wasn’t just a convenient assumption. There is good physical reason to expect that this should be the case. The baryons in galaxies have to cool and dissipate to form a galaxy in the center of a dark matter halo. This takes time, imposing an upper limit on galaxy mass. But the baryons in small galaxies have ample time to cool and condense, so one naively expects that they should all do so. That would have been natural. It would also lead to a steeply increasing luminosity function, which is not observed; this mismatch is the root of the over-cooling and missing satellite problems.

Reconciling the observed and predicted mass functions is one of the reasons we invoke feedback. The stars that form in the first gas to condense provide an energy source that feeds back into the surrounding gas. This can, in principle, reheat the remaining gas or expel it entirely, thereby precluding it from condensing and forming more stars as in the naive expectation. In principle. In practice, we don’t know how this works, or even if the energy provided by star formation couples to the surrounding gas in a way that does what we need it to do. Simulations do not have the resolution to follow feedback in detail, so instead make some assumptions (“subgrid physics”) about how this might happen, and tune the assumed prescription to fit some aspect of the data. Once this is done, it is possible to make legitimate predictions about other aspects of the data, provided they are unrelated. But we still don’t know if that’s how feedback works, and in no way is it natural. Rather, it is a deus ex machina that we invoke to save us from a glaring problem without really knowing how it works or even if it does. This is basically just theoretical hand-waving in the computational age.

People have been invoking feedback as a panacea for all ills in galaxy formation theory for so long that it has become familiar. Once something becomes familiar, everybody knows it. Since everybody knows that feedback has to play some role, it starts to seem like it was always expected. This is easily confused with being natural.

I could rant about the difficulty of making predictions with feedback-afflicted models, but never mind the details. Let’s find some aspect of the data that is independent of the kinematics that we can use to specify the dark to luminous mass ratio. The most obvious candidate is abundance matching, in which the number density of observed galaxies is matched to the predicted number density of dark matter halos. We don’t have to believe feedback-based explanations to apply this; we merely have to accept that there is some mechanism to make the dark to luminous mass ratio variable. Whatever it is that makes this happen had better predict the right thing for both the mass function and the kinematics.
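In its simplest form, the procedure can be sketched as follows: rank galaxies and halos by mass and pair them at equal cumulative number density, n(>Mstar) = n(>Mhalo). The mass functions below are illustrative toys (a Schechter-like form and a cut-off power law), not fits to any data; the point is only the matching step itself.

```python
import numpy as np

def schechter_phi(Mstar, phi_star=5e-3, Mchar=5e10, alpha=-1.3):
    """Toy galaxy stellar mass function, dN/dlnM per Mpc^3 (illustrative)."""
    x = Mstar / Mchar
    return phi_star * x**(alpha + 1.0) * np.exp(-x)

def halo_mf(Mhalo, A=1e-2, Mcut=2e14, slope=-0.9):
    """Toy halo mass function, dN/dlnM per Mpc^3 (illustrative)."""
    return A * (Mhalo / 1e12)**slope * np.exp(-Mhalo / Mcut)

def n_above(lgM, mf):
    """Cumulative number density n(>M) from a mass function dN/dlnM."""
    phi, lnM = mf(10**lgM), np.log(10.0) * lgM
    return np.array([np.trapz(phi[i:], lnM[i:]) for i in range(len(lgM))])

lgMs = np.linspace(7.0, 12.0, 400)        # stellar masses, log10(Msun)
lgMh = np.linspace(9.0, 16.0, 400)        # halo masses,    log10(Msun)
n_gal, n_halo = n_above(lgMs, schechter_phi), n_above(lgMh, halo_mf)

# Match at equal number density: the halo mass assigned to each stellar mass.
# (np.interp needs increasing abscissae, hence the reversals.)
lgMh_of_lgMs = np.interp(n_gal, n_halo[::-1], lgMh[::-1])
```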

When it comes to the RAR, the application of abundance matching to assign halo masses to observed galaxies works out much better than the natural assumption of a constant ratio. This was first pointed out by Di Cintio & Lelli (2016), which inspired me to consider appropriately modified models. All I had to do was update the relation between stellar and halo mass from a constant ratio to a variable specified by abundance matching. This gives rather better results:

A model like that from 2000 but updated by assigning halo masses using an abundance matching relation.

This looks considerably better! The predicted scatter is much lower. How is this accomplished?

Abundance matching results in a non-linear relation between stellar mass and halo mass. For the RAR, the scatter is reduced by narrowing the dynamic range of halo masses relative to the observed stellar masses. There is less variation in gDM. Empirically, this is what needs to happen – to a crude first approximation, the data are roughly consistent with all galaxies living in the same halo – i.e., no variation in halo mass with stellar mass. This was already known before abundance matching became rife; both the kinematic data and the mass function push us in this direction. There’s nothing natural about any of this; it’s just what we need to do to accommodate the data.
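Concretely, here is a minimal sketch of what that replacement of the constant ratio looks like, using one commonly adopted double power-law parameterization of the stellar mass-halo mass relation (a Moster et al. 2013-style form with rough z ≈ 0 parameter values). The specific relation is not the point; what matters is that it replaces the constant ratio and can be inverted to assign a halo mass to each observed stellar mass. Masses are in solar masses.

```python
from scipy.optimize import brentq

def mstar_over_mhalo(M200, N=0.0351, logM1=11.590, beta=1.376, gamma=0.608):
    """Double power-law stellar-to-halo mass ratio (Moster et al. 2013 style)."""
    x = M200 / 10**logM1
    return 2.0 * N / (x**(-beta) + x**gamma)

def halo_mass_from_stellar(Mstar):
    """Numerically invert the relation: the abundance-matching replacement
    for the old constant dark-to-luminous ratio. Mstar in Msun."""
    f = lambda lgM: mstar_over_mhalo(10**lgM) * 10**lgM - Mstar
    return 10**brentq(f, 9.0, 16.0)

# The compression of the dynamic range: two decades in stellar mass map onto
# roughly one decade in halo mass, versus two decades for a constant 20:1 ratio.
for Mstar in (1e8, 1e9, 1e10):
    print("Mstar %.0e  AM halo %.2e  20:1 halo %.2e"
          % (Mstar, halo_mass_from_stellar(Mstar), 20.0 * Mstar))
```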

Still, it is tempting to say that we’ve succeeded in explaining the RAR. Indeed, some people have built the same kind of models to claim exactly this. While matters are clearly better, really we’re just less far off. By reducing the dynamic range in halo masses that are occupied by galaxies, the partial contribution of gDM to the gobs axis is compressed, and model lines perforce fall closer together. There’s less to distinguish an L* galaxy from a dwarf galaxy in this plane.

Nevertheless, there’s still too much scatter in the models. Harry Desmond made a specific study of this, finding that abundance matching “significantly overpredicts the scatter in the relation and its normalisation at low acceleration”, which is exactly what I’ve been saying. The offset in the normalization at low acceleration is obvious from inspection in the figure above: the models overshoot the low acceleration data. This led Navarro et al. to argue that there was a second acceleration scale, “an effective minimum acceleration probed by kinematic tracers in isolated galaxies” a little above 10⁻¹¹ m/s/s. The models do indeed flatten out toward such a minimum over a modest range in gbar, and there is some evidence for it in some data. This does not persist in the more reliable data; those shown above are dominated by atomic gas so there isn’t even the systematic uncertainty of the stellar mass-to-light ratio to save us.

The astute observer will notice some pink model lines that fall well above the RAR in the plot above. These are for the most massive galaxies, those with luminosities in excess of L*. Below the knee in the Schechter function, there is a small range of halo masses for a given range of stellar masses. Above the knee, this situation is reversed. Consequently, the nonlinearity of abundance matching works against us instead of for us, and the scatter explodes. One can suppress this with an apt choice of abundance matching relation, but we shouldn’t get to pick and choose which relation we use. It can be made to work only because there remains enough uncertainty in abundance matching to select the “right” one. There is nothing natural about any of this.
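With the toy abundance-matching relation sketched above, the reversal shows up directly in the local logarithmic slope dlog(Mhalo)/dlog(Mstar): well below the knee it is less than one (a narrow range of halo masses for a given range of stellar masses), while well above the knee it exceeds two, so the spread in assigned halo masses, and with it the model scatter, blows up.

```python
from math import log10

def am_slope(Mstar, eps=0.05):
    """Local slope dlog(Mhalo)/dlog(Mstar) of the toy abundance-matching
    relation defined in the sketch above, estimated by finite differences."""
    lo = log10(halo_mass_from_stellar(Mstar * (1.0 - eps)))
    hi = log10(halo_mass_from_stellar(Mstar * (1.0 + eps)))
    return (hi - lo) / (log10(1.0 + eps) - log10(1.0 - eps))

print(am_slope(1e9), am_slope(2e11))   # roughly 0.4-0.5 below the knee, >2 above it
```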

There are also these little hooks, the kinks at the high acceleration end of the models. I’ve mostly suppressed them here (as did Navarro et al.) but they’re there in the models if one plots to small enough radii. This is the signature of the cusp-core problem in the RAR plane. The hooks occur because the exponential disk model has a maximum acceleration at a finite radius that is a little under one scale length; this marks the maximum value that such a model can reach in gbar. In contrast, the acceleration gDM of an NFW halo continues to increase all the way to zero radius. Consequently, the predicted gobs continues to increase even after gbar has peaked and starts to decline again. This leads to little hook-shaped loops at the high acceleration end of the models in the RAR plane.
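The hook is easy to reproduce with the toy disk-plus-NFW functions from the first sketch (again with made-up, illustrative galaxy parameters): gbar of the exponential disk peaks a little inside one scale length and then declines toward the center, while the NFW gDM keeps rising inward, so the model track doubles back on itself in the (gbar, gobs) plane.

```python
r   = np.logspace(-2.0, 1.0, 300) * KPC                     # 0.01 - 10 kpc
gb  = gbar_exponential_disk(r, 5e10 * MSUN, 3.0 * KPC)      # Rd = 3 kpc
gdm = gdm_nfw(r, 1e12 * MSUN, c=10.0, r200=200.0 * KPC)
go  = gb + gdm
ipk = np.argmax(gb)
print("gbar peaks at r = %.2f kpc" % (r[ipk] / KPC))        # a bit under one Rd
# Inside that radius gbar falls again while gDM keeps growing, so the curve
# of (gbar, gobs) points loops back on itself at high acceleration: the hook.
```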

These hooks were going to be the segue to discuss more sophisticated models built by Pengfei Li, but that’s going to be a whole ‘nother post because these are quite enough words for now. So, until next time, don’t invest in bitcoins, Russian oil, or LCDM models that claim to explain the RAR.

15 thoughts on “What should we expect for the radial acceleration relation?”

  1. The hooks are fantastic, exactly the kind of fitting artifacts one gets when matching any curve in physics within some range using math (say a polynomial) instead of physics (MOND). It all looks good – once you adjust the parameters – in the region of interest, but hopelessly falls apart outside the region.

    MOND is physics as much as Newton’s gravitational theory was – neither had explanations, which bugged Newton.


    1. Pretty much. While I didn’t go into it here, these types of models fail as soon as you look outside of the range illustrated, to either higher or lower accelerations. To your broader point, indeed, Newton grasped that there was a universal law of gravitation but didn’t know why or how. We are in an analogous situation, but suffer the drag of the preconception of dark matter much as we struggled with the drag of geocentrism four centuries ago.


  2. Dear Stacy,

    I like your way of questioning yourself.
    You are very truthful and you are known for it.
    You are a good scientist.

    It may frustrate you that the mainstream cannot or will not follow your simple arguments
    (https://tritonstation.com/2016/10/03/four-strikes/)
    Well, you have convinced me about MOND.
    And it looks like you have convinced Sabine about MOND.
    https://backreaction.blogspot.com/2022/03/did-early-universe-inflate.html
    “dark matter, if you think it exists” 🙂

    You are in a comparatively good position.
    You have Robert Sanders, Milgrom himself, and many others.

    I have no one.
    I suspect Rutherford was wrong.
    In his experiment, he has two unknowns: the alpha particle and the gold foil.
    He can explain the result by assuming small alpha particles and small gold atomic nuclei.

    Unfortunately, then the double-slit experiment cannot be explained without contradictions.
    Wave-particle dualism is only a nice word for a contradiction.
    Here the explanation should be much more straightforward because one has only one unknown
    (photon, atom, molecule), while the double slit is well known.

    Many greetings
    Stefan


  3. I’m commenting to mention that your RAR work is cited and taken seriously in A. Deur, “Effect of the field self-interaction of General Relativity on the Cosmic Microwave Background Anisotropies”
    arXiv:2203.02350 (March 4, 2022), which is worth a read in its own right. It basically perfectly fits the CMB power spectrum curve with a gravity theory that also reproduces MOND and RAR in many circumstances including disk galaxies. It does so by considering gravitational field self-interactions from classical GR that are usually neglected.


  4. Ohwilleke:

    I have to say that I was flabbergasted when I first came across the work of Alexandre Deur from your referencing his earlier papers on another blogpost here, about a year or more ago, as it has elements in common with a model I had been working on to explain the amazing ability of MOND to match astrophysical data. I will definitely check out Deur’s latest work connecting to RAR.

    To be sure, my amateur paper cannot hold a candle to Deur’s astounding professional work, much less the papers of any other professionals in either physics or astronomy. Currently my paper is officially at a mere 8 pages. Many more pages were written previously, but they have been set aside as new insights came along, making it evident that those earlier writings needed to be modified before they could be integrated into a coherent whole with the pages in which I have so far found no serious faults; which perhaps doesn’t mean much, as I am self-taught in physics and still learning.

    But I must reiterate that I was quite astonished on first reading Deur’s work on astronomy, as it has eerie echoes of my own ideas developed independently. I made the connection to the Dark Matter conundrum with this model some five years ago. I won’t be devoting too many paragraphs to this connection as this astrophysical aspect is peripheral to the main thesis of the model which deals primarily with the quantum domain. As I’ve mentioned elsewhere, there’s a relatively simple experiment (at least conceptually) that could confirm or refute the model.

    Now this model does have an Achilles heel built into it which may invalidate the whole thing (hopefully not, but can’t be ruled out). In that case there is Plan B, incorporating the ideas expressed in the model as a plot device for a science-fiction novel.


  5. Could anyone comment on this?

    The origin of the MOND critical acceleration scale
    David Roscoe

    There is a link: if the idea of a quasi-fractal D≈2 universe on medium scales is taken seriously then there is an associated characteristic mass surface density scale, ΣF say, and an associated characteristic gravitational acceleration scale, aF=4πGΣF. If, furthermore, the quasi-fractal structure is taken to include the inter-galactic medium, then it is an obvious step to consider the possibility that a0 and aF are the same thing.

    Since the scaling relationship also gives rise to the Baryonic Tully-Fisher Relationship, but with a0 replaced by aF, we are led unambiguously to the conclusion that a0 and aF are, in reality, one and the same thing.
    arXiv:2111.01700

    based on this

    Gravitational force distribution in fractal structures
    A. Gabrielli, F. Sylos Labini, S. Pellegrini

    and

    Fractal Analysis of the UltraVISTA Galaxy Survey
    Sharon Teles (1), Amanda R. Lopes (2), Marcelo B. Ribeiro (1,3) ((1) Valongo Observatory, Universidade Federal do Rio de Janeiro, Brazil, (2) Department of Astronomy, Observatório Nacional, Rio de Janeiro, Brazil, (3) Physics Institute, Universidade Federal do Rio de Janeiro, Brazil)

    This paper seeks to test if the large-scale galaxy distribution can be characterized as a fractal system.


  6. Ohwilleke:

    I’m just so fascinated by Alexandre Deur’s use of gravitational self-interaction to explain so much in astronomy via analogy with QCD. RAR and Tully-Fisher are readily accommodated from what I’ve seen in his papers. And then there is the expansion of the space between galaxies as a function of the weakening of gravity beyond the radii of galaxies in analogy with QCD where the Strong Force asymptotically diminishes outside of nucleons. Of course the disparity of the gravitational force inside and outside galaxies is not as drastic as in the QCD case. But, this gravitational weakening/strengthening dichotomy has the added benefit of conserving gravitational field energy overall, as I recall from his writing. My own model uses a different mechanism to accomplish the same thing and thus by the same token, at least in principle, should also result in energy conservation of the gravitational field.

    One thing I wasn’t sure of was whether Deur’s theory can account for the fine details in galactic rotation curves as embodied in Renzo’s Law. It probably is the case, but I would have to look through his papers to find it. My own primitive model, which I won’t provide details of for compliance with blog rules, has a direct physical mechanism to explain Renzo’s Law. As my model is non-mathematical it’s currently untestable in the astronomical domain. A laboratory test very similar to experiments conducted at the Austrian Research Center under the direction of Martin Tajmar, is the only way I can think of to determine its possible validity, as I expect a fairly robust signal.


  7. Several things need to happen. First, the scientific community needs to realize that the dark matter paradigm, as it has been known for most of my career, is wrong. Nobody is going to work hard on, or take seriously, alternatives until they let go. Everyone has to establish for themselves what criteria would be required to get them to let go. So far, very few scientists working in the field have faced up to that challenge, or even seem aware that they should do so.
    I make the caveat “as it has been known” because the dark matter paradigm has evolved under the pressure of the data, but so far that mostly seems to have resulted in a broadening of what we mean by dark matter (WIMPs!) and associated gaslighting (it was never about WIMPs! We just built dozens of super-elaborate, very expensive, deep-mine-shaft experiments crafted exclusively to detect WIMPs because shut up.) In short, we need to get past the mindset “It HAS to be dark matter!” and it is very hard to do that because dark matter, as a concept, is not falsifiable.
    If we ever get beyond the current mindset (I worry that the scientific method will die in the current mire and fade into an historical footnote), then there needs to be a period of chaos in which we consider all sorts of things, like some of those mentioned above. We’re partially into that phase, so long as we restrict ourselves to “all sorts of things that are still dark matter.” [I predicted this trajectory a long time ago: “If we exclude one candidate, we are free to make up another one. After WIMPs, the next obvious candidate is axions. Should those be falsified, we invent something else.” – http://astroweb.case.edu/ssm/darkmatter/WIMPexperiments.html ]
    Hopefully something sensible emerges from the chaos. The winnowing of wrong steps should be quick once enough people are taking them seriously. The emergence of a new paradigm could be quick or might take much longer: I cannot judge, as only a very few of us have taken the first few tenuous steps down that road: the timescale depends on the community. I have lost hope that we will get any further on the timescale of my own career, so now I am mostly focused on establishing for the future the empirical knowledge (like the RAR) that needs to be explained in any emergent paradigm.


    1. “I worry that the scientific method will die in the current mire and fade into an historical footnote”

      What gives me hope is the sheer torrent of new observational data from multiple independent groups, in contrast to the comparative trickle of new results in collider physics with a small number of groups that are more vulnerable to groupthink. Also, while astrophysicists are quite timid about leaving the DM bandwagon (your own reframing of MOND effects as RAR was a brilliant move on this score), there are plenty of LambdaCDM criticisms piling up. Efforts to squeeze MOND into a DM paradigm like https://arxiv.org/abs/2203.05606 are still progress because they acknowledge that it needs to be done and requires a very particular sort of DM model.

      On the growing data front, I recently blogged, for example, the very mainstream Elcio Abdalla, et al., “Cosmology Intertwined: A Review of the Particle Physics, Astrophysics, and Cosmology Associated with the Cosmological Tensions and Anomalies” arXiv:2203.06142 (March 11, 2022), which acknowledges that increased data is making it increasingly hard to support LambdaCDM, although results like a potential discrediting of the EDGES result at Singh, S., Nambissan T., J., Subrahmanyan, R. et al., “On the detection of a cosmic dawn signal in the radio background.” Nature Astronomy (February 28, 2022). https://doi.org/10.1038/s41550-022-01610-5 make it a two steps forward, one step back kind of progression. Also, because LambdaCDM is the front runner, it naturally attracts lots of attention and criticism when it fails. Plenty of investigators ignore it, but slowly an understanding that there is a problem is infiltrating the community.

      A recent discussion I participated in at PhysicsForums on the problems with LambdaCDM, https://www.physicsforums.com/threads/observations-fit-poorly-with-the-standard-model-of-cosmology.1012331/ is pretty representative of how these discussions go, and while there is caution and pressing on MOND as the “correct” alternative, there is also less dismissal out of hand of the numerous flaws than there used to be. Overton’s Window now allows for significant criticism of LambdaCDM, although not necessarily for equally open discussion of alternatives (in part, because there is no complete alternative with lots of support rallied around one candidate to replace it).

      To some extent, it is necessary for someone to organize a community around non-DM alternatives, e.g. in a new journal and associated conferences, so that people would stay better aware of what is being done “in the wilderness” and have each other’s backs. I sincerely think that that day will come, although it might take 10-30 years to get there. The best of all possible worlds would be to win over some converts from the CDM world with enough reputation to get others to take the work that already exists seriously and to get people to risk investing time and reputation kicking the tires and working on these models more.


      1. I *am* a reputable “convert” from the CDM world. Twenty-five years ago, I thought it might take 10-30 years. On the one hand, it is a good thing that the issues are being discussed more openly. On the other hand, the debates I find myself having with people are considerably less sophisticated than they were 20 years ago.


        1. “I *am* a reputable “convert” from the CDM world.” You are indeed, and there are three or four others, at least. Double that number and we’re getting close to critical mass and would rival the number of active investigators, for example, in loop quantum gravity type theories.


  8. Here is another interesting article from Quanta magazine about super-massive black holes at the centres of dwarf galaxies: https://www.quantamagazine.org/tiny-galaxies-reveal-secrets-of-supermassive-black-holes-20220314/

    This caused me to think: you have commented about the need for a non-linear relationship between dark and baryonic matter in galaxies of different masses to give the observed RAR. What would happen if we took this to the limit of zero baryonic matter, so we started with a pure spherical dark matter halo only? Would it be possible to show that this would collapse to form a SMBH (with the ejection of most of the dark matter) in a time that is short compared with the age of the universe? The existence of SMBHs everywhere, even in dwarf galaxies, suggests they must have formed quickly in the early universe, and if one could show that a pure dark matter halo could not form a SMBH, that would be an indirect argument against its existence.


  9. Andrew Ohwilleke, thanks for mentioning that thread over at physicsforums. I was reading older threads of that genre at physicsforums this morning, but somehow missed that very recent one.


  10. Andrew, I had to upvote your comments over at the physicsforums thread, as the clarity of your thinking is outstanding. My handle over there is Davephaelon (Phaelon is the alien planet from which Max originates in “Flight of the Navigator”).

    There is such a ferment now in the astrophysics community. As a Chinese philosopher once said “May you live in interesting times”, and these, for sure, are interesting times in Astronomy as ever more data clashes with the Concordance Model. Perhaps a good comparison is the turn of the 19th to 20th centuries with the transition from Classical to Relativistic and Quantum physics. Something new is afoot. I believe the phenomenological success of MOND and the work of Alexandre Deur is pointing us in the right direction.

