Crater 2: prediction verified.

The arXiv brought an early Xmas gift in the form of a measurement of the velocity dispersion of Crater 2. Crater 2 is an extremely diffuse dwarf satellite of the Milky Way. Upon its discovery, I realized there was an opportunity to predict its velocity dispersion based on the reported photometry. The fact that it is very large (half-light radius a bit over 1 kpc) and relatively far from the Milky Way (120 kpc) makes it a unique and critical case. I will expand on that in another post, or you could read the paper. But for now:

The predicted velocity dispersion is σ = 2.1 +0.9/-0.6 km/s.

This prediction appeared in press in advance of the measurement (ApJ, 832, L8). The uncertainty reflects the uncertainty in the mass-to-light ratio.

The measured velocity dispersion is σ = 2.7 ± 0.3 km/s

as reported by Caldwell et al.
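To get a rough sense of where a number like that comes from, here is a minimal sketch of the isolated deep-MOND estimator for the velocity dispersion of a pressure-supported system, σ⁴ = (4/81) G M a₀. The luminosity and mass-to-light ratio below are illustrative assumptions, not the inputs of the published prediction, which also accounts for the external field of the Milky Way – the effect that pulls the isolated estimate of roughly 4 km/s down to about 2 km/s.

```python
# Minimal sketch: the isolated deep-MOND velocity dispersion estimator,
#     sigma^4 = (4/81) * G * M * a0,
# for a pressure-supported dwarf. The luminosity and mass-to-light ratio are
# assumed, illustrative values; the published prediction also applies the
# Milky Way's external field effect, which lowers the isolated estimate.

G = 4.301e-3                     # gravitational constant, pc (km/s)^2 / Msun
a0 = 1.2e-10 * 3.086e16 / 1e6    # MOND scale converted from m/s^2 to (km/s)^2 / pc

L_V = 1.6e5                      # assumed luminosity of Crater 2, Lsun
M_over_L = 2.0                   # assumed stellar mass-to-light ratio, Msun/Lsun
M = M_over_L * L_V               # baryonic mass, Msun

sigma_iso = (4.0 / 81.0 * G * M * a0) ** 0.25
print(f"isolated deep-MOND estimate: sigma ~ {sigma_iso:.1f} km/s")   # ~4 km/s
```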

Isn’t that how science is supposed to work? Make the prediction first? Not just scramble to explain it after the fact?

Reckless disregard for the scientific method

There has been another attempt to explain away the radial acceleration relation as being fine in ΛCDM. That’s good; I’m glad people are finally starting to address this issue. But let’s be clear: this is a beginning, not a solution. Indeed, it seems more like a rush to create truth by assertion than an honest scientific investigation. I would be more impressed if these papers (i) were refereed rather than rushed onto the arXiv, and (ii) honestly addressed the requirements I laid out.

This latest paper complains about IC 2574 not falling on the radial acceleration relation. This is the galaxy that I just pointed out (about the same time they must have been posting the preprint) does adhere to the relation. So, I guess post-factual reality has come to science.

Rather than consider the assertions piecemeal, let’s take a step back. We have established that galaxies obey a single effective force law. Federico Lelli has shown that this applies to pressure supported elliptical galaxies as well as rotating disks.

The radial acceleration relation, including pressure supported early type galaxies and dwarf Spheroidals.

Let’s start with what Newton said about the solar system: “Everything happens… as if the force between two bodies is directly proportional to the product of their masses and inversely proportional to the square of the distance between them.” Knowing how this story turns out, consider the following.

Suppose someone came to you and told you Newton was wrong. The solar system doesn’t operate on an inverse square law, it operates on an inverse cube law. It just looks like an inverse square law because there is dark matter arranged just so as to make this so. No matter whether we look at the motion of the planets around the sun, or moons around their planets, or any of the assorted miscellaneous asteroids and cometary debris. Everything happens as if there is an inverse square law, when really it is an inverse cube law plus dark matter arranged just so.

Would you believe this assertion?

I hope not. It is a gross violation of the rule of parsimony. Occam would spin in his grave.

Yet this is exactly what we’re doing with dark matter halos. There is one observed, effective force law in galaxies. The dark matter has to be arranged just so as to make this so.

Convenient that it is invisible.

Maybe dark matter will prove to be correct, but there is ample reason to worry. I worry that we have not yet detected it. We are well past the point that we should have. The supersymmetric sector in which WIMP dark matter is hypothesized to live flunked the “golden test” of the Bs meson decay, and looks more and more like a brilliant idea nature declined to implement. And I wonder why the radial acceleration relation hasn’t been predicted before if it is such a “natural” outcome of galaxy formation simulations. Are we doing fair science here? Or just trying to shove the cat back in the bag?

I really don’t know what the final answer will look like. But I’ve talked to a lot of scientists who seem pretty darn sure. If you are sure you know the final answer, then you are violating some basic principles of the scientific method: the principle of parsimony, the principle of doubt, and the principle of objectivity. Mind your confirmation bias!

That’ll do for now. What wonders await among tomorrow’s arXiv postings?

Going in Circles

Sam: This looks strangely familiar.

Frodo: That’s because we’ve been here before. We’re going in circles!

Last year, Oman et al. published a paper entitled “The unexpected diversity of dwarf galaxy rotation curves”. This term, diversity, has gained some traction among the community of scientists who simulate the formation of galaxies. From my perspective, this terminology captures some of the story, but misses most of it.

Let’s review.

Set the Wayback Machine, Mr. Peabody!

It was established (by van Albada & Sancisi and by Kent) in the ’80s that rotation curves were generally well described as maximal disks: the inner rotation curve was dominated by the stars, with a gradual transition to the flat outer part which required dark matter. By that time, I had become interested in low surface brightness (LSB) galaxies, which had not been studied in such detail. My nominal expectation was that LSB galaxies were stretched out versions of more familiar spiral galaxies. As such they’d also have maximal disks, but lower peak velocities (since V² ≈ GM/R and LSBs had larger R for the same M).

By the mid-1990s, we had shown that this was not the case. LSB galaxies had the same rotation velocity as more concentrated galaxies of the same luminosity. This meant that LSB galaxies were dark matter dominated. This result is now widely known (to the point that it is often taken for granted), but it had not been expected. One interesting consequence was that LSB galaxies were a convenient laboratory for testing the dark matter hypothesis.

So what do we expect? There were, and are, many ideas for what dark matter should do. One of the leading hypotheses to emerge (around the same time) was the NFW halo obtained from structure formation simulations using cold dark matter. If a galaxy is dark matter dominated, then to a good approximation we expect the stars to act as tracer particles: the rotation curve should just reflect that of the underlying dark matter halo.
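To make that expectation concrete, here is a minimal sketch of the circular velocity curve of an NFW halo, V(r) = √(G M(<r)/r). The halo mass, concentration, and critical density used here are illustrative assumptions, not a fit to any real galaxy.

```python
import numpy as np

# Minimal sketch of an NFW halo rotation curve, V(r) = sqrt(G * M(<r) / r),
# using the standard NFW enclosed-mass profile. All parameters are illustrative.

G = 4.301e-6          # gravitational constant, kpc (km/s)^2 / Msun
RHO_CRIT = 140.0      # critical density, roughly Msun / kpc^3

def v_nfw(r_kpc, m200=1e11, c=10.0):
    """Circular velocity (km/s) of an NFW halo of mass m200 (Msun) and concentration c."""
    r200 = (3.0 * m200 / (4.0 * np.pi * 200.0 * RHO_CRIT)) ** (1.0 / 3.0)
    rs = r200 / c                                         # NFW scale radius
    mu = lambda y: np.log(1.0 + y) - y / (1.0 + y)        # NFW mass profile shape
    m_enc = m200 * mu(r_kpc / rs) / mu(c)                 # mass enclosed within r
    return np.sqrt(G * m_enc / r_kpc)

radii = np.array([1.0, 2.0, 4.0, 8.0, 16.0])              # kpc
print(np.round(v_nfw(radii), 1))
# The NFW curve rises quickly at small radii (V ~ r^0.5 near the cusp), so it
# predicts more acceleration in the inner parts than LSB rotation curves show.
```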

This did not turn out well. The rotation curves of low surface brightness galaxies do not look like NFW halos. One example is provided by the LSB galaxy F583-1, reproduced here from Fig. 14 of McGaugh & de Blok (1998).

The rotation curve of LSB galaxy F583-1 (filled points) as reported in McGaugh & de Blok (1998). Open points are what is left after subtracting the contribution of the stars and the gas: this is the rotation curve of the dark matter halo. Lines are example NFW halos. The data do not behave as predicted by NFW, a generic problem in LSB galaxies.

This was bad for NFW. But there is a more general problem, irrespective of the particular form of the dark matter halo. The M*-Mhalo relation required by abundance matching means that galaxies of the same luminosity live in nearly identical dark matter halos. When dark matter dominates, galaxies of the same luminosity should thus have the same rotation curve.

We can test this by comparing the rotation curves of Tully-Fisher pairs: galaxies with the same luminosity and flat rotation velocity, but different surface brightness. The high surface brightness NGC 2403 and low surface brightness UGC 128 are such a pair. So for 20 years, I have been showing their rotation curves:

The rotation curves of NGC 2403 (red points) and UGC 128 (open points). The top panel shows radius in physical units; the bottom panel shows the same data with the radius scaled by the scale length of the disk. This is larger for the LSB galaxies (blue lines in top panel) and has the net effect that the normalized rotation curves are practically indistinguishable.

If NGC 2403 and UGC 128 reside in the same dark matter halo, they should have basically the same rotation curve in physical units [V(R in kpc)]. They don’t. But they do have pretty much the same rotation curve when radius is scaled by the size of the disk [V(R/Rd)]. The dynamics “knows about” the baryons, in contradiction to the expectation for dark matter dominated galaxies.

Oman et al. have rediscovered the top panel (which they call diversity) but they don’t notice the bottom panel (which one might call uniformity). That galaxies of the same luminosity have different rotation curves remains surprising to simulations, at least the EAGLE and APOSTLE simulations Oman et al. discuss. (Note that APOSTLE was called LG by Oman et al.)  Oman et al. illustrate the point with a number of rotation curves, for example, their Fig. 5:

Fig. 5 from Oman et al. (2015).

Oman et al. show that the rotation curves of LSB galaxies rise more slowly than predicted by simulations, and have a different shape. This is the same problem that we pointed out two decades ago. Indeed, note that the lower left panel is F583-1: the same galaxy noted above, showing the same discrepancy. The new thing is that these simulations include the effects of baryons (shaded regions). Baryons do not help to resolve the problem, at least as implemented in EAGLE and APOSTLE.

It is tempting to be snarky and say that this quantifies how many years simulators are behind observers. But that would be too generous. Observers had already noticed the systematic illustrated in the bottom panel of the NGC 2403/UGC 128 comparison in the previous millennium. Simulators are just now coming to grips with the top panel. The full implications of the bottom panel seem not yet to have disturbed their dreams of dark matter.

Perhaps that passes snarky and on into rude, but it isn’t like we haven’t been telling them exactly this for years and years and years. The initial reaction was not mere disbelief, but outright scorn. The data disagree with simulations, so the data must be wrong! Seriously, this was the attitude. I don’t doubt that it persists in some of the colder, darker corners of the communal astro-theoretical intellect.

Indeed, Ludlow et al. provide an example. These are essentially the same people as wrote Oman et al. Though Oman et al. point out a problem when comparing the simulations to data, Ludlow et al. claim that the observed uniformity is “a Natural Outcome of Galaxy Formation in CDM halos”. Seriously. This is in their title.

Well, which is it? Is the diversity of rotation curves a problem for simulations? Or is their uniformity a “natural outcome”? This is not natural at all.

Note that the lower right panel of the figure from Oman et al. contains the galaxy IC 2574. This galaxy obviously deviates from the expectation of the simulations. These predict accelerations that are much larger than observed at small radii. Yet Ludlow et al. claim to explain the radial acceleration relation.

This situation is self-contradictory. Either the simulations explain the RAR, or they fail to explain the “diversity” of rotation curves. These are not independent statements.

I can think of two explanations: either (i) the data that define the RAR don’t include diverse galaxies, or (ii) the simulations are not producing realistic galaxies. In the latter case, it is possible that both the rotation curve and the baryon distribution are off in a way that maintains some semblance of the observed RAR.

I know (i) is not correct. Galaxies like F583-1 and IC 2574 help define the RAR. This is one reason why the RAR is problematic for simulations.

The rotation curve of IC 2574 (left) and its location along the RAR (right).

That leaves (ii). Though the correlation Ludlow et al. show misses the data, the real problem is worse. They only obtain the semblance of the right relation because the simulated galaxies apparently don’t have the same range of surface brightness as real galaxies. They’re not just missing V(R); now that they include baryons they are also getting the distribution of luminous mass wrong.

I have no doubt that this problem can be fixed. Doing so is “simply” a matter of revising the feedback prescription until the desired result is obtained. This is called fine-tuning.

What is Natural?

I have been musing for a while on the idea of writing about Naturalness in science, particularly as it applies to the radial acceleration relation. As a scientist, the concept of Naturalness is very important to me, especially when it comes to the interpretation of data. When I sat down to write, I made the mistake of first Googling the term.

The top Google hits bear little resemblance to what I mean by Naturalness. The closest match is specific to a particular, rather narrow concept in theoretical particle physics. I mean something much more general. I know many scientific colleagues who share this ideal. I also get the impression that this ideal is being eroded and cheapened, even among scientists, in our post-factual society.

I suspect the reason a better hit for Naturalness doesn’t come up more naturally in a Google search is, at least in part, an age effect. As wonderful a search engine as Google may be, it is lousy at identifying things B.G. (Before Google).  The concept of Naturalness has been embedded in the foundations of science for centuries, to the point where it is absorbed by osmosis by students of any discipline: it doesn’t need to be formally taught; there probably is no appropriate website.

In many sciences, we are often faced with messy and incomplete data. In Astronomy in particular, there are often complicated astrophysical processes well beyond our terrestrial experience that allow a broad range of interpretations. Some of these are natural while others are contrived. Usually, the most natural interpretation is the correct one. In this regard, what I mean by Naturalness is closely related to Occam’s Razor, but it is something more as well. It is that which follows – naturally – from a specific hypothesis.

An obvious astronomical example: Kepler’s Laws follow naturally from Newton’s Universal Law of Gravity. It is a trivial amount of algebra to show that Kepler’s third law, P² = a³ (in units of years and AU), follows as a direct consequence of Newton’s inverse square law. The first law, that orbits are ellipses, follows with somewhat more math. The second law follows from the conservation of angular momentum.
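For the simplest case of a circular orbit, that trivial bit of algebra runs as follows:

```latex
% Circular orbit of radius a about the Sun (mass M): gravity supplies the
% centripetal force for a planet of mass m with period P.
\[
\frac{GMm}{a^{2}} \;=\; m\,\frac{4\pi^{2}}{P^{2}}\,a
\quad\Longrightarrow\quad
P^{2} \;=\; \frac{4\pi^{2}}{GM}\,a^{3},
\]
% which is Kepler's third law; in units of years and AU the prefactor is unity,
% giving P^2 = a^3.
```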

It isn’t just that Newtonian gravity is the simplest explanation for planetary orbits. It is that all the phenomena identified by Kepler follow naturally from Newton’s insight. This isn’t obvious just by positing an inverse square law. But in exploring the consequences of such a hypothesis, one finds that one clue after another falls into place like the pieces of a jigsaw puzzle. This is what I mean by Naturalness.

I expect that this sense of Naturalness – the fitting together of the pieces of the puzzle – is what gave Newton encouragement that he was on the right path with the inverse square law. Let’s not forget that both Newton and his inverse square law came in for a lot of criticism at the time. Both Leibniz and Huygens objected to action at a distance, for good reason. I suspect this is why Newton prefaced his phrasing of the inverse square law with the modifier as if: “Everything happens… as if the force between two bodies is directly proportional to the product of their masses and inversely proportional to the square of the distance between them.” He is not claiming that this is right, that it has to be so. Just that it sure looks that way.

The situation with the radial acceleration relation in galaxies today is the same. Everything happens as if there is a single effective force law in galaxies. This is true regardless of what the ultimate reason proves to be.

The natural explanation for the single effective force law indicated by the radial acceleration relation is that there is indeed a unique force law at work. In this case, such a force law has already been hypothesized: MOND. Often MOND is dismissed for other reasons, though reports of its demise have repeatedly been exaggerated. Perhaps MOND is just the first approximation of some deeper theory. Perhaps, like action at a distance, we simply don’t yet understand the underlying reasons for it.

Another quick-trick simulation result

There has already been one very quick attempt to match ΛCDM galaxy formation simulations to the radial acceleration relation (RAR). Another rapid preprint by the Durham group has appeared. It doesn’t do everything I ask for from simulations, but it does do a respectable number of them. So how does it do?

First, there is some eye-rolling language in the title and the abstract. Two words: natural (in the title) and accommodated (in the abstract). I can’t not address these before getting to the science.

Natural. As I have discussed repeatedly in this blog, and in the refereed literature, there is nothing natural about this. If it were so natural, we’d have been talking about it since Bob Sanders pointed this out in 1990, or since I quantified it better in 1998 and 2004. Instead, the modus operandi of much of the simulation community over the past couple of decades has been to pour scorn on the quality of rotation curve data because it did not look like their simulations. Now it is natural?

Accommodate. Accommodation is an important issue in the philosophy of science. I have no doubt that the simulators are clever enough to find a way to accommodate the data. That is why I have, for 20 years, been posing the question: what would falsify ΛCDM? I have heard (or come up with myself) only a few good answers, and I fear the real answer is that it can’t be. It is so flexible, with so many freely adjustable parameters, that it can be made to accommodate pretty much anything. I’m more impressed by predictions that come ahead of time.

That’s one reason I want to see what the current generation of simulations say before entertaining those made with full knowledge of the RAR. At least these quick preprints are using existing simulations, so while not predictions in the strictest sense, at least they haven’t been fine-tuned specifically to reproduce the RAR. Lots of other observations, yes, but not this particular one.

Ludlow et al. show a small number of model rotation curves that vary from wildly unrealistic (their NoAGN models peak at 500 km/s; no disk galaxy in the universe comes anywhere close to that… Vera Rubin once offered a prize for any that exceeded 300 km/s) to merely implausible (their StrongFB model is in the right ballpark, but has a very rapidly rising rotation curve). In all cases, their dark matter halos seem little affected by feedback, in contrast to the claims of other simulation groups. It will be interesting to follow the debate between simulators as to what we should really expect.

They do find a RAR-like correlation. Remarkably, the details don’t seem to depend much on the feedback scheme. This motivates some deeper consideration of the RAR.

The RAR plots observed centripetal acceleration, g_obs, against that predicted by the observed distribution of baryons, g_bar. We chose these coordinates because this seems to be the fundamental empirical correlation, and the two quantities are measured in completely independent ways: rotation curves vs. photometry. While measured independently, some correlation is guaranteed: physically, g_obs includes g_bar. Things only become weird when the correlation persists as g_obs ≫ g_bar.

The models are well fit by the functional form we found for the data, but with a different value of the fit parameter: g† = 3 rather than 1.2 × 10⁻¹⁰ m s⁻². That’s a factor of 2.5 off – a factor that is considered fatal for MOND in galaxy clusters. Is it OK here?

The uncertainty in the fit value is 1.20 ± 0.02. So formally, 3 is off by 90σ. However, the real dominant uncertainty is systematic: what is the true mean mass-to-light ratio at 3.6 microns? We estimated the systematic uncertainty to be ± 0.24 based on an extensive survey of plausible stellar population models. So 3 is only 7.5σ off.
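To make this concrete, here is a minimal sketch that evaluates the functional form we fit to the data for both values of g†, along with the arithmetic behind those significance estimates. The handful of sample accelerations is arbitrary.

```python
import numpy as np

# The functional form fitted to the RAR in McGaugh, Lelli & Schombert (2016):
#     g_obs = g_bar / (1 - exp(-sqrt(g_bar / gdag)))
# evaluated for the fitted scale and for the value the simulations seem to need.

def g_obs(g_bar, gdag):
    return g_bar / (1.0 - np.exp(-np.sqrt(g_bar / gdag)))

g_bar = np.logspace(-12, -9, 4)                   # sample accelerations, m/s^2
for gdag in (1.2e-10, 3.0e-10):
    ratio = g_obs(g_bar, gdag) / g_bar            # implied mass discrepancy
    print(f"gdag = {gdag:.1e}:", np.round(ratio, 2))

# The offsets quoted in the text, in units of the respective uncertainties:
print((3.0 - 1.2) / 0.02)    # ~90 sigma against the formal fit uncertainty
print((3.0 - 1.2) / 0.24)    # ~7.5 sigma against the systematic M/L uncertainty
```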

The problem with systematic uncertainties is that they do not obey Gaussian statistics. So I decided to see what we might need to do to obtain g† = 3 × 10⁻¹⁰ m s⁻². This can be done if we take sufficient liberties with the mass-to-light ratio.

The radial acceleration relation as observed (open points fit by blue line) and modeled (red line). Filled points are the same data with the disk mass-to-light ratio reduced by a factor of two.

Indeed, we can get in the right ballpark simply by reducing the assumed mass-to-light ratio of stellar disks by a factor of two. We don’t make the same factor of two adjustment to the bulge components, because the data don’t approach the 1:1 line at high accelerations if this is done. So rather than our fiducial model with M*/L(disk) = 0.5 Msun/Lsun and M*/L(bulge) = 0.7 Msun/Lsun (open points in plot), we have M*/L(disk) = 0.25 Msun/Lsun and M*/L(bulge) = 0.7 Msun/Lsun (filled points in plot). Let’s pretend we don’t know anything about stars and ignore the fact that this change corresponds to truncating the IMF of the stellar disk so that M dwarfs don’t exist in disks, but they do in bulges. We then find a tolerable match to the simulations (red line).

Amusingly, the data are now more linear than the functional form we assumed. If this is what we thought stars did, we wouldn’t have picked the functional form the simulations apparently reproduce. We would have drawn a straight line through the data – at least most of it.

That much isn’t too much of a problem for the models, though it is an interesting question whether they get the shape of the RAR right for the normalization they appear to demand. There is a serious problem though. That becomes apparent in the lowest acceleration points, which deviate strongly below the red line. (The formal error bars are smaller than the size of the points.)

It is easy to understand why this happens. As we go from high to low accelerations, we transition from bulge dominance to stellar disk dominance to gas dominance. Those last couple of bins are dominated by atomic gas, not stars. So it doesn’t matter what we adopt for the stellar mass-to-light ratio. That’s where the data sit: well off the simulated line.

Is this fatal for these models? As presented, yes. The simulations persist in predicting higher accelerations than observed. This has been the problem all along.

There are other issues. The scatter in the simulated RAR is impressively small. Much smaller than I expected. Smaller even than the observational scatter. But the latter is dominated by observational errors: the intrinsic relation is much tighter, consistent with a δ-function. The intrinsic scatter is what they should be comparing their results to. They either fail to understand, or conveniently choose to gloss over, the distinction between intrinsic scatter and that induced by random errors.

It is worth noting that some of the same authors make this same mistake – and it is a straight up mistake – in discussing the scatter in the baryonic Tully-Fisher relation. The assertion there is “the scatter in the simulated BTF is smaller than observed”. But the observed scatter is dominated by observational errors, which we have taken great care to assess. Once this is done, there is practically no room left over for intrinsic scatter, which is what the models display. This is important, as it completely inverts the stated interpretation. Rather than having less scatter than observed, the simulations exhibit more scatter than allowed.
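The distinction is simple error budgeting: scatter adds in quadrature. A minimal sketch, with illustrative numbers:

```python
import math

# Observed scatter is the quadrature sum of intrinsic scatter and measurement
# errors, sigma_obs^2 = sigma_int^2 + sigma_err^2, so a simulation's intrinsic
# scatter should be compared to
#     sigma_int = sqrt(sigma_obs^2 - sigma_err^2),
# not to sigma_obs itself. The numbers here are only illustrative.

sigma_obs = 0.11   # dex: total observed scatter (example value)
sigma_err = 0.09   # dex: scatter attributable to measurement errors (example value)

sigma_int = math.sqrt(max(sigma_obs**2 - sigma_err**2, 0.0))
print(f"room left for intrinsic scatter: ~{sigma_int:.2f} dex")
# If a simulation's intrinsic scatter exceeds this residual, it has *more*
# scatter than the data allow, not less.
```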

Can these problems be fixed? No doubt. See the comments on accommodation above.

La Fin de Quoi?

Last time, I addressed some of the problems posed by the radial acceleration relation for galaxy formation theory in the LCDM cosmogony. Predictably, some have been quick to assert there is no problem at all. The first such claim is by Keller & Wadsley in a preprint titled “La Fin du MOND: LCDM is Fully Consistent with SPARC Acceleration Data.”

There are good things about this paper, bad things, and the potential for great ugliness.

The good:

  This is exactly the reaction that I had hoped to see in response to the radial acceleration relation (RAR): people going to their existing simulations and checking what answer they got. The answer looks promising. The same relation is apparent in the simulations as in the data. That’s good.

  These simulations already existed. They haven’t been tuned to match this particular observation. That’s good. The cynic might note that the last 15+ years of galaxy formation simulations have been driven by the need to add feedback to match data, including the shapes of rotation curves. Nevertheless, I see no guarantee that the RAR will fall out of this process.

  The scatter in the simulations is 0.05 dex. The scatter in the data not explained by random errors is 0.06 dex. This agreement is good. I think the source of the scatter needs to be explored further (see below), but it is at least in the right ballpark, which is by no means guaranteed.

  The authors make a genuine prediction for how the RAR should evolve with redshift. That isn’t just good; it is bold and laudable.

The bad:

  There are only 18 simulated galaxies to compare to 153 real ones. I appreciate the difficulty in generating these simulations, but we really need a bigger sample. The large number of sampled points (1800) matters less, since the simulators can sample each galaxy as finely as their numerical resolution allows. I also wonder if the lowest acceleration points extend beyond the range sampled in comparable galaxies. Typically the data peter out around an HI surface density of 1 Msun/pc^2.

  The comparison they make to Fig. 3 of arxiv:1609.05917 is great. I would like to see something like Figs. 1 and 2 from that paper as well. What range of galaxy properties do the models span? What do individual mass models look like?

Fig. 1 from McGaugh, Lelli, & Schombert (2016) showing the range of luminosity and surface brightness covered by the SPARC data. Galaxies range over a factor of 50,000 in luminosity. The shaded region shows the range explored by the simulations discussed by Keller & Wadsley, which cover a factor of 15. Note that this is a logarithmic scale. On a linear scale, the simulations cover 0.03% of the range covered by the data along the x-axis. The range covered along the y-axis was not specified.

  My biggest concern is that there is a limited dynamic range in the simulations, which span only a factor of 15 in disk mass: from 1.7E10 to 2.7E11 Msun. For comparison, the data span 1E7 to 5E11 Lsun in [3.6] luminosity, a factor of 50,000. The simulations only sample the top 0.03% of this range.

  Basically, the simulated galaxies go from a little less massive than the Milky Way up to a bit more massive than Andromeda. Comparing this range to the RAR and declaring the problem solved is like fitting the Milky Way and Andromeda and declaring all problems in the Local Group solved without looking at any of the dwarfs. It is at lower mass scales and for lower surface brightness galaxies that problems become severe. Consequently, the most the authors can claim is a promising start on understanding a tiny fraction of bright galaxies, not a complete explanation of the RAR.

  Indeed, while the authors quantify the mass range over which their simulated galaxies extend, they make no mention of either size or surface brightness. Are these comparable to real galaxies of similar mass? Too narrow a range in size at fixed mass, as seems likely in a small sample, may act to artificially suppress the scatter.  Put another way: if the simulated galaxies only cover a tiny region of Fig. 1 above, it is hardly surprising if they exhibit little scatter.

  The apparent match between the simulated and observed scatter seems good. But the “left over” observational scatter of 0.06 dex is the same as what we expect from scatter in the mass-to-light ratio.  That is irreducible. There has to be some variation in stellar populations, and it is much easier to imagine this number getting bigger than being much smaller.

  In the simulations, the stellar mass is presumably known perfectly, so I expect the scatter has a different source. Presumably there is scatter from halo to halo as seen in other simulations. That’s natural in LCDM, but there isn’t any room for it if we also have to accommodate scatter from the mass-to-light ratio. The apparent equality of observed and simulated scatter is meaningless if they represent scatter in different quantities.

  I have trouble believing that the RAR follows simply from dissipative collapse without feedback. I’ve worked on this before, so I’m pretty sure it does not work this way. It is true that a single model does something like this as a result of dissipative collapse. It is not true that an ensemble of such models are guaranteed to fall on the same relation.

  There are many examples of galaxies with the same mass but very different scale lengths. In the absence of feedback, shorter scale lengths lead to more compression of the dark matter halo. One winds up with more dark matter where there are more baryons. This is the opposite of what we see in the data.

  This makes me suspect the dynamic range in the simulations is a problem. Not only do they cover little range in mass compared to the data, but this particular conclusion may only be reached if there is virtually no dynamic range in size at a given mass. That is hardly surprising given the small sample size.

The ugly:

  The title.

  This paper has nothing to do with MOND, nor does it say anything about it. Why is it in the title?

  At best, the authors have shown that, over a rather limited dynamic range, simulations in LCDM might reproduce post facto what MOND predicted a priori. If so, LCDM survives this test (as far as it goes). But in no respect can this be considered a problem for MOND, which predicted the phenomenon over 30 years ago. This is a classic problem in the philosophy of science: should we put more weight on the a priori prediction, or on the capacity of a more flexible theory to accommodate the same observation later on?

The title is revealing of a deep-rooted bias. It tarnishes what might be an important result and does a disservice to the objectivity we’re supposed to value in science.

DO OTHER SIMULATIONS AGREE?

  I am eager to see whether other simulations agree with these results. Not all simulators implement feedback in the same way, nor get the same results. The most dangerous aspect of this paper is that it may give people an excuse to think the problem is solved so they never have to think about it again. The RAR is a test that needs to be applied every time to each and every batch of simulations. If they don’t pass this test, they’re wrong. Unfortunately, there is precedent in the galaxy formation community to take reassurances such as this for granted, and not to bother to perform the test.

THE RAR TEST MUST BE PERFORMED FOR ALL SIMULATIONS. ALWAYS.

Four Strikes

So the radial acceleration relation is a new law of nature. What does it mean?

One reason we have posed it as a law of nature is that it is interpretation-free. It is a description of how nature works – in this case, a rule for how galaxies rotate. Why nature behaves thus is another matter.

Some people have been saying the RAR (I tire of typing out “radial acceleration relation”) is a problem for dark matter, while others seem to think otherwise. Let’s examine this.

The RAR has a critical scale g† = 1.2 · 10⁻¹⁰ m s⁻². At high acceleration, above this scale, we don’t need dark matter: systems like the solar system or the centers of high surface brightness galaxies are WYSIWYG. At low accelerations, below this scale, we begin to need dark matter. The lower the acceleration, the more dark matter we need.
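To put rough numbers on that statement, here is a minimal sketch using the functional form fitted to the RAR; the sample accelerations are arbitrary.

```python
import math

# Sketch quantifying "the lower the acceleration, the more dark matter":
# with the fitted RAR form g_obs = g_bar / (1 - exp(-sqrt(g_bar/gdag))),
# the implied ratio of dark to baryonic mass at a given radius is g_obs/g_bar - 1.

gdag = 1.2e-10                                    # m/s^2

for g_bar in (1e-8, 1e-9, 1e-10, 1e-11, 1e-12):
    g_obs = g_bar / (1.0 - math.exp(-math.sqrt(g_bar / gdag)))
    print(f"g_bar = {g_bar:.0e}  ->  dark/baryonic ~ {g_obs / g_bar - 1.0:.2f}")
# High accelerations: essentially no dark matter is needed.
# Low accelerations: the required amount of dark matter grows without bound.
```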

OK, so this means there is little to no dark matter when the baryons are dense (high g_bar), but progressively more as g_bar becomes smaller than the critical scale g†. Low g_bar happens when the surface density of baryons is low. So the amount of dark matter scales inversely with baryonic surface density.

That’s weird.

This is weird for a number of reasons. First, there is no reason for the dark matter to care what the baryons are doing when dark matter dominates. When g_obs ≫ g_bar the dark matter greatly outweighs the baryons, which simply become tracer particles in the gravitational potential of the dark matter halo. There is no reason for the dark matter to know or care about what the baryonic tracer particles are doing. And yet the RAR persists as a tight correlation well into this regime. It is as if the baryonic tail wags the dark matter dog.

Second, there should be more dark matter where there are more baryons. Galaxies form by baryons falling into dark matter halos. As they do so, they dissipate energy and sink to the center of the halo. In this process, they drag some of the dark matter along with them, in what is commonly referred to as “adiabatic compression.” In practice, the process need not be adiabatic, but the dark matter must respond to the rearrangement of the gravitational potential caused by the dissipative infall of the baryons.

These topics have been discussed at great length in the galaxy formation literature. Great arguments have erupted time and again about how best to implement the compression in models, and how big the effect is in practice. These details need not concern us here. What matters is that they are non-negotiable fundamentals of the dark matter paradigm.

Galaxies form by baryonic infall within dark matter halos. The halos form first while the baryons are still coupled to the photons prior to last scattering. This is one of the fundamental reasons we need non-baryonic cold dark matter that does not interact with photons: to get a jump on structure formation. Without it, we cannot get from the smooth initial condition observed in the cosmic microwave background to the rich amount of structure we see today.

As the baryons fall into halos, they must sink to the center to form galaxies. Why? Dark matter halos are much bigger than the galaxies that reside within them. All tracers of the gravitational potential say so. Initially, this might seem odd, as the baryons might be expected to just track the dominant dark matter. But baryons are different: they can dissipate energy. By so doing, they can sink to the center – not all baryons need to sink to the centers of their dark matter halos, but enough to make a galaxy. This they must do in order to form the galaxies that we observe – galaxies that are more centrally condensed than their dark matter halos.

That’s enough, in return, to affect the dark matter. As the baryons dissipate, the gravitational potential is non-stationary. The dark matter distribution must respond to this change in the total gravitational potential. The net result is a further concentration of the dark matter towards the center of the halo: in effect, the baryons drag some dark matter along with them.

I have worked on adiabatic compression myself, but a nice illustration is given by this figure from Elbert et al. (2016):

Dark matter halos formed in numerical simulations illustrating the effect of adiabatic compression. On the left is a pristine halo without baryons. In the middle is a halo after the formation of a disk galaxy. On the right is a halo after the formation of a more compact disk.

One can see by eye the compression caused by the baryons. The more dense the baryons become, the more dark matter they drag towards the center with them.

The fundamental elements of the dark matter paradigm – galaxy formation by baryonic infall and dissipation, accompanied by compression of the dark matter halo – inevitably lead us to expect that more baryons in the center means more dark matter as well. We observe the exact opposite in the RAR. As baryons become denser, they become the dominant component, to the point where they are the only component. Rather than more dark matter as we expect, more baryons means less dark matter in reality.

Third, the RAR correlation is continuous and apparently scatter-free over all accelerations. The data map from the regime of no dark matter at high accelerations to lots of dark matter at low accelerations in perfect 1:1 harmony with the distribution of the baryons. If we observe the distribution of baryons, we know the corresponding distribution of dark matter. The tail doesn’t just wag the dog. It tells it to sit, beg, and roll over.

Fourth, there is a critical scale in the data, g†. That’s the scale where the mass discrepancy sets in. This is a purely empirical statement.

Cold dark matter is scale free. Being scale free is fundamental to its nature. It is essential to fitting the large scale structure, which it does quite well.

So why is there this ridiculous acceleration scale in the data?!? Who ordered this?! It should not be there.

So yes, the radial acceleration relation is a problem for the cold dark matter paradigm.

Tully-Fisher: the Second Law

Previously I noted how we teach about Natural Law, but we no longer speak in those terms. All the Great Laws are already known, right? Surely there can’t be such things left to discover!

That rotation curves tend towards asymptotic flatness is, for all practical purposes, a law of nature. It is tempting to leap straight to the interpretation (dark matter!), but it is worth appreciating the discovery for itself. It isn’t like rotation curves merely exceed what can be explained by the stars and gas, nor that they rise and fall willy-nilly. The striking, ever-repeated observation is an indefinitely extended radial range with near-constant rotation velocity.

The rotation curves of galaxies over a large dynamic range in mass, from the most massive spiral with a well measured rotation curve (UGC 2885) to tiny, low mass, low surface brightness, gas rich dwarfs.

New Laws of Nature aren’t discovered every day. This discovery should have warranted a Nobel prize for Vera Rubin and Albert Bosma. If only we were able to see it in those terms three decades ago. Instead, we phrased it in terms of dark matter, and that was a radical enough idea it has to await verification in the laboratory. Now the prize will go to some experimental group (should there ever be a successful detection) while the new law of nature goes unrecognized. That’s OK – there should be a Nobel prize for a verified laboratory detection of non-baryonic dark matter, should that ever occur – but there should also be a Nobel prize for flat rotation curves, and it should have been awarded a long time ago.

It takes a while to appreciate these things. Another well known yet unrecognized Law of Nature is the Tully-Fisher relation. First discovered as a relation between luminosity and line-width (figure from Tully & Fisher 1977), this relation is most widely known for its utility in measuring the cosmic distance scale.

The original Tully-Fisher relation.

At the time, it gave the “wrong” answer (H0 ≠ 50), and Sandage is reputed to have suppressed its publication for a couple of years. This is one reason astronomy journals have, and should have, a high acceptance rate – too many historical examples of bad behavior to protect sacred cows.

Besides its utility as a distance indicator, the Tully-Fisher relation has profound implications for physical theory. It is not merely a relation between two observables of which only one is distance-dependent. It is a link between the observed mass and the physics that sets the flat velocity.

The stellar mass Tully-Fisher relation (left) and the baryonic Tully-Fisher relation (right). In both cases, the x-axis is the flat rotation velocity measured from resolved rotation curves. In the right panel, the y-axis is the baryonic mass – the sum of observed stars and gas. The latter appears to be a law of nature from which galaxies never stray.

The original y-axis of the Tully-Fisher relation, luminosity, was a proxy for stellar mass. The line-width was a proxy for rotation velocity, of which there are many variants. At this point it is clear that the more fundamental variables are baryonic mass – the sum of observed stars and gas – and the flat rotation velocity.

I had an argument – of the best scientific sort – with Renzo Sancisi in 1995. I was disturbed that our then-new low surface brightness galaxies were falling on the same Tully-Fisher relation as previously known high surface brightness galaxies of comparable luminosity. The conventional explanation for the Tully-Fisher relation up to that point invoked Freeman’s Law – the notion (now deprecated) that all spirals had the same central surface brightness. This had the effect of suppressing the radius term in Newton’s

V² = GM/R.

Galaxies followed a scaling between luminosity (mass) and velocity because they all had the same R at a given M.
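Sketched out for a disk of fixed central surface density, that conventional argument ran roughly like this:

```latex
% If Freeman's Law holds, the central surface density \Sigma is the same for
% all spirals, which ties the radius to the mass and removes it as a variable.
\begin{align*}
V^{2} &= \frac{GM}{R}, &
\Sigma &\equiv \frac{M}{2\pi R^{2}} = \mathrm{const}
\;\Rightarrow\; R \propto M^{1/2},\\[4pt]
V^{4} &= \frac{G^{2}M^{2}}{R^{2}} \;=\; 2\pi G^{2}\,\Sigma\, M
\;\propto\; M \;\propto\; L .
\end{align*}
% With \Sigma fixed, luminosity scales as the fourth power of the rotation
% velocity and the radius never appears.
```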

By construction, this was not true for low surface brightness galaxies. They have larger radii at fixed luminosity (representing the mass M). That’s what makes them low surface brightness – their stars are more spread out. Yet they fall smack on the same Tully-Fisher relation!

Renzo and I looked at the result and argued up and down, this way and that about the data, the relation, everything. We were getting no closer to understanding it, or agreeing on what it meant. Finally he shouted “TULLY-FISHER IS GOD!” to which I retorted “NEWTON IS GOD!”

It was a healthy exchange of viewpoints.

Renzo made his assertion because, in his vast experience as an observer, galaxies always fell on the Tully-Fisher relation. I made mine, because, well, duh. The problem is that the observed Tully-Fisher relation does not follow from Newton.

But Renzo was right. Galaxies do always fall on the Tully-Fisher relation. There are no residuals from the baryonic Tully-Fisher relation. Neither size nor surface brightness are second parameters. The relation cares not whether a galaxy disk has a bar or not. It does not matter whether a galaxy is made of stars or gas. It does not depend on environment or pretty much anything else one can imagine. Indeed, there is no intrinsic scatter to the relation, as best we can tell. If a galaxy rotates, it follows the baryonic Tully-Fisher relation.

The baryonic Tully-Fisher relation is a law of nature. If you measure the baryonic mass, you know what the flat rotation speed will be, and vice-versa. The baryonic Tully-Fisher relation is the second law of rotating galaxies.

Missing baryons in LCDM and MOND

People often ask for a straight up comparison between ΛCDM and MOND. This is rarely possible because the two theories are largely incommensurable. When one is eloquent the other is mute, and vice-versa.

It is possible to attempt a comparison of how bad the missing baryon problem is in each. In CDM, we expect a relation between dynamical mass and rotation speed of the form M_vir ∝ V_vir³. In MOND the equivalent relation has a different power law, M_b ∝ V_f⁴.
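Here is a quick sketch of where those two power laws come from, using the standard virial definition (an overdensity Δ relative to the critical density) on the ΛCDM side and the deep-MOND limit on the other:

```latex
% LCDM: halos are defined at a fixed overdensity, which links virial mass and
% virial velocity.
\[
M_{\rm vir} = \frac{4\pi}{3}\,\Delta\,\rho_{\rm crit}\,R_{\rm vir}^{3},
\qquad
V_{\rm vir}^{2} = \frac{G M_{\rm vir}}{R_{\rm vir}}
\;\Longrightarrow\;
M_{\rm vir} = \left(\frac{3}{4\pi\,\Delta\,\rho_{\rm crit}\,G^{3}}\right)^{1/2} V_{\rm vir}^{3}.
\]
% MOND: in the low-acceleration limit the centripetal acceleration is
% sqrt(g_N * a0) with g_N = G M_b / R^2, so the radius cancels.
\[
\frac{V_f^{2}}{R} = \sqrt{\frac{G M_b}{R^{2}}\,a_{0}}
\;\Longrightarrow\;
V_f^{4} = G\,a_{0}\,M_b .
\]
```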

In CDM we speak of virial quantities – the total mass of everything, including dark matter, and the circular speed way out at the virial radius (typically far outside the luminous extent of a galaxy). In MOND, we use the observed baryonic mass (stars and gas) and the flat rotation speed. These are not the same, so strictly speaking, still incommensurable. But they provide a way to compare the baryonic mass with the total inferred mass.

[Figure: the detected baryon fraction as a function of mass, in ΛCDM (top) and in MOND (bottom).]

This plot shows the detected baryon fraction as a function of mass. The top panel is identical to last time. In ΛCDM we see most of the baryons in the most massive systems, but progressively less in smaller systems. In MOND the situation is reversed. The check-sum is complete in galaxies, but falls short in clusters of galaxies. (Note that the error bars have been divided by an extra power of velocity in the lower panel, which amplifies their appearance.) The reader may judge for himself which of these problems is more serious.

Critics of MOND frequently cite the bullet cluster as having falsified MOND. Period. No room for debate. See the linked press release from NASA: dark_matter_proven.

OK, what kind of dark matter? As discussed previously, we need at least two kinds of dark matter in ΛCDM: non-baryonic cold dark matter (some entirely novel particle) and dark baryons (normal matter not yet detected). Unfortunately, “dark matter” is a rather generic, catch-all term that allows these two distinct problems to be easily confused. We see the need for unseen mass in objects like the bullet cluster, and make the natural leap to conclude that we are seeing the non-baryonic cold dark matter that we expect in cosmology. There it is, case closed.

This is an example of a logical fallacy. There is nothing about the missing mass problem suffered by MOND in clusters that demands the unseen mass be non-baryonic. Indeed, even in ΛCDM we suffer some missing baryon problem on top of the need for non-baryonic cold dark matter. In both theories, there is a missing baryon problem in clusters. In both cases, this missing baryon problem is more severe at small radii, suggestive of a connection with the also-persistent cooling flow problem. Basically, the X-ray emitting gas observed in the inner 200 kpc or so of clusters has time to cool, so it ought to be condensing into – what? Stars? MACHOs? Something normal but as yet unseen.

It is not obvious that cooling flows can solve MOND’s problem in clusters. The problem is both serious and persistent. It was first pointed out in 1988 by The & White, and is discussed in this 2002 Annual Review. A factor of two (or even a bit more) of the expected baryons in clusters are missing (the red portion of the plot above). Note, however, that this problem was known long before the bullet cluster was discovered. From this perspective, it would have been very strange had the bullet cluster not shown the same discrepancy as every other cluster in the sky.

I do not know if the missing mass in clusters is baryonic. I am at a loss to suggest a plausible form that the missing baryons might be lurking in. Certainly others have tried. But let’s take a step back and ask if it is plausible.

As seen above, we have a missing baryon problem in both theories. It just manifests in different places. Advocates of ΛCDM do not, by and large, seem to consider the baryon discrepancy in galaxies to be a problem. The baryons were blown out, or are there but just not detected yet. No Problem. I’m not as lenient, but if we are to extend that grace to ΛCDM, why not also to MOND?

Recall that Shull et al. found that about 30% of baryons remain undetected in the local universe. In order to solve the problem MOND suffers in clusters, we need a mass in baryons about equal to the ICM wedge in this pie chart:

[Figure: pie chart of the global baryon budget.]

Note that the missing wedge is much larger than the ICM wedge. There are more than enough baryons out there to solve this problem. Indeed, it hardly makes a dent in the global missing baryon problem. Those baryons “must” be somewhere, so why not some in clusters of galaxies?

The short answer is cognitive dissonance. If one comes to the problem sure of the answer, then one sees in the data what one expects to see. MOND fits rotation curves? That’s just a fluke: it bounces off the wall of cognitive dissonance without serious consideration. MOND needs dark matter in clusters? Well of course – we knew that it had to be wrong in the first place.

I understand this perspective exceedingly well. It is where I started from myself. But the answer I wanted is not the conclusion that a more balanced evaluation of the evidence leads one to. The challenge is not in the evidence – it is to give an unorthodox idea a chance in the first place.

Missing Baryons

A long standing problem in cosmology is that we do not have a full accounting of all the baryons that we believe to exist. Big Bang Nucleosynthesis (BBN) teaches us that the mass density in normal matter is Ωb ≈ 5%. One can put a more precise number on it, but that’s close enough for our purposes here.

Ordinary matter fails to account for the closure density by over an order of magnitude. To make matters worse, if we attempt an accounting of where these baryons are, we again fall short. As well as the dynamical missing mass problem, we also have a missing baryon problem.

For a long time, this was also an order of magnitude problem. The stars and gas we could most readily see added up to < 1%, well short of even 5%. More recent work has shown that many, but not all, of the missing baryons are in the intergalactic medium (IGM).  The IGM is incredibly diffuse – a better vacuum than we can make in the laboratory by many orders of magnitude – but it is also very, very, very, well, very big. So all that nothing does add up to a bit of something.

[Figure: pie chart of the global baryon budget.]

A thorough accounting has been made by Shull et al. (2012). A little over half of detected baryons reside in the IGM, in either the Lyman alpha forest (Ly a in the pie chart above) or in the so-called warm-hot intergalactic medium (WHIM). There are also issues of double-counting, which Shull has taken care to avoid.

Gravitationally bound objects like galaxies and clusters of galaxies contain a minority of the baryons. Stars and cold (HI) gas in galaxies are small wedges of the pie, hence the large problem we initially had. Gas in the vicinity of galaxies (CGM) and in the intracluster medium of clusters of galaxies (ICM) also matter. Indeed, in the most massive clusters, the ICM outweighs all the stars in the galaxies there. This situation reverses as we look at lower mass groups. Rich clusters dominated by the ICM are rare; objects like our own Local Group are more typical. There’s no lack of circum-galactic gas (CGM), but it does not obviously outweigh the stars around L* galaxies.

There are of course uncertainties, so one can bicker and argue about the relative size of each slice of the pie. Even so, it remains hard to make their sum add up to 5% of the closure density. It appears that ~30% of the baryons that we believe to exist from BBN are still unaccounted for in the local universe.

The pie diagram only illustrates the integrated totals. For a long time I have been concerned about the baryon budget in individual objects. In essence, each dark matter halo should start with a cosmically equal share of baryons and dark matter. Yet in most objects, the ratio of baryons to total mass falls well short of the cosmic baryon fraction.

The value of the cosmic baryon fraction is well constrained by a variety of data, especially the cosmic microwave background. The number we persistently get is

f_b = Ω_b/Ω_m = 0.17

or maybe 0.16, depending on which CMB analysis you consult. But it isn’t 0.14 nor 0.10 nor 0.01. For sticklers, note that this is the fraction of the total gravitating mass in baryons, not the ratio of baryons to dark matter: Ω_m includes both. For numerologists, note that within the small formal uncertainties, 1/f_b = 2π.

This was known long before the CMB experiments provided constraints that mattered. Indeed, one of the key findings that led us to repudiate standard SCDM in favor of ΛCDM was the recognition that clusters of galaxies had too many baryons for their dynamical mass. We could measure the baryon fraction in clusters. If we believe that these are big enough chunks of the universe to be representative of the whole, and we also believe BBN, then we are forced to conclude that Ωm ≈ 0.3.

Why stop with clusters? One can do this accounting in every gravitationally bound object. The null hypothesis is that every object should be composed of the universal composition, roughly 1 part baryons for every 5 parts dark matter. This almost works in rich clusters of galaxies. It fails in small clusters and groups of galaxies, and gets worse as you examine progressively smaller systems. So: not only are we missing baryons in the cosmic sum, there are some missing in each individual object.

[Figure: the fraction of expected baryons that are detected in individual systems, plotted as a function of mass.]

The figure shows the ratio of detected baryons to those expected in individual systems. I show the data I compiled in McGaugh et al. (2010), omitting the tiniest dwarfs for which the baryon content becomes imperceptible on a linear scale. By detected baryons I mean all those seen to exist in the form of stars or gas in each system (M_b = M* + M_g), such that

f_d = M_b/(f_b M_vir)

where M_vir is the total mass of each object. This “virial” mass is a rather uncertain quantity, but in this plot it can only slide the data up and down a little bit. The take-away is that not a single, gravitationally bound object appears to contain its fair share of cosmic baryons. There is a missing baryon problem not just globally, but in each and every object.
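As a rough illustration of what this ratio looks like for a bright galaxy, here is a minimal sketch with round, assumed numbers for a Milky Way-like system:

```python
# Sketch of the detected baryon fraction, f_d = M_b / (f_b * M_vir), for a
# Milky Way-like galaxy. The masses are round, assumed numbers; the virial
# mass in particular is quite uncertain.

f_b = 0.17          # cosmic baryon fraction
m_star = 6e10       # stellar mass, Msun (assumed)
m_gas = 1e10        # cold gas mass, Msun (assumed)
m_vir = 1.5e12      # total (virial) mass, Msun (assumed)

m_b = m_star + m_gas
f_d = m_b / (f_b * m_vir)
print(f"f_d ~ {f_d:.2f}")   # roughly a quarter: most of the expected baryons go undetected
```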

This halo-by-halo missing baryon problem is least severe in the most massive systems, rich clusters. Indeed, the baryon fraction of clusters is a rising function of radius, so a case could be made that the observations simply don’t reach far enough out to encompass a fair total. This point has been debated at great length in the literature, and I have little to add to it, except to observe that rich clusters are perhaps like horseshoes – close enough.

Irrespective of whether we consider the most massive clusters to be close enough to the cosmic baryon fraction or not, no other system comes close to close enough. There is already a clear discrepancy among smaller clusters, and an apparent trend with mass. This trend continues smoothly and continuously over many decades in baryonic mass through groups, then individual L* galaxies, and on to the tiniest dwarfs.

A respectably massive galaxy like the Milky Way has many tens of billions of solar masses in the form of stars, and another ten billion or so in the form of cold gas. Yet this huge mass represents only a quarter or so of the baryons that should reside in the halo of the Milky Way. As we look at progressively smaller galaxies, the detected baryon fraction decreases further. For a galaxy with a mere few hundred million stars, f_d ≈ 6%. It drops below 1% for M* < 10⁷ solar masses.

That’s a lot of missing baryons. In the case of the Milky Way, all those stars and cold gas are within a radius of 20 kpc. The dark matter halo extends out to at least 150 kpc. So there is plenty of space in which the missing baryons might lurk in some tenuous form. But they have to remain pretty well hidden. Joel Bregman has spent a fair amount of his career searching for such baryonic reservoirs. While there is certainly some material out there, it does not appear to add up to be enough.

It is still harder to see this working in smaller galaxies. The discrepancy that is a factor of a few in the Milky Way grows to an order of magnitude and more in dwarfs. A common hypothesis is that these baryons do indeed lurk there, probably in a tenuous, hot gas. If so, direct searches have yet to see them. Another common idea is that the baryons get expelled entirely from the small potential wells of dwarf galaxy dark matter halos, driven by winds powered by supernovae. If that were the case, I’d expect to see a break at a critical mass where the potential well was or wasn’t deep enough to prevent this. If there is any indication of this, it is at still lower mass than shown above, and begs the question as to where those baryons are now.

So we don’t have a single missing mass problem in cosmology. We have at least two. One is the need for non-baryonic dark matter. The other is the need for unseen normal matter: dark baryons. This latter problem has at least two flavors. One is that the global sum of baryons comes up short. The other is that each and every individual gravitationally bound object comes up short in the number of baryons it should have.

An obvious question is whether accounting for the missing baryons in individual objects helps with the global problem. The wedges in the pie chart represent what is seen, not what goes unseen. Or do they? The CGM is the hot gas around galaxies, the favored hiding place for the object-by-object missing baryon problem.

Never mind the potential for double counting. Let’s amp up the stars wedge by the unseen baryons indicated in red in the figure above. Just take for granted, for the moment, that these baryons are there in some form, associated in the proper ratio. We can then reevaluate the integrated sum and… still come up well short.

Low mass galaxies appear to have lots of missing baryons. But they are low mass. Even when we boost their mass in this way, they still contribute little to the integral.

This is a serious problem. Is it hopeless? No. Is it easily solved? No. At a minimum, it means we have at least two flavors of dark matter: non-baryonic [cosmic] dark matter, and dark baryons.

Does this confuse things immensely? Oh my yes.