Mass is a basic quantity. How much stuff does an astronomical object contain? For a galaxy, mass can mean many different things: that of its stars, stellar remnants (e.g., white dwarfs, neutron stars), atomic gas, molecular clouds, plasma (ionized gas), dust, Bok globules, black holes, habitable planets, biomass, intelligent life, very small rocks… these are all very different numbers for the same galaxy, because galaxies contain lots of different things. Two things that many scientists have settled on as Very Important are a galaxy’s stellar mass and its dark matter halo mass.
The mass of a galaxy’s dark matter halo is not well known. Most measurements provide only lower limits, as tracers fade out before any clear end is reached. Consequently, the “total” mass is a rather notional quantity. So we’ve adopted as a convention the mass M200 contained within an over-density of 200 times the critical density of the universe. This is a choice motivated by an ex-theory that would take an entire post to explain unsatisfactorily, so do not question the convention: all choices are bad, so we stick with it.
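The convention itself is easy to compute: M200 is the mass inside the radius R200 within which the mean density is 200 times the critical density, rho_crit = 3H^2/(8 pi G). A minimal sketch, assuming H0 = 70 km/s/Mpc (the numbers, though not the convention, depend on that choice):

```python
import math

G = 4.301e-6       # gravitational constant in kpc (km/s)^2 / Msun
H0 = 70.0 / 1000   # Hubble constant in km/s/kpc (assuming H0 = 70 km/s/Mpc)

def rho_crit():
    """Critical density of the universe in Msun / kpc^3."""
    return 3 * H0**2 / (8 * math.pi * G)

def r200(m200):
    """Radius (kpc) enclosing a mean density of 200 * rho_crit for halo mass m200 (Msun)."""
    return (3 * m200 / (4 * math.pi * 200 * rho_crit())) ** (1 / 3)

def v200(m200):
    """Circular velocity (km/s) at r200."""
    return math.sqrt(G * m200 / r200(m200))

# A 1e12 Msun halo -- roughly the traditional Milky Way estimate -- comes out
# to R200 of about 200 kpc and V200 of about 145 km/s.
print(r200(1e12), v200(1e12))
```

Note how weak the dependence on mass is: R200 and V200 both scale only as M200^(1/3), which is part of why the "total" mass is so hard to pin down kinematically.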
One of the long-standing problems of the cold dark matter paradigm is that the galaxy luminosity function should be steep but is observed to be shallow. This sketch shows the basic issue. The number density of dark matter halos as a function of mass is expected to be a power law – one that is well specified once the cosmology is known and a convention for the mass is adopted. The obvious expectation is that the galaxy luminosity function should just be a downshifted version of the halo mass function: one galaxy per halo, with the stellar mass proportional to the halo mass. This was such an obvious assumption [being provision (i) of canonical galaxy formation in LCDM] that it was not seriously questioned for over a decade. (Minor point: a turn down at the high mass end could be attributed to gas cooling times: the universe didn’t have time to cool and assemble a galaxy above some threshold mass, but smaller things had plenty of time for gas to cool and form stars.)
The galaxy luminosity function does not look like a shifted version of the halo mass function. It has the wrong slope at the faint end. At no point is the size of the shift equal to what one would expect from the mass of available baryons. The proportionality factor m_d is too small; this is sometimes called the over-cooling problem, in that a lot more baryons should have cooled to form stars than apparently did so. So, aside from the shape and the normalization, it’s a great match.
We obsessed about this problem all through the ’90s. At one point, I thought I had solved it. Low surface brightness galaxies were under-represented in galaxy surveys. They weren’t missed entirely, but their masses could be systematically underestimated. This might matter a lot because the associated volume corrections are huge. A small systematic in mass would get magnified into a big one in density. Sadly, after a brief period of optimism, it became clear that this could not work to solve the entire problem, which persists.
Circa 2000, a local version of the problem became known as the missing satellites problem. This is a down-shifted version of the mismatch between the galaxy luminosity function and the halo mass function that pervades the entire universe: few small galaxies are observed where many are predicted. To give visual life to the numbers we’re talking about, here is an image of the dark matter in a simulation of a Milky Way size galaxy:
In contrast, real galaxies have rather fewer satellites than meet the eye:
By 2010, we’d thrown in the towel, and decided to just accept that this aspect of the universe was too complicated to predict. The story now is that feedback changes the shape of the luminosity function at both the faint and the bright ends. Exactly how depends on who you ask, but the predicted halo mass function is sacrosanct so there must be physical processes that make it so. (This is an example of the Frenk Principle in action.)
Lacking a predictive theory, theorists instead came up with a clever trick to relate galaxies to their dark matter halos. This has come to be known as abundance matching. We measure the number density of galaxies as a function of stellar mass. We know, from theory, what the number density of dark matter halos should be as a function of halo mass. Then we match them up: galaxies of a given density live in halos of the corresponding density, as illustrated by the horizontal gray lines in the right panel of the figure above.
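In outline, the trick is just a monotonic matching of cumulative number densities: the nth most common galaxy is assigned to the nth most common halo. Here is a toy sketch of the procedure; the functional forms for n_gal and n_halo below are illustrative stand-ins (a Schechter-like form and a power law), not fits to any real survey or simulation:

```python
import numpy as np

def n_gal(mstar):
    """Toy cumulative number density of galaxies with stellar mass > mstar."""
    return 1e-2 * (mstar / 1e10) ** -0.5 * np.exp(-mstar / 1e11)

def n_halo(mhalo):
    """Toy cumulative number density of halos with mass > mhalo."""
    return 5e-1 * (mhalo / 1e10) ** -0.9

def abundance_match(mstar):
    """Assign the halo mass whose cumulative number density equals
    that of galaxies above mstar (monotonic matching)."""
    grid = np.logspace(10, 15, 2000)          # candidate halo masses, Msun
    # n_halo decreases with mass, so reverse to get increasing xp for interp
    log_n = np.log(n_halo(grid))[::-1]
    log_m = np.log(grid)[::-1]
    return float(np.exp(np.interp(np.log(n_gal(mstar)), log_n, log_m)))

# galaxies of a given number density live in halos of the same number density
print(abundance_match(1e10))
```

The matching is monotonic by construction, so more luminous galaxies always land in more massive halos; all of the physics is hidden in the shapes of the two number density functions.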
There have now been a number of efforts to quantify this. Four examples are given in the figure below (see this paper for references), together with kinematic mass estimates.
The abundance matching relations have a peak around a halo mass of 10^12 M☉ and fall off to either side. This corresponds to the knee in the galaxy luminosity function. For whatever reason, halos of this mass seem to be most efficient at converting their available baryons into stars. The shape of these relations means that there is a non-linear relation between stellar mass and halo mass. At the low mass end, a big range in stellar mass is compressed into a small range in halo mass. The opposite happens at high mass, where the most massive galaxies are generally presumed to be the “central” galaxy of a cluster of galaxies. We assign the most massive halos to big galaxies understanding that they may be surrounded by many subhalos, each containing a cluster galaxy.
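The standard way to capture this shape is a double power law in the stellar-to-halo mass ratio, peaking near 10^12 M☉ and falling off in both directions. A sketch using a Moster-style parametrization; the parameter values below are of the flavor found in the abundance-matching literature but should be treated as illustrative here, not authoritative:

```python
def stellar_fraction(mhalo):
    """Stellar-to-halo mass ratio M*/Mh from a double power law
    (Moster-style parametrization; parameter values illustrative)."""
    N, m1, beta, gamma = 0.0351, 10**11.59, 1.376, 0.608
    x = mhalo / m1
    # the x**-beta term suppresses low masses, x**gamma suppresses high masses
    return 2 * N / (x**-beta + x**gamma)

def stellar_mass(mhalo):
    """Stellar mass (Msun) implied for a halo of mass mhalo (Msun)."""
    return mhalo * stellar_fraction(mhalo)

# efficiency peaks for ~1e12 Msun halos and is strongly suppressed on both sides
for mh in (1e10, 1e12, 1e14):
    print(f"Mh = {mh:.0e}: M*/Mh = {stellar_fraction(mh):.2e}")
```

The steep low-mass branch is exactly the compression described above: a big range in stellar mass maps onto a small range in halo mass.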
Around the same time, I made a similar plot, but using kinematic measurements to estimate halo masses. Both methods are fraught with potential systematics, but they seem to agree reasonably well – at least over the range illustrated above. It gets dodgy above and below that. The agreement is particularly good for lower mass galaxies. There seems to be a departure for the most massive individual galaxies, but why worry about that when the glass is 3/4 full?
Skip ahead a decade, and some people think we’ve solved the missing satellite problem. One key ingredient of that solution is that the Milky Way resides in a halo that is on the lower end of the mass range that has traditionally been estimated for it (1 to 2 x 10^12 M☉). This helps because the number of subhalos scales with mass: clusters are big halos with lots of galaxy-size halos; the Milky Way is a galaxy-sized halo with lots of smaller subhalos. Reality does not look like that, but having a lower mass means fewer subhalos, so that helps. It does not suffice. We must invoke feedback effects to make the relation between light and mass nonlinear. Then the lowest mass satellites may be too dim to detect: selection effects have to do a lot of work. It also helps to assume the distribution of satellites is isotropic, which looks to be true in the simulation, but not so much in reality where known dwarf satellites occupy a planar distribution. We also need to somehow fudge the too-big-to-fail problem, in which the more massive subhalos appear not to be occupied by luminous galaxies at all. Given all that, we can kinda sorta get in the right ballpark. Kinda, sorta, provided that we live in a galaxy whose halo mass is closer to 10^12 M☉ than to 2 x 10^12 M☉.
At an IAU meeting in Shanghai (in July 2019, before travel restrictions), the subject of the mass of the Milky Way was discussed at length. It being our home galaxy, there are many ways in which to constrain the mass, some of which take advantage of tracers that go out to greater distances than we can obtain elsewhere. Speaker after speaker used different methods to come to a similar conclusion, with the consensus hedging on the low side (roughly 1 – 1.5 x 10^12 M☉). A nice consequence would be that the missing satellite problem may no longer be a problem.
The study of galaxies in general and of the Milky Way in particular are different and largely distinct subfields: different data studied by different people with distinctive cultures. In the discussion at the end of the session, Pieter van Dokkum pointed out that from the perspective of other galaxies, the halo mass ought to follow from abundance matching, which for a galaxy like the Milky Way ought to be more like 3 x 10^12 M☉, considerably more than anyone had suggested, but hard to exclude because most of that mass could be at distances beyond the reach of the available tracers.
This was not well received.
The session was followed by a coffee break, and I happened to find myself standing in line next to Pieter. I was still processing his comment, and decided he was right – from a certain point of view. So we got to talking about it, and wound up making the plot below, which appears in a short research note. (For those who know the field, it might be assumed that Pieter and I hate each other. This is not true, but we do frequently disagree, so the fact that we do agree about this is itself worthy of note.)
The Milky Way and Andromeda are the 10^12 M☉ gorillas of the Local Group. There are many dozens of dwarf galaxies, but none of them are comparable in mass, even with the boost provided by the non-linear relation between mass and luminosity. To astronomical accuracy, in terms of mass, the Milky Way plus Andromeda are the Local Group. There are many distinct constraints, on each galaxy as an individual, and on the Local Group as a whole. Any way we slice it, all three entities lie well off the relation expected from abundance matching.
There are several ways one could take it from here. One might suppose that abundance matching is correct, and we have underestimated the mass with other measurements. This happens all the time with rotation curves, which typically do not extend far enough out into the halo to give a good constraint on the total mass. This is hard to maintain for the Local Group, where we have lots of tracers in the form of dwarf satellites, and there are constraints on the motions of galaxies on still larger scales. Moreover, a high mass would be tragic for the missing satellite problem.
One might instead imagine that there is some scatter in the abundance matching relation, and we just happen to live in a galaxy that has a somewhat low mass for its luminosity. This is almost reasonable for the Milky Way, as there is some overlap between kinematic mass estimates and the expectations of abundance matching. But the missing satellite problem bites again unless we are pretty far off the central value of the abundance matching relation. Other Milky Way-like galaxies ought to fall on the other end of the spectrum, with more mass and more satellites. A lot of work is going on to look for satellites around other spirals, which is hard work (see NGC 6946 above). There is certainly scatter in the number of satellites from system to system, but whether this is theoretically sensible or enough to explain our Milky Way is not yet apparent.
There is a tendency in the literature to invoke scatter when and where needed. Here, it is important to bear in mind that there is little scatter in the Tully-Fisher relation. This is a relation between stellar mass and rotation velocity, with the latter supposedly set by the halo mass. We can’t have it both ways. Lots of scatter in the stellar mass-halo mass relation ought to cause a corresponding amount of scatter in Tully-Fisher. This is not observed. It is a much stronger constraint than most people seem to appreciate, as even subtle effects are readily perceptible. Consequently, I think it unlikely that we can nuance the relation between halo mass and observed rotation speed to satisfy both relations without a lot of fine-tuning, which is usually a sign that something is wrong.
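The propagation is easy to demonstrate with a toy Monte Carlo: give the stellar mass-halo mass relation lognormal scatter, let the observed flat rotation speed track halo mass (V ~ Mh^(1/3), as for V200), and measure the resulting scatter about the Tully-Fisher relation. All scalings below are illustrative simplifications, not fits to data:

```python
import numpy as np

rng = np.random.default_rng(42)

def tf_scatter(sigma_dex, n=200_000):
    """Scatter (dex) in log rotation speed at fixed stellar mass, given
    sigma_dex of lognormal scatter in the stellar mass-halo mass relation."""
    log_mh = rng.uniform(11.0, 13.0, n)                         # halo masses, dex
    log_ms = 0.5 + log_mh + rng.normal(0.0, sigma_dex, n)       # toy SHMR + scatter
    log_v = log_mh / 3.0                                        # V ~ Mh^(1/3)
    # residual scatter about a linear fit of log V on log M*
    slope, intercept = np.polyfit(log_ms, log_v, 1)
    return float(np.std(log_v - (slope * log_ms + intercept)))

# more scatter in stellar mass-halo mass means more scatter in Tully-Fisher
print(tf_scatter(0.15), tf_scatter(0.45))
```

The observed Tully-Fisher scatter is tiny, so in this toy picture only a correspondingly small scatter in the stellar mass-halo mass relation is allowed: you can't invoke a broad relation in one plot and a tight one in the other.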
A lot of effort has been put into beating down the missing satellite problem around the Milky Way. Matters are worse for Andromeda. Kinematic halo mass estimates are typically in the same ballpark as the Milky Way. Some are a bit bigger, some are lower. Lower is a surprise, because the stellar mass of M31 is clearly bigger than that of the Milky Way, placing it above the turnover where the efficiency of star formation is maximized. In this regime, a little stellar mass goes a long way in terms of halo mass. Abundance matching predicts that a galaxy of Andromeda’s stellar mass should reside in a dark matter halo of at least 10^13 M☉. That’s quite a bit more than 1 or 2 x 10^12 M☉, even by astronomical standards. Put another way, according to abundance matching, the Local Group should have the Milky Way as its most massive occupant. Just the Milky Way. Not the Milky Way plus Andromeda. Despite this, the Local Group is not anomalous among similar groups.
Words matter. A lot boils down to what we consider to be “close enough” to call similar. I do not consider the Milky Way and Andromeda to be all that similar. They are both giant spirals, yes, but galaxies are all individuals. Being composed of hundreds of billions of stars, give or take, leaves a lot of room for differences. In this case, the Milky Way and Andromeda are easily distinguished in the Tully-Fisher plane. Andromeda is about twice the baryonic mass of the Milky Way. It also rotates faster. Overlapping error bars on these quantities would be one criterion for considering them similar, and they do not come close to meeting it. Even then, there could be other features that might be readily distinguished, but let’s say a rough equality in the Tully-Fisher plane would indicate stellar and halo masses that are “close enough” for our present discussion. They aren’t: to me, the Milky Way and M31 are clearly different galaxies.
I spent a fair amount of time reading the recent literature on satellite searches, and I was struck by the ubiquity with which people make the opposite assumption, treating the Milky Way and Andromeda as interchangeable galaxies of similar mass. Why would they do this? If one looks at the kinematic halo mass as the defining characteristic of a galaxy, they’re both close to 10^12 M☉, with overlapping error bars on M200. By that standard, it seems fair. Is it?
Luminosity is observable. Rotation speed is observable. There are arguments to be had about how to convert luminosity into stellar mass, and what rotation speed measure is “best.” These are sometimes big arguments, but they are tiny in scale compared to estimating notional quantities like the halo mass. The mass M200 is not an observable quantity. As such, we have no business using it as a defining characteristic of a galaxy. You know a galaxy when you see it. The same cannot be said of a dark matter halo. Literally.
If, for some theoretically motivated reason, we want to use halo mass as a standard then we need to at least use a consistent method to assess its value from directly observable quantities. The methods we use for the Milky Way and M31 are not applicable beyond the Local Group. Nowhere else in the universe do we have such an intimate picture of the kinematic mass from a wide array of independent methods with tracers extending to such large radii. There are other standards we could apply, like the Tully-Fisher relation. That we can do outside the Local Group, but by that standard we would not infer that M31 and the Milky Way are the same. Other observables we can fairly apply to other galaxies are their luminosities (stellar masses) and cosmic number densities (abundance matching). From that perspective, what we know from all the other galaxies in the universe is that the factor of ~2 difference in stellar mass between Andromeda and the Milky Way should be huge in terms of halo mass. If it were anywhere else in the universe, we wouldn’t treat these two galaxies as interchangeably equal. This is the essence of Pieter’s insight: abundance matching is all about the abundance of dark matter halos, so that would seem to be the appropriate metric by which to predict the expected number of satellites, not the kinematic halo mass that we can’t measure in the same way anywhere else in the universe.
That isn’t to say we don’t have some handle on kinematic halo masses; it’s just that most of that information comes from rotation curves that don’t typically extend as far as the tracers that we have in the Local Group. Some rotation curves are more extended than others, so one has to account for that variation. Typically, we can only put a lower limit on the halo mass, but if we assume a profile like NFW (the standard thing to do in LCDM), then we can sometimes exclude halos that are too massive.
Abundance matching has become important enough to LCDM that we included it as a prior in fitting dark matter halo models to rotation curves. For example:
NFW halos are self-similar: low mass halos look very much like high mass halos over the range that is constrained by data. Consequently, if you have some idea what the total mass of the halo should be, as abundance matching provides, and you impose that as a prior, the fits for most galaxies say “OK.” The data covering the visible galaxy have little power to constrain what is going on with the dark matter halo at much larger radii, so the fits literally fall into line when told to do so, as seen in Pengfei’s work.
That we can impose abundance matching as a prior does not necessarily mean the result is reasonable. The highest halo masses that abundance matching wants in the plot above are crazy talk from a kinematic perspective. I didn’t put too much stock in this, as the NFW halo itself, the go-to standard of LCDM, provides the worst description of the data among all the dozen or so halo models that we considered. Still, we did notice that even with abundance matching imposed as a prior, there are a lot more points above the line than below it at the high mass end (above the bend in the figure above). The rotation curves are sometimes pushing back against the imposed prior; they often don’t want such a high halo mass. This was explored in some detail by Posti et al., who found a similar effect.
I decided to turn the question around. Can we use abundance matching to predict the halo and hence the rotation curve of a massive galaxy? The largest spiral in the local universe, UGC 2885, has one of the most extended rotation curves known, meaning that it does provide some constraint on the halo mass. This galaxy has been known as an important case since Vera Rubin’s work in the ’70s. With a modern distance scale, its rotation curve extends out to 80 kpc. That’s over a quarter million light-years – a damn long way, even by the standards of galaxies. It also rotates remarkably fast, just shy of 300 km/s. It is big and massive.
(As an aside, Vera once offered a prize for anyone who found a disk that rotated faster than 300 km/s. Throughout her years of looking at hundreds of galaxies, UGC 2885 remained the record holder, with 300 seeming to be a threshold that spirals did not exceed. She told me that she did pay out, but on a technicality: someone showed her a gas disk around a supermassive black hole in Keplerian rotation that went up to 500 km/s at its peak. She lamented that she had been imprecise in her language, as that was nothing like what she meant, which was the flat rotation speed of a spiral galaxy.)
That aside aside, if we take abundance matching at face value, then the stellar mass of a galaxy predicts the mass of its dark matter halo. Using the most conservative (in that it returns the lowest halo mass) of the various abundance matching relations indicates that with a stellar mass of about 2 x 10^11 M☉, UGC 2885 should have a halo mass of 3 x 10^13 M☉. Combining this with a well-known relation between halo concentration and mass for NFW halos, we then know what the rotation curve should be. Doing this for UGC 2885 yields a tragic result:
The data do not allow for the predicted amount of dark matter. If we fit the rotation curve, we obtain a “mere” M200 = 5 x 10^12 M☉. Note that this means that UGC 2885 is basically the Milky Way and Andromeda added together in terms of both stellar mass and halo mass – if added to the M*-M200 plot above, it would land very close to the open circle representing the more massive halo estimate for the combination of MW+M31, and be just as discrepant from the abundance matching relations. We get the same result regardless of which direction we look at it from.
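The arithmetic behind both numbers can be sketched. For an NFW halo, V^2(r) = V200^2 * mu(c*x) / (x * mu(c)) with x = r/R200 and mu(y) = ln(1+y) - y/(1+y). The concentration-mass scaling used below is a rough illustrative stand-in for the relations in the literature, and H0 = 70 km/s/Mpc is assumed:

```python
import math

G = 4.301e-6      # gravitational constant in kpc (km/s)^2 / Msun
RHO_CRIT = 136.0  # critical density in Msun/kpc^3 for H0 = 70 km/s/Mpc (approx.)

def nfw_velocity(r, m200, c=None):
    """Circular velocity (km/s) at radius r (kpc) for an NFW halo of mass m200 (Msun).
    If no concentration is given, use a rough c-M200 scaling (illustrative only)."""
    if c is None:
        c = 10.0 * (m200 / 1e12) ** -0.1
    r200 = (3 * m200 / (4 * math.pi * 200 * RHO_CRIT)) ** (1 / 3)
    v200_sq = G * m200 / r200
    x = r / r200
    mu = lambda y: math.log(1 + y) - y / (1 + y)   # NFW enclosed-mass shape
    return math.sqrt(v200_sq * mu(c * x) / (x * mu(c)))

# Abundance-matching halo (3e13 Msun) vs the halo actually fit to the data
# (5e12 Msun), evaluated near the last measured point of the rotation curve:
print(nfw_velocity(80, 3e13), nfw_velocity(80, 5e12))
```

In this sketch the abundance-matching halo predicts well over 400 km/s at 80 kpc, where UGC 2885 is observed to rotate at just under 300 km/s; the 5 x 10^12 M☉ halo lands in the right ballpark.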
Objectively, 5 x 10^12 M☉ is a huge dark matter halo for a single galaxy. It’s just not the yet-more-massive halo that is predicted by abundance matching. In this context, UGC 2885 apparently has a serious missing satellites problem, as it does not appear to be swimming in a sea of satellite galaxies the way we’d expect for the central galaxy of such a high-mass halo.
It is tempting to write this off as a curious anecdote. Another outlier. Sure, that’s always possible, but this is more than a bit ridiculous. Anyone who wants to go this route I refer to Snoop Dogg.
I spent much of my early career obsessed with selection effects. These preclude us from seeing low surface brightness galaxies as readily as brighter ones. However, it isn’t binary – a galaxy has to be extraordinarily low surface brightness before it becomes effectively invisible. The selection effect is a bias – and a very strong one – but not an absolute screen that prevents us from finding low surface brightness galaxies. That makes it very hard to sustain the popular notion that there are lots of subhalos that simply contain ultradiffuse galaxies that cannot currently be seen. I’ve been down this road many times as an optimist in favor of this interpretation. It hasn’t worked out. Selection effects are huge, but still nowhere near big enough to overcome the required deficit.
Having the satellite galaxies that inhabit subhalos be low in surface brightness is a necessary but not sufficient criterion. It is also necessary to have a highly non-linear stellar mass-halo mass relation at low mass. In effect, luminosity and halo mass become decoupled: satellite galaxies spanning a vast range in luminosity must live in dark matter halos that cover only a tiny range. This means that it should not be possible to predict stellar motions in these galaxies from their luminosity. The relation between mass and light has just become too weak and messy.
And yet, we can do exactly that. Over and over again. This simply should not be possible in LCDM.
37 thoughts on “Galaxy Stellar and Halo Masses: tension between abundance matching and kinematics”
This is a choice motivated by an ex-theory that would take an entire post to explain unsatisfactorily, so do not question the convention: all choices are bad, so we stick with it.
And I was expecting the Spanish Inquisition!
When I saw that you and Pieter van Dokkum had co-authored a research note I was very surprised. Very cool that you two can work together despite your very different points of view. I’m impressed! One question: on the chart with the UGC 2885 rotation curves, I would expect the MOND prediction for the rotation curve to be pretty close to the observed data, is that correct?
Yes. See Fig. 5 of https://arxiv.org/abs/2004.14402.
One can, of course, improve the match by fitting (https://arxiv.org/abs/1803.00022), but if you just take our best guess at M*/L, the match is already remarkably good.
Wow, that was intense and very cool. I need to read it again before commenting on the post. However, I did want to set the stage that in a point charge universe, you must realise that there is a lot of mass shielding. In other words, not all energy appears as mass in particular situations. For example, the 3/3 energy cores of generation I fermions contain and shield the energy of generation 2, which likewise shields generation 3. The precessing dipole current is pretty much going to wipe out internal fields like a Faraday cage, right? However, for the purposes of this discussion, the large scale mass shielding inside a SMBH Planck point charge core is going to take the cake. Point charges have an immutable volume defined by r=Lp/tau. Couple that to FCC or HCP sphere packing. Reject the angular momentum of joining point charges, thus increasing the spin and frame dragging of the surrounding soup. Anyway, the point is that if I did the math right, and we separate mass into mass-total and mass-apparent then mass-total = radius * mass-apparent/3. In other words, you are only seeing a fraction of the total mass that was present before it wound up in a Planck core. Now presuming the Planck core grows and grows like this until it erupts out the poles, the galaxy disc objects will see a constant drain of mass from the core of the galaxy. Then when the Planck point charge core and the SMBH spin and frame dragging all reach the point where the Planck point charge core can breach at the poles, then you get enormously powerful jets. How long this goes on I don’t know but I think it is a very long time. During this time the Planck core will be shrinking and the apparent mass of the SMBH will be shrinking as well. I don’t know if this has any linkage to the post, but I thought I would mention it as a potential factor to be considered.
And I thought that QAnon was weird.
Oh Phillip, you have the skills and opportunity to trailblaze on the paths I am illuminating if only you would relax enough to realize good science will transform and still be decent.
Mark, you write enough here that you should write your own blog. While I am very lenient in encouraging broad discussion, I can see why the internet makes it necessary to moderate content. Don’t make me do that by continuing to post these long disquisitions. If you want to write these things elsewhere, then you can make a short comment here with a link. I’m not paying to host this website on your behalf.
Ok. Thanks Stacy for being patient this long.
Please realize that the point charge model interfaces directly with GR and QM in multiple ways. The simplest construct, the electrino/positrino dipole 1/1 is an amazing gadget: stretchy ruler, variable clock, energy accumulator in h quanta due to a natural control circuit deriving from general relativity. It’s really exactly what everyone is seeking without knowing it. Even better, triples of these dipoles at order of magnitude(s) different energy form fermion cores, which are amazing Emmy Noether level conservation gyroscopes.
I’m sorry, in case I buried the lede with my first comment, but I suspect dark matter is synonymous with the spacetime aether detritus, mostly old tired neutrinos and photons thirsty for energy. This aether is the transducer of gravity after all, so it must gain energy around dense matter which excites it. It’s not complicated. Neutrinos and photons must eventually redshift into a state where they lose velocity. This is an open area of research. Then of course, dark energy is correlated with my previous comment, the emission of Planck plasma from an SMBH core and the inflationary jets and their termini and the new standard matter and spacetime aether that is generated.
Here is another reason I recommend everyone take a reset and start over with immutable point charges at +/- e/6. One, it is a total greenfield. You have one bright hobbyist who is on to something and can tie his work to noamwhy.com with even more math and there are no professional scientists that have revealed they are engaged. It is the most obvious of all avenues since we have sort of realized by now that somehow the idea of immutable point charges was missed. It’s a great ideal. Try it. Don’t take my word for it. It’s easy. If I did it, and I’m sort of just a lucky dummy, then so can you!
As I have said before, and Philip Helbig has said on another thread, if you wish anyone to take you seriously you have to offer a real testable prediction of your theory. The spin magnetic dipole moments of the electron, muon and tauon are an obvious example of something your theory should be able to predict. α (=(g-2)/2) for the electron is 0.00115965218073 (28) [the number in parentheses is the experimental error in units of the last digit]; the measured α for the muon is 0.0011659209 (6), while for the tauon there is no good measurement, only a range for α that includes zero.
Have cosmologists done much related work to improve the observational constraints how angular momentum of notional cold dark matter is distributed in and around both spiral and elliptical galaxies?
How the angular momentum of the notional cold dark matter halo’s formed is a mystery to me. In a proto-galaxy in the early universe (without any initial angular momentum) the initial linear momentum due to gravitational in-fall to the centre of the proto-galaxy must be lost as angular momentum is gained (maybe I am completely misguided in this thought in which case I apologise for the following observation). Baryonic matter loses linear momentum most effectively in collisions, which is not a mechanism open to cold dark matter. Obviously if cold dark matter in the early universe loses linear momentum (cools) by gravitational interaction with baryonic matter, it would act in general like an invisible acceleration force on that baryonic matter toward the centre of the proto-galaxy. Maybe that would have observable consequences in the e-m spectrum emissions of early galaxies.
Physicists mainly concentrate on tidal torque (angular momentum) interaction with baryonic matter. I don’t think they understand the observable consequences of this either – I know I don’t. For example, has anyone found a possible remnant of such notional historic tidal interactions (assuming one believes in the conservation of angular momentum) by comparing galaxies with and without dark matter (i.e. galaxies with obey Newton’s Law and those that don’t).
This is a good question that would require a longer answer than I can provide here. Basically, tidal torques at turnaround between expansion and recollapse are supposed to impart a little angular momentum to halos. Baryons then condense within these halos to make galaxies that are smaller by a factor of ~10 or so (which they must dissipate energy to do, a thing dark matter cannot do). If they conserve angular momentum as they do this, then you get something like the observed size distribution of galaxies. Some galaxies had a little more spin to start, so end up a bit more extended at a given mass.
This simple picture only works at a hand-waving level. In simulations, baryons and dark matter are free to exchange angular momentum, and do so. This tends to result in baryons giving up too much AM to DM as they collapse, something once called the angular momentum catastrophe. This is one reason we invoke feedback, to try to prevent that. Reasonable amounts of feedback didn’t help, so then we invoked “feedback on steroids.” It’s turtles all the way down from there.
The link to the McGaugh & van Dokkum paper does not go to a paper with these authors. While I enjoyed the paper it does link to and learned some important lessons from it, I believe this one was meant instead:
It was quite interesting to find out that the Local Group timing argument mass is lower than expected from abundance matching. But I suspect astronomers will not take the problem seriously. This is because they tend to focus on the big picture, like cosmology. More serious for them are issues like the Hubble tension and El Gordo, which the previous post discusses. Of course, the Local Group is important as well, but then quantitative statements are necessary regarding the probability that the nearest large external galaxy is like M31. So if a galaxy is picked at random, would it look like that? And if not, then one should try to propose a solution if possible.
That was the wrong link for that caption but the right one for a different caption. I think I’ve fixed it now.
Science comment: as all too often happens, there does not seem to be a clear consensus on what the scatter “should” be, hence it is not possible to make probabilistic statements. Indeed, the expectation for the amount of scatter appears to be evolving in real time to follow the observations, so any probability estimated from that exercise will have been preadjusted to not sound so bad.
Sociology comment: define “astronomers”. I think astronomers as defined here – https://tritonstation.com/2019/06/17/two-fields-divided-by-a-common-interest/ – take the problem seriously. The attitude you describe is more common to cosmologists, particularly those with a particle physics background rather than one in astrophysics. I am not concerned with their lack of concern that stems from the inadequate education they received. Abundance matching has become part of the “hard core” of the LCDM paradigm. If it breaks, the whole thing breaks, whether they understand it or not.
That said, I’m sure the solution is “enough scatter to make this seem not ridiculously improbable” while ignoring tighter but intimately connected relations like Tully-Fisher.
Who decides what is part of the hard core and what is not?
ΛCDM means a universe with a positive cosmological constant and cold dark matter.
Suppose there were a consensus that abundance matching were part of that paradigm. How would showing that it doesn’t work break the whole ΛCDM paradigm as defined above?
This is a good example of what I mean when I say that some MOND arguments cause (rightly or wrongly) the whole idea of MOND not to be taken seriously. (Yes, there is a weird (a)symmetry here.) Choose a detail which I doubt that everyone who believes in CDM has even heard of, much less worked with, then claim that the whole idea is ruled out because this one thing doesn’t work.
That is really as bad as people saying that since MOND cannot explain the CMB power spectrum then the whole idea of MOND is bunk.
That’s a good question. There is no star chamber of illuminati that defines what LCDM is. That’s part of the problem – there is no widely agreed set of testable predictions. Different people mean different things by this term.
So, a better answer is that the hard core is defined by people who understand the theory. That apparently leaves out a lot of active practitioners.
So let’s consider definitions: you offer the lowest common denominator definition, that LCDM is a universe made of Lambda+CDM. That’s true, but it does not suffice to define the hard core. Though it goes unmentioned in this most minimal of definitions, the existence of baryons is part of the hard core. And not just that they exist, but that their density is known to high precision. Perhaps we should call it LCDMBBNCMBHBB, as the density of baryons is known from both BBN and the CMB. Both of these are assuredly part of the hard core of the hot big bang cosmology, of which LCDM is a specific realization. You can’t have LCDM without the hot big bang, and pretend like that isn’t an essential part of its definition.
Abundance matching follows from two things: the predicted number density of dark matter halos, and the observed number density of galaxies. Early on, we could imagine that we got the prediction wrong or the observation wrong. So people worked on both – rather a lot. For years merging into decades. It became clear a long time ago that the predicted halo mass function has to be steep in any universe where structure forms via the hierarchical collapse of cold dark matter – this is very much part of the hard core. I am not aware of anyone who seriously disputes this. Same for the galaxy luminosity function. We’ve checked. There is no serious dispute that it has a non-compliant shape. Theorists have mostly accepted this, and the vast majority of them long ago shifted to trying to explain the difference with feedback.
All this leads us inevitably and irrevocably to abundance matching: one has to reconcile the predicted shape of the halo mass function with the observed shape of the galaxy luminosity function. Has to. It is inextricably connected to the hard core of structure formation, which is widely proclaimed as being the most successful aspect of LCDM.
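To make the logic concrete, the matching itself is just an exercise in equating cumulative number densities: the halo of mass M_h hosts the galaxy of stellar mass M* such that n_halo(>M_h) = n_gal(>M*). A minimal numerical sketch, with a toy power-law halo mass function and a toy Schechter stellar mass function whose slopes and normalizations are invented purely for illustration (not fits to real data):

```python
import numpy as np

def n_cum(logm, dn_dlogm):
    """Cumulative number density n(>M), integrating dn/dlogM down from the top."""
    dlm = np.diff(logm)
    bins = 0.5 * (dn_dlogm[1:] + dn_dlogm[:-1]) * dlm
    return np.concatenate([np.cumsum(bins[::-1])[::-1], [0.0]])

# Toy halo mass function: steep pure power law in dn/dlogM
# (slope -0.9 and normalization are illustrative stand-ins)
logMh = np.linspace(10.0, 15.0, 600)
dn_h = 1e-2 * 10.0 ** (-0.9 * (logMh - 12.0))

# Toy Schechter stellar mass function: shallow faint end, exponential cutoff
# (knee at logM* = 10.7, faint-end slope alpha = -1.4, also illustrative)
logMs = np.linspace(6.0, 12.5, 600)
x = 10.0 ** (logMs - 10.7)
dn_s = np.log(10) * 3e-3 * x ** (1.0 - 1.4) * np.exp(-x)

nh, ns = n_cum(logMh, dn_h), n_cum(logMs, dn_s)

# Abundance matching: assign M* to each Mh at equal cumulative number density
# (np.interp needs increasing x, and n decreases with mass, hence the reversals)
logMs_of_h = np.interp(nh, ns[::-1], logMs[::-1])
ratio = logMs_of_h - logMh          # log10(M*/Mh)

peak = logMh[np.argmax(ratio)]
print(f"stellar-to-halo mass ratio peaks near logMh = {peak:.1f}")
```

The qualitative outcome – a stellar-to-halo mass ratio that peaks near the knee of the galaxy mass function and falls off toward both lower and higher halo masses – is exactly the shape mismatch described above: the steep halo mass function has to be bent onto the shallow luminosity function.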
That said, I think what Indranil said above is correct: there are lots of scientists who don’t care. That just means they are not qualified to define the hard core. Apparently it has become possible to spend an entire career within this paradigm without understanding it.
Let me also call out some specific phrases:
(1) “Choose a detail which I doubt that everyone who believes in CDM has even heard of, much less worked with, then claim that the whole idea is ruled out because this one thing doesn’t work.”
I hope I have made abundantly clear that abundance matching is not some obscure detail. It might seem that way from outside the field. From inside the field, everyone has either heard of it or is grossly negligent. I expect that there are plenty of particle physicists who fall into the latter category because they seem obsessed mostly with what particle the dark matter could be rather than with testing the theory. Such ignorance is not a persuasive argument that abundance matching hasn’t become part of the hard core.
(2) “This is a good example of what I mean when I say that some MOND arguments cause (rightly or wrongly) the whole idea of MOND not to be taken seriously.”
This is not a MOND argument. There is nothing about MOND in this post. This is me, as a scientist (not a “MOND person” or a “dark matter person”), expressing concern over an essential aspect of a theory, LCDM, that I understand extraordinarily well. (This post is a longer, more historical version of the paper I wrote with van Dokkum: Pieter isn’t exactly a “MOND person” himself.) If an argument about LCDM in the context of LCDM without reference to MOND somehow causes MOND not to be taken seriously, then the practice of science has a deeper problem than just “LCDM or MOND.”
So no, it is not (3) “as bad as people saying that since MOND cannot explain the CMB power spectrum then the whole idea of MOND is bunk.” It is nothing like that at all. People who say that don’t understand MOND and don’t want to be bothered to do so. What I have said here I say from a rather comprehensive understanding of LCDM. If that exceeds your bounds on what LCDM means, that just brings us back to what I said in the first place: different people use the same word to mean different things.
I would thank you to refrain from asserting false equivalencies based on the presumption that your definition of LCDM is somehow more correct than mine.
“If an argument about LCDM in the context of LCDM without reference to MOND somehow causes MOND not to be taken seriously, then the practice of science has a deeper problem than just “LCDM or MOND.””
That’s absolutely right, and that deeper problem is “LCDM or REALITY”.
Am I mistaken, Dr McGaugh, or are you coming around to the view that the Big Bang paradigm has generally failed, not just the dark matter piece of LCDM?
A minor point and a major point. First, I concede that there was no mention of MOND in the text which I replied to. However, all comments have a context, and the context here is the blog of someone who has a lot of good stuff to say about MOND.
Second, I can’t speak for Stacy, but I would be very surprised if he doubts the Big Bang paradigm in general, at least if the Big Bang is defined via the usual sensible definition.
But the evidence for the Big Bang is quite independent from, and has been around longer than, the idea of ΛCDM, so why do you think that doubting the former should lead one to doubt the latter?
1) The objection to your introduction of MOND into the discussion seems to be that it was a non sequitur in service of a false equivalence. Your comment evades the second part of that objection.
2) What, in your opinion, constitutes “the usual sensible definition” of the Big Bang paradigm?
3) Both Λ and CDM are required to reconcile the Big Bang model with observations. Since both are unobserved phenomena, any further doubt cast upon their existence seriously undermines the BB.
1) I don’t follow you.
2) The Universe is expanding from a hot and dense state about 14 billion years ago, at which time the currently observable Universe was within a volume on the order of the Solar System.
3) Define “observed”. There is observational evidence for Λ. While it is unclear whether MOND or dark matter is responsible for things like flat rotation curves, even many MOND enthusiasts see no alternative to dark matter on cosmological scales. There is no reason one can’t have both.
Good answer to a good question. My guess, as an outsider, is that the prediction is wrong. But is the only alternative that the whole paradigm is wrong? Maybe someone just hasn’t figured everything out yet. You might criticize that position, but it is also similar to the MOND position which states that MOND is an effective theory and we don’t yet know how to apply it to cosmology and so on but, hey, give us some time. Another false dichotomy is that if ΛCDM is wrong then MOND must be true. Maybe (some extension of) MOND is true, maybe someone missed something essential which could make ΛCDM work, maybe some other theory will explain why both work where they do and don’t work where they don’t. We just don’t know.
I once ran across someone defining ΛCDM as including an exactly flat universe. Yes, observations indicate that the universe is nearly flat, and some (rightly or wrongly) assume that it is exactly flat rather than extremely close to flat, either for practical or for esthetic reasons. But people are writing papers suggesting that some things could make more sense if there is small but positive curvature. Then the claim was that something didn’t work in exactly flat ΛCDM so the whole paradigm must be wrong. That’s cherry-picking: choose some variant which is easy to rule out, not even a consensus or part of the (not well defined) hard core, rule that out, and claim that the entire paradigm is wrong (or, like some commentators here, conclude from that that the universe does not expand).
I think you are making a logical fallacy:
An area of LCDM that few have worked on = an area where the theoretical uncertainty is large.
It may well be that something follows quite uniquely from the idea of GR working everywhere, which is really all you need to get to LCDM. But it may be that this gives the ‘wrong’ answer, so people worried about their careers more than the science don’t really work on it.
The example you use to support your claim also fails to do so. If MOND could be shown to be incompatible with the CMB (which is different to saying it’s an area where the application of MOND is unclear), then MOND would of course be ruled out. Certainly it would be wrong to say that since we don’t know for sure how to apply MOND there and none of the handful of suggestions so far work, MOND is ruled out. But a careful argument that MOND can never explain the CMB would rule out MOND. As it is, the situation is hypothetical because MOND can fit the CMB.
Returning to what Stacy was discussing, abundance matching is clearly an inevitable aspect of LCDM, and also one that most workers in that area have heard of and many have worked with. One hopes that with a proper treatment of baryonic physics, the stellar-halo mass relation will come out. But in the meantime, we can assume it does, and test the consequences. This is like saying we don’t know how the planets formed, but if we assume they formed where they are/moved there somehow, we can calculate their orbital periods. A contradiction would then rule out the model. It wouldn’t help to say that maybe planet formation worked differently, because either the planets couldn’t be where they are, or if they could, then you can predict the velocity – and it is wrong. Either way, the theory is ruled out. Similarly, LCDM could be ruled out by a failure of abundance matching.
To demonstrate this, I would recommend looking at a simulation which claims to get the radial acceleration relation, like say this one:
The scatter in the abundance matching relation could be used to estimate how problematic the Local Group is. Then other simulations could be used to show e.g. that they have enough scatter to get the Local Group right, but not the radial acceleration relation. Eventually, a quantitative case could be built up that one can’t get low scatter in one while having enough scatter in the other to explain the Local Group. In fact, the latest Illustris TNG50 results give a much better idea of what is expected in LCDM, and our group in Bonn can help download the data into a more user-friendly and application-specific form for sharing with other interested researchers attempting to test LCDM in some non-trivial way.
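The kind of exercise meant here can be sketched in a few lines: given the halo mass abundance matching predicts for a Local-Group-like system and an assumed lognormal scatter about the relation, ask what fraction of mock realizations fall at or below the observed timing-argument mass. Every number below is a placeholder chosen for illustration, not an actual measurement:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical inputs: abundance matching predicts logM200 = 12.5 for the
# system, the timing argument gives 12.2, and the assumed lognormal scatter
# in the stellar-halo mass relation is 0.15 dex (all invented for this sketch).
logM_pred, logM_obs, sigma_dex = 12.5, 12.2, 0.15

# Monte Carlo realizations of the halo mass under the assumed scatter
draws = rng.normal(logM_pred, sigma_dex, size=1_000_000)
frac_low = np.mean(draws <= logM_obs)   # a 2-sigma offset, so roughly 2%

print(f"fraction of mock halos at or below the observed mass: {frac_low:.3f}")
```

With scatter taken from an actual simulation suite instead of an assumed 0.15 dex, the same few lines turn a qualitative “seems unlikely” into a number that can be compared against the scatter the radial acceleration relation permits.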
It is important to be quantitative about the results, because the current situation is like making ten assumptions which seem reasonable but in combination do not work. One or more of them must be wrong. Which means something unlikely must happen. It is no good saying ‘this assumption seems legitimate’, one can say that about all of them. But some are maybe 90% secure, and others 99% secure. An obvious example is that the galaxy data is most easily interpreted with MOND, larger scale data are interpreted with LCDM, but both can’t be right at the same time. What is more important is to ask if galaxies might be explained in LCDM, even if this is not very likely. And the same for MOND on larger scales.
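The arithmetic behind that point is worth spelling out: ten assumptions that each look safe on their own can easily be collectively shaky. A toy calculation, treating the assumptions as independent and using the invented security levels mentioned above:

```python
# Five assumptions judged 90% secure and five judged 99% secure
p_each = [0.90] * 5 + [0.99] * 5

# Joint probability that all ten hold, assuming independence
p_all = 1.0
for p in p_each:
    p_all *= p

print(f"probability that all ten assumptions hold: {p_all:.2f}")   # about 0.56
```

So even this generous bookkeeping says there is a roughly even chance that at least one of the ten fails, which is why it matters to pin down which ones are 90% and which are 99%.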
Personally, I am not so keen on tests which might be really sensitive to the baryonic physics. I also agree about some researchers using dodgy arguments and hyperbolic language in support of MOND, thus making it look like a crackpot theory. This is definitely not good, especially as it will look like an obvious conflict of interest where the researcher responsible needs MOND to be correct to advance their career. I really dislike this kind of behaviour, which does not really help anyone.
If all were as balanced and calm and collected as you, we could make more progress. 🙂
As for MOND being compatible with the CMB, I’m aware of only one paper, which Stacy mentioned here, which makes such a claim. The last time I checked, it hadn’t been accepted by a reputable journal. Life is short, and really trying to understand something outside of one’s own field is a huge effort, so one filter is to look at papers only after acceptance by a reputable journal. Could it be right but rejected, for the wrong reasons? Yes. But if I assume that about all papers which, to me, are not obviously wrong but still not published in a reputable journal, then I could look at at most a tiny fraction in my lifetime.
My hope is that something like wide binaries will give a clear signal one way or the other. Yes, there are issues such as knowing whether a pair is really a binary or not, but that would contaminate the sample whether or not MOND is right so, at least statistically, one should be able to tell, given enough data, whether wide binaries follow the MOND prediction or not. I think that that would be much more convincing to MOND sceptics than stuff like galaxies which, in ΛCDM if not in MOND, are messy.
There is also the Angus & Diaferio (2011) work with light sterile neutrinos at a mass of 11 eV/c^2, which we discuss further in MNRAS, 499, 2845:
As for the Skordis work you mentioned, it’s still in progress, but has not been rejected as far as I know. It should be under peer review. I agree it is a bit recent and indeed has not gone through all the checks, but realistically it’s just algebra and will achieve what it sets out to do. But I am not convinced it will work in galaxy clusters, and so favour the Angus (2009) model with light sterile neutrinos discussed in the above-mentioned open-access publication.
I definitely agree about the wide binaries. This is what my next postdoc should focus on. I have a detailed plan for how it might be implemented, and have been in touch with researchers that are experienced with handling the relevant Gaia data. The wide binary test is by far the most promising way forwards that I am aware of.
I am agnostic as to what the ultimate resolution will be. Of course, a sterile neutrino is a WIMP, so some MOND apologists would reject it for just that reason (it has not been seen in the lab, hence it cannot exist; although that doesn’t apply to MOND effects, because of the external field effect: it’s complicated!). Interestingly, Merritt goes on and on about how stupid the idea of dark matter as WIMPs is, then invokes a sterile neutrino to help MOND explain the CMB.
Logically, MOND could have something to it, and there could also be dark matter (but maybe less and/or not everywhere).
If the work in progress is really just algebra, then I’m sure that it will be accepted, both by the journal and by the community. On what timescale is a different question. I don’t know what other caveats or assumptions it holds. Something to look at after the referees have. 🙂
I’m surprised that the wide-binary test isn’t discussed more. Maybe good data are too recent.
If corona dies down enough, the next Texas Symposium will be in Prague in December 2021. You can already (pre-)register. Try to give a talk on wide binaries there.
It is so much easier with immutable point charges, the electrino and positrino. You can go through the PDG and back-propagate to combinations of |e/6| or let it find the magnitude itself. Then you will be able to reconstruct the true reactions, below the scale of today’s observability. jmarkmorris.com
Maybe I should really be trying to converse with chemistry academics. 🤣🤣🤣
Trolls are not for keeping.
I do not doubt the basics of the hot big bang picture, which to me means an expanding universe that went through an early hot phase that resulted in primordial nucleosynthesis (BBN) and produced the relic background radiation (CMB). There is nothing about MOND that contradicts that, but neither is there a broader, relativistic theory that explains it (with the recent possible exception of the theory introduced by Skordis & Zlosnik). Conventionally, I worry that the fact that we are obliged to invoke both dark matter and dark energy is an indication that we are pounding the square peg into the round hole: *obviously* GR is a great approximation to any deeper theory (if there is one), so what would its failure look like? A small departure from FRW that forces us into a realm of parameter space that a lifetime ago (but not two) we would have rejected out of hand as obviously falsifying the whole picture. (Jim Peebles pointed out to me recently that Lambda was as despised then as MOND is now.) So I agree that – within the framework of GR – we must have both Lambda and CDM. I do question the framework, and don’t see why we shouldn’t hope for a better explanation of both.
I personally do *not* like a cosmology with both dark matter and MOND. That smells too Tychonic to me: an attempt to have the best of both worlds that sounds nice and is unlikely to pan out. This is entirely a philosophical prejudice on my part. I do not see the benefit of swapping in sterile neutrinos for WIMPs. That’s just replacing one thing we don’t know to exist with another. I suppose there are at least hints from the laboratory that sterile neutrinos might exist, but not necessarily of the right mass or number density. I’d hold out more hope (still little) that the ordinary neutrino comes in at an interesting mass (i.e., something more than the structure formation limit < 0.12 eV).
Bob Sanders and I independently worked many of these things out circa 20 years ago. We had more predictive success than mainstream cosmologists (see http://astroweb.case.edu/ssm/mond/LSSinMOND.html), with the predictions of early structure formation still being realized. I stopped bothering to work on it because there's not a lot new to say. I can't fix everything for everyone all the time, nor generalize Einstein's equations all by myself.
Conventionally, I worry that the fact that we are obliged to invoke both dark matter and dark energy is an indication that we are pounding the square peg into the round hole
I’ve never understood those objections. Thinking that dark matter is somehow absurd is tantamount to the very parochial claim that most of the Universe must be made out of the same stuff that we are. There are even MOND supporters who say that dark matter falsifies GR. GR tells how mass-energy affects spacetime curvature and vice versa; it says nothing about what those sources are. There is no it-must-be-baryons in it. (Think about what sort of elementary particles were known at the time GR was developed.) In other contexts, if Nature has a degree of freedom, then it makes use of it. So, those who claim that Λ must be zero need to show the symmetry, conservation law, or whatever which forces this, not the other way around. So the default should be that it is not zero, even with no observational evidence. But now we even have that evidence. What Einstein perhaps said in his later years, who liked Λ in the 1970s, and so on: these matter for the history of science, but not for science itself. The Universe is what it is, whatever some people think about it.
I think that the killer argument is the following: If GR breaks down, or there is something else such as backreaction due to large-scale inhomogeneities or whatever which is not part of the standard model, then it would be really strange that deriving the cosmological parameters just happens to work at all. Theoretically, one can have an arbitrary magnitude-redshift relation, say. Those which are possible in the standard model are essentially a set of measure zero. But that is what we observe. Not only that, but the values derived for things like the age of the Universe, the value of Omega, and so on agree among independent tests (which is why the standard model of cosmology, in contrast to the standard model of particle physics, is also called the concordance model). Another way of looking at that: it is easy to think of things which could screw up any given cosmological test. But you need to explain how all of them screw up in a coordinated fashion in order to result in the same wrong value.
An obvious typo which I can’t correct.
I already answered these, so I don’t see the point of doing it again, but here we are. Yes, a lot of things have to work out right – certainly not all of them do, but a lot do, so let’s just think about those: measures of the geometry and expansion history all have to converge on a very strange place in parameter space called LCDM. (Though even the acceleration in a(z) is extremely subtle; it isn’t that far off the coasting case.) If eventually we come to understand some underlying theory that, as I said before, diverges subtly from pure GR, as it must, but we insist on fitting it with GR, then we won’t, in retrospect, be surprised that we had to invoke a deus ex machina to make it fit. That’s all Lambda and CDM are. If they are real entities, so be it – in particular the dark matter, which cannot be baryonic. Sure, we can imagine other things, but we have zero laboratory evidence that there is anything outside the Standard Model of Particle Physics. Particle physicists love to speculate on things that might be, but have so far had no success. That doesn’t mean they won’t, eventually, but it is also an incredibly weak foundation on which to build a universe.
Please stop stating the obvious at me like I haven’t already written about it all somewhere on this blog.
I disagree with the idea that all the cosmological tests work in LCDM. The Hubble tension is an obvious case in point. And there are so many tensions on smaller scales, like individual galaxies.
As for why LCDM works in some cases, obviously a wrong theory with many adjustable parameters would be expected to match some observables. Especially in the nuHDM framework I advocated in the post on the KBC void, where the expansion history and behaviour at redshift > 50 would be basically the same as in LCDM. So I don’t agree about LCDM matching many cosmological observables being evidence in favour of LCDM. One could also argue that when the geocentric model was popular, the fact that it could fit all the planets except Mars (due to its higher eccentricity) meant it was probably right and you could just ignore Mars, because how could a completely different theory designed to fit Mars better avoid mucking up all the successes of the geocentric model? But nature will find a way.
The larger issue is of course that agreement between theory and data does not prove a theory correct. I can sit here and argue that since galaxies work so well in MOND, I can’t imagine any other theory. But this is not a helpful argument. More helpful is to agree with the mainstream view that the agreement of MOND may be a coincidence, and come up with more discriminative tests like the wide binary test (MNRAS, 480, 2660). In the meantime, anomalies should not be ignored.
I was with you up until “agree with the mainstream view that the agreement of MOND may be a coincidence.” That may indeed be mainstream, but it is also deeply, disturbingly wrong. That’s not how science works. I’m open to considering non-MOND explanations for MONDian phenomenology, but pretending like it is a coincidence that doesn’t indicate something deeper – even if it is something about the nature of dark matter – is profoundly unscientific.
Perhaps that is not what you really meant, so let’s just leave it at that.
Comments are closed.