In order to agree on an interpretation, we first have to agree on the facts. Even when we agree on the facts, the available set of facts may admit multiple interpretations. This was an obvious and widely accepted truth early in my career*. Since then, the field has decayed into a haphazardly conceived set of unquestionable absolutes that are based on a large but well-curated subset of facts, one that gratuitously ignores any facts that happen to be inconvenient.

Sadly, we seem to have entered a post-truth period in which facts are drowned out by propaganda. I went into science to get away from people who place faith before facts, and comfortable fictions ahead of uncomfortable truths. Unfortunately, a lot of those people seem to have followed me here. This manifests as people who quote what are essentially pro-dark matter talking points at me like I don’t understand LCDM, when all it really does is reveal that they are posers** who picked up on some common myths about the field without actually reading the relevant journal articles.

Indeed, a recent experience taught me a new psychology term: identity protective cognition. Identity protective cognition is the tendency for people in a group to selectively credit or dismiss evidence in patterns that reflect the beliefs that predominate in their group. When it comes to dark matter, the group happens to be a scientific one, but the psychology is the same: I’ve seen people twist themselves into logical knots to protect their belief in dark matter from being subject to critical examination. They do it without even recognizing that this is what they’re doing. I guess this is a human foible we cannot escape.

I’ve addressed these issues before, but here I’m going to start a series of posts on what I think some of the essential but underappreciated facts are. This is based on a talk that I gave at a conference on the philosophy of science in 2019, back when we had conferences, and published in Studies in History and Philosophy of Science. I paid the exorbitant open access fee (the journal changed its name – and publication policy – during the publication process), so you can read the whole thing all at once if you are eager. I’ve already written it to be accessible, so mostly I’m going to post it here in what I hope are digestible chunks, and may add further commentary if it seems appropriate.

Cosmic context

Cosmology is the science of the origin and evolution of the universe: the biggest of big pictures. The modern picture of the hot big bang is underpinned by three empirical pillars: an expanding universe (Hubble expansion), Big Bang Nucleosynthesis (BBN: the formation of the light elements through nuclear reactions in the early universe), and the relic radiation field (the Cosmic Microwave Background: CMB) (Harrison, 2000; Peebles, 1993). The discussion here will take this framework for granted.

The three empirical pillars fit beautifully with General Relativity (GR). Making the simplifying assumptions of homogeneity and isotropy, Einstein’s equations can be applied to treat the entire universe as a dynamical entity. As such, it is compelled either to expand or contract. Running the observed expansion backwards in time, one necessarily comes to a hot, dense, early phase. This naturally explains the CMB, which marks the transition from an opaque plasma to a transparent gas (Sunyaev and Zeldovich, 1980; Weiss, 1980). The abundances of the light elements can be explained in detail with BBN provided the universe expands in the first few minutes as predicted by GR when radiation dominates the mass-energy budget of the universe (Boesgaard & Steigman, 1985).
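
For readers who want the key relation made explicit, here is a minimal sketch in standard textbook notation (my addition, not a quotation from the references above). Under homogeneity and isotropy, Einstein's equations reduce to the Friedmann equation for the scale factor a(t):

\[
H^2 \equiv \left(\frac{\dot a}{a}\right)^2
= \frac{8\pi G}{3}\,\rho \;-\; \frac{k c^2}{a^2} \;+\; \frac{\Lambda c^2}{3}
% rho is the total mass-energy density, k the spatial curvature, Lambda the
% cosmological constant. A static solution requires a finely tuned Lambda and
% is unstable, so generically the universe must either expand or contract.
\]

During radiation domination the curvature and Λ terms are negligible and ρ ∝ a⁻⁴, so

\[
a \propto t^{1/2}, \qquad T \propto a^{-1} \propto t^{-1/2}
% the expansion and temperature history during the first few minutes,
% which is what the BBN predictions rely on.
\]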

The marvelous consistency of these early universe results with the expectations of GR builds confidence that the hot big bang is the correct general picture for cosmology. It also builds overconfidence that GR is completely sufficient to describe the universe. Maintaining consistency with modern cosmological data is only possible with the addition of two auxiliary hypotheses: dark matter and dark energy. These invisible entities are an absolute requirement of the current version of the most-favored cosmological model, ΛCDM. The very name of this model is born of these dark materials: Λ is Einstein’s cosmological constant, of which ‘dark energy’ is a generalization, and CDM is cold dark matter.

Dark energy does not enter much into the subject of galaxy formation. It mainly helps to set the background cosmology in which galaxies form, and plays some role in the timing of structure formation. This discussion will not delve into such details, and I note only that it was surprising and profoundly disturbing that we had to reintroduce (e.g., Efstathiou et al., 1990; Ostriker and Steinhardt, 1995; Perlmutter et al., 1999; Riess et al., 1998; Yoshii and Peterson, 1995) Einstein’s so-called ‘greatest blunder.’

Dark matter, on the other hand, plays an intimate and essential role in galaxy formation. The term ‘dark matter’ is dangerously crude, as it can reasonably be used to mean anything that is not seen. In the cosmic context, there are at least two forms of unseen mass: normal matter that happens not to glow in a way that is easily seen — not all ordinary material need be associated with visible stars — and non-baryonic cold dark matter. It is the latter form of unseen mass that is thought to dominate the mass budget of the universe and play a critical role in galaxy formation.

Cold Dark Matter

Cold dark matter is some form of slow-moving, non-relativistic (‘cold’) particulate mass that is not composed of normal matter (baryons). Baryons are the family of particles that includes protons and neutrons. As such, they compose the bulk of the mass of normal matter, and it has become conventional to use this term to distinguish between normal, baryonic matter and the non-baryonic dark matter.

The distinction between baryonic and non-baryonic dark matter is no small thing. Non-baryonic dark matter must be a new particle that resides in a new ‘dark sector’ that is completely distinct from the usual stable of elementary particles. We do not just need some new particle, we need one (or many) that reside in some sector beyond the framework of the stubbornly successful Standard Model of particle physics. Whatever the solution to the mass discrepancy problem turns out to be, it requires new physics.

The cosmic dark matter must be non-baryonic for two basic reasons. First, the mass density of the universe measured gravitationally (Ωm ≈ 0.3, e.g., Faber and Gallagher, 1979; Davis et al., 1980, 1992) clearly exceeds the mass density in baryons as constrained by BBN (Ωb ≈ 0.05, e.g., Walker et al., 1991). There is something gravitating that is not ordinary matter: Ωm > Ωb.
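
In round numbers (a back-of-envelope illustration using the figures quoted above, nothing more precise):

\[
\frac{\Omega_b}{\Omega_m} \;\approx\; \frac{0.05}{0.3} \;\approx\; \frac{1}{6}
% so, taking both numbers at face value, roughly five sixths of the
% gravitating mass cannot be baryonic.
\]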

The second reason follows from the absence of large fluctuations in the CMB (Peebles and Yu, 1970; Silk, 1968; Sunyaev and Zeldovich, 1980). The CMB is extraordinarily uniform in temperature across the sky, varying by only ~ 1 part in 10⁵ (Smoot et al., 1992). These small temperature variations correspond to variations in density. Gravity is an attractive force; it will make the rich grow richer. Small density excesses will tend to attract more mass, making them larger, attracting more mass, and leading to the formation of large scale structures, including galaxies. But gravity is also a weak force: this process takes a long time. In the long but finite age of the universe, gravity plus known baryonic matter does not suffice to go from the initially smooth, highly uniform state of the early universe to the highly clumpy, structured state of the local universe (Peebles, 1993). The solution is to boost the process with an additional component of mass — the cold dark matter — that gravitates without interacting with the photons, thus getting a head start on the growth of structure while not aggravating the amplitude of temperature fluctuations in the CMB.
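
A rough back-of-envelope version of this argument (my sketch, using the standard linear growth rate for a matter-dominated universe and ignoring order-unity factors relating ΔT/T to the density contrast):

\[
\delta \equiv \frac{\delta\rho}{\rho} \propto a \propto \frac{1}{1+z}
\quad\Longrightarrow\quad
\delta_{\rm today} \sim \delta_{\rm rec}\,(1+z_{\rm rec}) \sim 10^{-5} \times 1100 \sim 10^{-2}
% Baryonic fluctuations cannot begin to grow until the baryons decouple from
% the photons at recombination (z_rec ~ 1100), so on their own they reach only
% delta ~ 10^{-2} today -- far short of the delta >~ 1 needed to form galaxies.
% Matter that never interacts with the photons can start growing earlier,
% closing the gap without enlarging the CMB temperature fluctuations.
\]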

Taken separately, one might argue away the need for dark matter. Taken together, these two distinct arguments convinced nearly everyone, including myself, of the absolute need for non-baryonic dark matter. Consequently, CDM became established as the leading paradigm during the 1980s (Peebles, 1984; Steigman and Turner, 1985). The paradigm has snowballed since that time, the common attitude among cosmologists being that CDM has to exist.

From an astronomical perspective, the CDM could be any slow-moving, massive object that does not interact with photons nor participate in BBN. The range of possibilities is at once limitless yet highly constrained. Neutrons would suffice if they were stable in vacuum, but they are not. Primordial black holes are a logical possibility, but if made of normal matter, they must somehow form in the first second after the Big Bang to not impair BBN. At this juncture, microlensing experiments have excluded most plausible mass ranges that primordial black holes could occupy (Mediavilla et al., 2017). It is easy to invent hypothetical dark matter candidates, but difficult for them to remain viable.

From a particle physics perspective, the favored candidate is a Weakly Interacting Massive Particle (WIMP: Peebles, 1984; Steigman and Turner, 1985). WIMPs are expected to be the lightest stable supersymmetric partner particle that resides in the hypothetical supersymmetric sector (Martin, 1998). The WIMP has been the odds-on favorite for so long that it is often used synonymously with the more generic term ‘dark matter.’ It is the hypothesized particle that launched a thousand experiments. Experimental searches for WIMPs have matured over the past several decades, making extraordinary progress in not detecting dark matter (Aprile et al., 2018). Virtually all of the parameter space in which WIMPs had been predicted to reside (Trotta et al., 2008) is now excluded. Worse, the existence of the supersymmetric sector itself, once seemingly a sure thing, remains entirely hypothetical, and appears at this juncture to be a beautiful idea that nature declined to implement.
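
The original appeal of the WIMP was quantitative. The standard thermal relic-abundance estimate (the so-called ‘WIMP miracle’, quoted approximately from textbook treatments, not from the article itself) is

\[
\Omega_\chi h^2 \;\approx\; \frac{3 \times 10^{-27}\ {\rm cm^3\,s^{-1}}}{\langle \sigma v \rangle}
% relic density of a particle chi that freezes out of thermal equilibrium in
% the early universe; <sigma v> is its annihilation cross-section times
% relative velocity. A weak-scale cross-section, <sigma v> ~ 3e-26 cm^3/s,
% gives Omega_chi h^2 ~ 0.1, close to the observed dark matter density --
% which is why weak-scale particles were long the odds-on favorite.
\]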

In sum, we must have cold dark matter for both galaxies and cosmology, but we have as yet no clue to what it is.


* There is a trope that late in their careers, great scientists come to the opinion that everything worth discovering has been discovered, because they themselves already did everything worth doing. That is not a concern I have – I know we haven’t discovered all there is to discover. Yet I see no prospect for advancing our fundamental understanding simply because there aren’t enough of us pulling in the right direction. Most of the community is busy barking up the wrong tree, and refuses to be distracted from their focus on the invisible squirrel that isn’t there.

** Many of these people are the product of the toxic culture that Simon White warned us about. They wave the sausage of galaxy formation and feedback like a magic wand that excuses all faults while being proudly ignorant of how the sausage was made. Bitch, please. I was there when that sausage was made. I helped make the damn sausage. I know what went into it, and I recognize when it tastes wrong.

19 thoughts on “Common ground”

  1. Neutrons would suffice if they were stable in vacuum, but they are not.

    Could neutron stars formed in the big bang or during inflation work?

    1. Neutron stars or strange nuggets – objects of nuclear density – have a window of viability. The trick is making them. If they are composed of the known forms of matter, they have to be made before the universe is even one second old: otherwise, it messes up big bang nucleosynthesis. This is a tall order; I’m not aware of any satisfactory mechanism to achieve this magic trick. One could evade this limit by making them out of non-standard forms of matter, but that would already be non-baryonic dark matter.

      1. There are many inflation theories.

        Hot neutron stars can be seen.

        Are cold primordial neutron stars invisible except for gravitational lensing?

  2. “…with enough ‘epicycles’—modified gravity, exotic dark matter, quintessence, inflaton, etc.—any observational tension can and will be resolved. Moreover, if cosmology is decoupled from terrestrial physics, there will inevitably come a time when the inhabitants of Terra would stop financing this field.”

    Dark matter and modified gravity are equally epicyclic in my opinion.

    https://doi.org/10.3390/galaxies9040076

  3. The common ground with lambda-CDM is the forums and places where we can freely discuss these issues, places such as this blog. It seems, however, like you’re putting water in the wine of MOND now, saying that we must have some kind of dark matter. Is that true, or am I misinterpreting you?

    Besides, what are your thoughts on Deur’s calculations of self-interacting gravitons inducing MOND? No need to go beyond the Standard Model if it’s true!

  4. This is the best overview I have ever read on the current impasses in astrophysics, which I printed out to add to a reference library. Meanwhile I continue to work on an entry to the Cosmic Mystery Sweepstakes, which outlines a terrestrial laboratory experiment that could potentially confirm or completely exclude the model. Since this paper has little chance of passing peer review, I’ll post it at either the mirror site to viXra or my own website.

  5. First of all, Dr. McGaugh, thank you for this:

    “I paid the exorbitant open access fee (the journal changed its name – and publication policy – during the publication process), so you can read the whole thing all at once if you are eager.”

    Putting scientific papers behind paywalls is a disgrace and an abomination that only widens the moat between the scientific academy and the public whose taxes support it. Science cannot survive as a secret society. Thank you for resisting.

    “I’ve seen people twist themselves into logical knots to protect their belief in dark matter from being subject to critical examination.”

    That is certainly true but the exact same thing can be said of people who believe in the expanding universe paradigm; the situation is identical in that regard.

    “Cosmology is the science of the origin and evolution of the universe: the biggest of big pictures.”

    That defines cosmology in terms of the preferred standard model which precludes the possibility that the Cosmos did not have a singular origin and does not constitute a unified, coherent, simultaneous entity that the model describes. Defining cosmology as the science of the Cosmos on scales larger than individual galaxy clusters avoids the inherent confirmation bias, and leaves open at least the possibility of some day studying models not based on naive assumptions made in the 1920s.

    “Making the simplifying assumptions of homogeneity and isotropy, Einstein’s equations can be applied to treat the entire universe as a dynamical entity.”

    Not everything that can be done should be done. Quite apart from the fact that homogeneity and isotropy are not properties of the observed Cosmos, General Relativity is a relativistic model – it does not have a universal framework. GR cannot logically be applied to a model with a universal framework such as the FLRW metric. Mathematically, of course, you can do it, but the illogical nature of the effort is readily apparent in the result, a Big Bang model consisting of an incoherent creation myth propped up by a set of completely unobservable structural elements.

    “The modern picture of the hot big bang is underpinned by three empirical pillars: an expanding universe (Hubble expansion), Big Bang Nucleosynthesis (BBN: the formation of the light elements through nuclear reactions in the early universe), and the relic radiation field (the Cosmic Microwave Background: CMB)”

    The expanding universe is not an empirical fact; it is a model-dependent assumption – you have to assume the Cosmos is a singular, coherent, simultaneous entity and that the cause of the redshift is some form of recessional velocity. There is no direct empirical evidence supporting either assumption. That the model can, nonetheless, be made to postdict observations puts it squarely in the Ptolemaic class of cosmological models.

    Nucleosynthesis theory is not dependent on the Big Bang event (a similar galactic scale event would suffice). NST cannot be cited as evidence for the BB just because NST can be fit to the BB model.

    The CMB is an empirical observation that was neither exclusively nor accurately predicted by BB theorists prior to its discovery in 1965, therefore it is at best inferential evidence. https://en.wikipedia.org/wiki/Cosmic_microwave_background#Timeline_of_prediction,_discovery_and_interpretation

    The bottom line of all this is: the reason the BB requires dark matter, dark energy, inflation and causally interacting spacetime is because the model’s 100 year old assumptions are fundamentally wrong in the same way that Ptolemaic cosmology was wrong; those foundational assumptions misrepresent and are irreconcilable with the nature of physical reality.

    I know you don’t agree with the foregoing assessment Dr. McGaugh but I greatly appreciate the scientific integrity and courage that allows the argument to be presented here. Again, thank you.

    1. It is always wise to know some history when criticising historical figures. In Gamow’s case, he was working with the then best-known figure for H0 when he quoted a temperature based on a 3-billion-year-old universe. Remember that Hubble’s (1929) value for H0 was 500 km/s/Mpc. Even in the 1970s, when I was for a short time a professional astronomer, the principal argument was between Sandage (H0=50 km/s/Mpc) and de Vaucouleurs (H0=100 km/s/Mpc). Stacy captured this period well in his blog post from 2016 https://tritonstation.com/2016/05/06/two-numbers/

      You cannot expect an accurate prediction when the base data you rely on is so uncertain and it is mere sophistry to pretend otherwise.

      1. I said nothing about Gamow. Here is what I said:

        “The CMB is an empirical observation that was neither exclusively nor accurately predicted by BB theorists prior to its discovery in 1965, therefore it is at best inferential evidence.”

        “You cannot expect an accurate prediction when the base data you rely on is so uncertain and it is mere sophistry to pretend otherwise.”

        By failing to note that other theorists made more accurate predictions using the same data by not employing the BB model, your comment is the essence of sophistry.

        1. If you accidentally predict the right value of a property despite starting from the wrong base data, that is not a prediction in the scientific sense. Much more meaningful is when your theory is right but the prediction is wrong because the base data (H0) is wrong. That way someone else can come along with better base data and show that it does fit reality. Is that simple enough for you to understand?

          In response to Havinga’s comment below: Dicke et al.’s paper, which was published together with Penzias and Wilson’s, makes no reference to McKellar’s work. We cannot assume that Gamow or Dicke definitely knew about the detail of McKellar’s work; assuredly they would have referenced it if they had, even if only to argue against it. All we have is a later memory from Fred Hoyle (hardly a disinterested observer) who says he mentioned it to both Gamow and Dicke at different times. (Spot the parallel to MOND here.)

          Quite clearly every BB theorist at the time thought that the CMB temperature was higher than it actually was because they were assuming that the universe was younger than it was (higher H0). Hoyle should have predicted a non-zero background just because of red-shifting of starlight; even his steady-state model was an ever-expanding universe, it was just that continuous creation would have kept the density constant.

          Hubble’s original value for H0 was 500 km/s/Mpc; by 1956 Sandage et al., with better observations, put it at 180 km/s/Mpc; two years later Sandage estimated it at 75, and in the 1970s at 55 km/s/Mpc. Then we had the disagreement between Sandage and de Vaucouleurs that amounted to almost a factor of 2 and lasted until the first precision cosmology measurements.

          https://en.wikipedia.org/wiki/Hubble's_law

          1. Laurence,

            You’re your own worst enemy. Are you really trying to sell the proposition that if another model makes a correct prediction where yours (the BB) doesn’t it was an accident? Because the failure of your preferred model (BB) was due to incorrect data and once you had the right answer (2.7K) it was easy to fit the model and therefore the independent discovery of the correct temperature vindicates the BB? Worse than sophistry, that argument is flamboyantly illogical.

            Worse for you, your last paragraph demonstrates that, having been given the correct CMB temperature, the BB model still couldn’t predict the model-dependent value of H0 and therefore the correct age of the model’s “universe”. According to you, though, the reason the BB couldn’t predict the temperature of the CMB in the first place was that it didn’t know the correct value of H0 and therefore the correct age of the “universe”. So having been handed the correct value for the CMB temp, could the BB then correctly predict the value of H0 and therefore the correct age of the model “universe”?

            Of course not, as your last paragraph makes clear, illustrating once again the fundamental fact of the matter: the BB only postdicts its way into agreement with observations. The expanding “universe” paradigm, of which the BB version is the Homecoming Queen, popularity-wise, has a long record of predictive failures. As a scientific model it has been a rolling disaster. All the model fitting does provide a seemingly endless make-work project for theorists though.

            1. “So having been handed the correct value for the CMB temp, could the BB then correctly predict the value of H0 and therefore the correct age of the model “universe”?”
              Hindsight is 20/20…
              I guess that they had more confidence in the value of H0 than in the relatively newly measured value for the CMB temperature. After all, expansion was known for decades before the CMB was discovered.
              But this is not a reason to dismiss the model because the proponents did not predict the value for the parameter that you wish they did.

              1. “But this is not a reason to dismiss the model because the proponents did not predict the value for the parameter that you wish they did.”

                That’s quite a warped take on the fact that BB proponents proclaimed at the time of the CMB’s discovery and continue to claim to this day that the CMB was a successful prediction of their model and therefore a significant empirical vindication of the model. It is a matter of historical record that BB theorists did not successfully predict the CMB, while other models made more accurate predictions. According to Laurence Cox, those more accurate predictions were a matter of luck or something whereas the BB theorists would have gotten the prediction right if they had only known the correct value in advance – or something.

                “… expansion was known for decades…”

                Belief and knowledge are not the same thing. The belief that the “universe” is expanding rests on the unverified assumption that the cause of the cosmological redshift is some form of recessional velocity. Given the absurdity of the resulting BB model, that assumption is almost certainly wrong.

  6. @Budrap

    I’ll give an answer because the topic is sort of related to the blog post, but I do so with much hesitation because I feel that we’re imposing on the patience of Prof. McGaugh.
    “It is a matter of historical record that BB theorists did not successfully predict the CMB” – I believe what you’re trying to say is that they did not predict the correct temperature of the CMB. This is not the same as saying they did not predict the CMB.
    The correct temperature depends also on other parameters, which were not known with sufficient precision at that time.
    My point still stands – you cannot dismiss the model because the proponents did not predict the value of a parameter the way you wish they did. They did predict the existence of that parameter.
    Anyway – this ping-pong discussion is not really constructive. We know that you don’t agree with the BB model; you have already laid out your arguments multiple times, and I and others have laid out ours.
    So I will refrain from engaging further in this discussion because I don’t want the comments to be closed.

    1. Other models also predicted the CMB, and did so more accurately than the BB; therefore it was then, and remains to this day, disingenuous to claim that the detection of the CMB somehow uniquely vindicated the BB model. Why the BB model’s wide-ranging temperature predictions failed is irrelevant to that point. I’ll happily leave it at that.
