And then there were six

With four parameters I can fit an elephant, and with five I can make him wiggle his trunk.

– John von Neumann

The simple and elegant cosmology encapsulated by the search for two numbers has been replaced by ΛCDM. This is neither simple nor elegant. In addition to the Hubble constant and density parameter, we now also require distinct density parameters for baryonic mass, non-baryonic cold dark matter, and dark energy. There is an implicit (seventh) parameter for the density of neutrinos.

We now also count the parameters describing the power spectrum (σ8, n) as cosmological parameters. These did not use to be considered on the same level as the Big Two. They aren’t: they concern structure formation within the world model, not the nature of the world model itself. But I guess they seem more important once the Big Numbers are settled.

Here is a quick list of what we believed, then and now:

 

Parameter        SCDM      ΛCDM
H0 (km/s/Mpc)    50        70
Ωm               1.0       0.3
Ωbh²             0.0125    0.02225
ΩΛ               0.0       0.7
σ8               0.5       0.8
n                1.0       0.96

 

There are a number of “lesser” parameters, like the optical depth to reionization. Plus, the index n can run, one can invoke scale-dependent non-linear biasing (a rolling fudge factor for σ8), and people talk seriously about the time evolution of the dark energy equation of state (antigravity, in effect).

From the late ’80s to the early ’00s, all of these parameters (excepting only n) changed by much more than their formal uncertainty or theoretical expectation. Even big bang nucleosynthesis – by far the most robustly constrained – suffered a doubling in the mass density of baryons. This should be embarrassing, but most cosmologists assert it as a great success while quietly sweeping the lithium problem under the carpet.
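
To put a number on that doubling, here is a minimal sketch (plain Python; the inputs are just the values from the table above, and the conversion Ωb = Ωbh²/h² with h = H0/100 km/s/Mpc is the standard one):

```python
# A minimal sketch: how much the baryon density changed between SCDM and LCDM.
# Inputs are the values from the table above; the conversion is
# Omega_b = (Omega_b h^2) / h^2, with h = H0 / (100 km/s/Mpc).

params = {
    "SCDM": {"H0": 50.0, "Obh2": 0.0125},
    "LCDM": {"H0": 70.0, "Obh2": 0.02225},
}

for name, p in params.items():
    h = p["H0"] / 100.0
    Omega_b = p["Obh2"] / h**2
    print(f"{name}: h = {h:.2f}, Omega_b h^2 = {p['Obh2']:.5f}, Omega_b = {Omega_b:.3f}")

ratio = params["LCDM"]["Obh2"] / params["SCDM"]["Obh2"]
print(f"The physical baryon density Omega_b h^2 grew by a factor of {ratio:.2f}")
```

Run as written, it gives Ωb ≈ 0.050 for SCDM and ≈ 0.045 for ΛCDM: the baryon fraction barely moves, but the physical density Ωbh² – the quantity nucleosynthesis actually constrains – grew by a factor of ~1.8.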

The only thing that hasn’t really changed is our belief in Cold Dark Matter. That’s not because it is more robust. It is because it is much harder to detect, let alone measure.

7 thoughts on “And then there were six”

  1. Stacy, good points. One more comment: in the ’80s and ’90s, when total omega was less than 1, people also concocted models of open inflation. Now if you ask the same people, they will tell you a “robust” prediction of inflation is that omega = 1, which we know is true.


  2. Yes, this is a bit of convenient recollection.
    The robust prediction of Inflation originally was Omega=1, all in mass. That’s why most Inflationary theorists told us we were stupid for finding Omega<1. A few did try to concoct models of open inflation, but these were pretty contrived.
    Now the "real" prediction of Inflation is a flat geometry, which is indeed consistent with current observations. However, it is only the sum of matter and dark energy that gets us there. This is not natural.
    One of the original (1980s) selling points of Inflation, which made me and many others take it seriously, was that it gave a natural explanation for the coincidence problem (then mis-called the flatness problem): why Omega was anywhere close to 1 when it would spend eternity near zero in an open universe. LCDM brings back this problem with a vengeance: not only is the future Omega_m = 0, but the transition from matter domination to dark energy domination happened yesterday in cosmic terms. We just happen to live right after the transition – which was the coincidence original Inflation was sold on fixing.
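    A back-of-the-envelope calculation makes the point. This is just the Friedmann bookkeeping for a flat model with the Ωm = 0.3 and ΩΛ = 0.7 from the table in the post (a sketch, not anyone’s production code):

```python
# A rough illustration of the coincidence problem in flat LCDM
# (Omega_m = 0.3, Omega_Lambda = 0.7, as in the table in the post).
# Matter density scales as (1+z)^3 while the dark-energy density is constant,
# so Omega_m(z) slides from ~1 in the past toward 0 in the future.

Om, OL = 0.3, 0.7

# Matter / dark-energy equality: Om * (1+z)^3 = OL
z_eq = (OL / Om) ** (1.0 / 3.0) - 1.0
print(f"matter-Lambda equality at z ~ {z_eq:.2f}")  # ~0.33, i.e. 'yesterday'

def omega_m(z):
    """Matter density parameter at redshift z in this flat two-component model."""
    m = Om * (1.0 + z) ** 3
    return m / (m + OL)

# z = -0.9 corresponds to a scale factor 10x today's, i.e. the far future.
for z in (1000, 10, 1, 0.33, 0, -0.9):
    print(f"z = {z:>6}: Omega_m = {omega_m(z):.4f}")
```

    Matter and dark energy trade places at z ≈ 0.33; Omega_m was effectively 1 for almost all of cosmic history and heads toward 0 from here on, so we sit right at the crossover.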


  3. This quote does not deserve its success. Taken literally (but who would?), it is ridiculously pretentious; taken metaphorically, as a warning that too many parameters can help you fit anything, it is naive. What does “many” mean? Why 5 in the quote? The number of parameters needed to explain a given set of phenomena depends on the complexity of the underlying system that produced them. It’s perfectly OK to have 2000 parameters or more in any branch of science, if the underlying system of interest requires it. Of course you should prefer the model that has fewer parameters – but only when the models explain the same things equally well. The von Neumann quote is misleading; there is an entire field devoted to the question of model selection (https://en.wikipedia.org/wiki/Model_selection). The only two things the quote has going for it are: 1. it’s from von Neumann (but von Neumann was not above being idiotic sometimes, because nobody is), and 2. it’s funny.

    Is there a paper where you guys actually apply model selection? I see this great Table 2 in https://arxiv.org/pdf/1112.3960.pdf
    Have you done the same table for ΛCDM and started quantifying the actual fit when the number of parameters is taken into account (even applying things like the AIC or BIC criteria)?


    1. Yes. The quote is funny. For better and worse, such quips often play an outsized role in informing our attitudes, so I appreciate your concern. In this case, it aptly expresses my frustration with the development of cosmology during my career – we have never been ashamed to add new free parameters, and we are rarely in the regime of over-constraint. When we get there and there’s a tension, we blame some inconvenient aspect of the data and proceed to ignore it. A current example is the tension between the Hubble constant as measured directly from galaxy distances and as a fit parameter in the acoustic power spectrum of the cosmic microwave background. People who are informed by one mostly just ignore the other.

      As for model selection between LCDM and MOND, that goes well beyond what I have gotten to in this forum so far. The short answer is that you can’t really: the theories make different kinds of predictions in different regimes. Usually one is mute when the other is eloquent – see http://arxiv.org/abs/1404.7525. One can make a comparison between rotation curve fits, once you pick a halo model, which is itself controversial. Broadly speaking, you can find halo models that perform as well as MOND in terms of χ² on individual rotation curves. But they require more free parameters (a minimum of 3 per galaxy vs. 1), so according to the BIC, MOND is strongly preferred (see the sketch at the end of this reply). In contrast, for cosmology, MOND makes few testable predictions, while LCDM “wins ugly” with its many parameters.

      The closest thing I can offer to a straight-up comparison is http://astroweb.case.edu/ssm/mond/LCDMmondtesttable.html . The issue isn’t so much which model the BIC chooses as it is deciding what data you’re trying to explain in the first place. If you put all the weight on fits to “large scale” data, you get one answer. If you put weight on quantitative a priori predictions, you get another.
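
      To make the BIC bookkeeping concrete, here is a minimal sketch. The χ², sample size, and points per curve below are placeholder numbers, not real fits – only the parameter counts (3 per galaxy for a halo fit versus 1 for MOND) come from the comparison above:

```python
import math

# Sketch of how the BIC penalizes free parameters in rotation-curve fits.
# All numbers below are illustrative placeholders, not actual fit results.

def bic(chi2, k, n):
    """Bayesian Information Criterion, chi^2 + k*ln(n); lower is better."""
    return chi2 + k * math.log(n)

n_galaxies = 100   # hypothetical sample size
n_points = 30      # hypothetical data points per rotation curve
n_data = n_galaxies * n_points

# Suppose both kinds of fit reach a comparable total chi^2, as described above.
chi2_total = 1.0 * n_data

bic_halo = bic(chi2_total, k=3 * n_galaxies, n=n_data)  # 3 free parameters per galaxy
bic_mond = bic(chi2_total, k=1 * n_galaxies, n=n_data)  # 1 free parameter per galaxy

print(f"BIC (halo fits): {bic_halo:.0f}")
print(f"BIC (MOND):      {bic_mond:.0f}")
print(f"Delta BIC:       {bic_halo - bic_mond:.0f}   (>10 already counts as 'very strong')")
```

      With equal χ², each extra parameter costs ln(N) in BIC, so the two extra parameters per galaxy are what drive the “strongly preferred” verdict.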


  4. Thanks for the comparison table – it’s great that you did it (and for pointing out the Hubble constant discrepancy; that’s the first time I’ve heard about it).
    I’m also surprised that MOND makes few predictions for cosmology; I would have thought that large-scale simulations with MOND dynamics would be all that’s required. Your point about putting different weights on the data is well taken. It’s hard to see a way around that.


    1. So you would think. Unfortunately, MOND structure formation is highly non-linear. So it is not an easy problem, even to simulate. This is a good example of where one theory is simple and the other complex.
      There is a sketch of how structure formation might proceed in MOND at http://astroweb.case.edu/ssm/mond/LSSinMOND.html . It needs work… the same tens of thousands of person-years of effort that we’ve put into LCDM.

