There has been some hand-wringing of late about the tension between two values of the expansion rate of the universe – the famous Hubble constant, H_{0}: the value measured directly from observed redshifts and distances, and the value obtained from multi-parameter fits to the cosmic microwave background. Direct determinations consistently give values in the low to mid-70s, e.g. Riess et al. (2016): H_{0} = 73.24 ± 1.74 km/s/Mpc, while the latest CMB fit from Planck gives H_{0} = 67.8 ± 0.9 km/s/Mpc. These are formally discrepant at a modest level: enough to be annoying, but not enough to be conclusive.

The widespread presumption is that there is a subtle systematic error somewhere. Who is to blame depends on what you work on. People who work on the CMB and appreciate its phenomenal sensitivity to cosmic geometry generally presume the problem is with galaxy measurements. To people who work on local galaxies, the CMB value is a non-starter.

This subject has a long and sordid history, about which entire books have been written. Many systematic errors have plagued the cosmic distance ladder. Hubble’s earliest (c. 1930) estimate of H_{0} = 500 km/s/Mpc was an order of magnitude off, and implied a universe younger than the Earth as it was known to geologists at the time. Recalibration of the distance scale brought the number steadily down. There followed a long (1960s – 1990s) stand-off between H_{0} = 50, as advocated by Sandage, and H_{0} = 100, as advocated by de Vaucouleurs. Obviously, there were some pernicious systematic errors lurking about. Given this history, it is easy to imagine that even today there persists some subtle systematic error in local galaxy distance measurements.

In the mid-90s, I realized that the Tully-Fisher method was effectively a first approximation – there should be more information in the full shape of the rotation curve. Playing around with this, I arrived at H_{0} = 72 ± 2. My work relied heavily on that of Begeman, Broeils, & Sanders, and in turn on the distances they had assumed; those carried a much larger systematic uncertainty. To firm up my estimate would have required improved calibration of those distances, quite beyond the scope of what I was willing to take on at that time, so I never published it.

In 2001, the HST Key Project on the Distance Scale – the primary motivation to build the Hubble Space Telescope – reported H_{0} = 72 ± 8. That uncertainty was still plagued by the same systematics that had befuddled me. Since that time, the errors have been beaten down. There have been many other estimates of increasing precision, mostly in the range 72 – 75. The serious-minded cosmologist always worries about some subtle remaining systematic error, but the issue seemed finally to be settled.

One weird consequence of this was that all my extensive notes on the distance scale no longer seemed essential to teaching graduate cosmology: all the arcane details that had occupied the field for decades suddenly seemed like boring minutiae. That was OK – about that time, there finally started to be interesting data on the cosmic microwave background. Explaining that neatly displaced the class time spent on the distance scale. No longer were the physics students stopping to ask, appalled, *“what’s a distance modulus?”*; now it was the astronomy students who were appalled to be confronted by the spherical harmonics they’d seen but not learned in quantum mechanics.

The first results from WMAP were entirely consistent with the results of the HST key project. This reinforced the feeling that the problem was solved. In the new century, we finally knew the value of the Hubble constant!

Over the past decade, the best-fit value of H_{0} from the CMB has done a slow walk away from the direct measurements in the local universe. It has gotten far enough to result in the present tension. The problem is that the CMB doesn’t measure the Hubble constant directly; it constrains a multi-dimensional parameter space that approximately projects to a constant value of the product Ω_{m}H_{0}^{3}, as illustrated below.
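To get a feel for what that degeneracy implies, here is a minimal sketch (the Ω_{m} = 0.308 starting point is an assumed, Planck-like value for illustration, not a fit) of how the matter density must shift if H_{0} is forced to the local value while the product Ω_{m}H_{0}^{3} is held fixed:

```python
# Sketch of the approximate CMB degeneracy direction: Omega_m * H0^3 ~ const.
# H0 is in km/s/Mpc; Omega_m = 0.308 is an assumed Planck-like value.

def omega_m_along_degeneracy(H0_new, H0_ref=67.8, omega_m_ref=0.308):
    """Matter density required to hold Omega_m * H0^3 fixed while H0 changes."""
    return omega_m_ref * (H0_ref / H0_new) ** 3

# Forcing the local H0 = 73.24 while staying on the degeneracy line
# pushes Omega_m down by roughly 20%:
print(omega_m_along_degeneracy(73.24))  # roughly 0.24
```

Holding the product fixed, raising H_{0} from 67.8 to 73.24 drags Ω_{m} from about 0.31 down to about 0.24 – a noticeably lower matter density than the CMB fit prefers.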

Much of the progress in cosmology has been the steady reduction in the allowed range in the above parameter space. The CMB data now allow only a narrow trench. I worry that it may wink out entirely. Were that to happen, it would falsify our current model of cosmology.

For now the only thing that seems to be happening is that the χ^{2} for the CMB data is ever so slightly better for lower values of the Hubble constant. While the lines of the trench represent no-go zones – the data require cosmological parameters to fall between the lines – there isn’t much difference along the trench. It is like walking along the floor of the Grand Canyon: exiting by climbing up the cliffs is disfavored; meandering downstream is energetically favored.

That’s what it looks like to me. The CMB χ^{2} has meandered a bit down the trench. It is not obvious to me that the current Planck best-fit is all that preferable to that from WMAP3. I have asked a few experts what would be so terrible about imposing the local distance scale as a strong prior. I have yet to hear a good answer, so chime in if you know one. If we put the clamps on H_{0}, it must come out somewhere else. Where? How terrible would it be?

This is not an idle question. If one can recover the local Hubble constant with only a small tweak to, say, the baryon density, then fine – we’ve already got a huge problem there with lithium that we’re largely ignoring – why argue about the Hubble constant if this tension can be resolved where there’s already a bigger problem? If instead, it requires something more radical, like a clear difference from the standard number of neutrinos, then OK, that’s interesting and potentially a big deal.

So what is it? What does it take to reconcile Planck with the local H_{0}? Since this is an issue of geometry, I suspect it might be something like the best-fit geometry of the universe becoming ever so slightly not-flat, at the 2σ level instead of 1σ.

While I have not come across a satisfactory explanation of what it would take to reconcile Planck with the local distance scale, I have seen many joint analyses of Planck plus lots of other data. They all seem consistent, so long as you ignore the high-L (L > 600) Planck data. It is only the high-L data that are driving the discrepancy (low L appear to be OK).

So I will say the obvious, for those who are too timid: it looks like the systematic error is most likely with the high-L data of Planck itself.

Reblogged this on In the Dark and commented:

A few months ago I blogged about the apparent “tension” between different measurements of the Hubble constant. Here is an alternative view of the situation, with some recent updates. The plot has thickened a bit, but it’s still unclear to me whether there’s really a significant discrepancy.

Agreed. If I put on my cosmology hat, I’m not genuinely worried about the exact value of H0 nor high-L tension from Planck. These appear to be details. If I put on my galaxies hat, we’ve been down this road many times. It is easy to believe there is a systematic error in the distance ladder, but hard to see where it is now, if it is there at all.

Good point in your blog about the value of G. Hopefully this discrepancy won’t last as long, nor be decided by confirmation bias.

I’m no scientist, but I know that galaxies contain multiple light sources. Where are we accounting for light interference over distance? Do longer waves win out over distance as the shorter waves cancel out more quickly, causing the appearance of a redshift? Such an effect could help account for other mysteries as well.

If you mean self-interference due to the wave-like nature of light, this is not relevant over cosmic distances. Once the light escapes its source, it is free to traverse the emptiness of the cosmic void with little hindrance aside from the occasional grain of cosmic dust.

How much is the age difference between the local measurements and the cosmic measurements?

We’re basically talking about the difference between 13 and 14 billion years. I think it is fair to say all the data are consistent with those numbers but can’t easily distinguish between them.
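For concreteness, here is a back-of-the-envelope sketch of how the age follows from H_{0}, assuming a flat ΛCDM universe with an illustrative Ω_{m} = 0.308 held fixed for both cases (the real fits adjust Ω_{m} too, so this only shows the rough scale of the difference):

```python
import math

def age_gyr(H0, omega_m=0.308, n=100_000):
    """Age of a flat LCDM universe in Gyr, by midpoint integration of
    t0 = (1/H0) * Integral[0,1] da / sqrt(omega_m/a + omega_lambda*a^2).
    H0 in km/s/Mpc; omega_lambda = 1 - omega_m. Illustrative values only."""
    omega_l = 1.0 - omega_m
    da = 1.0 / n
    total = 0.0
    for i in range(n):
        a = (i + 0.5) * da  # midpoint of each integration step
        total += da / math.sqrt(omega_m / a + omega_l * a * a)
    hubble_time_gyr = 977.79 / H0  # 1/H0 converted from (km/s/Mpc)^-1 to Gyr
    return hubble_time_gyr * total

print(age_gyr(67.8))   # ~13.8 Gyr (Planck-like H0)
print(age_gyr(73.24))  # ~12.8 Gyr (local H0, same omega_m)
```

So, roughly a billion years of difference at fixed Ω_{m} – consistent with the "13 versus 14" ballpark once the other parameters are allowed to move around.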

Hi, I’m not attempting to hijack anything here.

There is now a very simple way to calculate Hubble’s Constant, by inputting to an equation the numerical value of Pi, the speed of light (C) from Maxwell’s equations, and the value of a parsec. NO space probe measurements (with their inevitable small measuring / interpretation errors) are now required. Hubble’s Constant is ‘fixed’ at 70.98047 PRECISELY. This maths method removes the errors / tolerances that are always a part of attempting to measure something as ‘elusive’ as Hubble’s Constant. This has very deep implications for theoretical cosmology.

The equation to perform this is: 2 × a megaparsec × light speed (C). This total is then divided by Pi to the power of 21. This gives 70.98047 kilometres per sec per megaparsec.

The equation to perform this can also be found in ‘The Principle of Astrogeometry’ on Amazon Kindle Books. This also explains how the Hubble 70.98047 ‘fixing’ equation was found. David.

Stacy, what does “high-L” mean? Could you explain it in a sentence or two?

High-L refers to the large multipoles in the acoustic power spectrum of the CMB. In human-comprehensible terms, those correspond to temperature fluctuations at small angular scales. As the resolution of CMB experiments improved, they resolved smaller and smaller temperature fluctuations on the sky. That’s what seems to be driving the changes in the best-fit parameters.
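As a rough rule of thumb, a multipole L corresponds to an angular scale of about 180°/L:

```python
# Rule of thumb: multipole L probes angular scales of roughly 180 deg / L.
def angular_scale_deg(L):
    return 180.0 / L

print(angular_scale_deg(600))  # 0.3 degrees: where the "high-L" regime begins
```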

Thank you – and all the best for your H_0 value.
