At the dawn of the 21st century, we were pretty sure we had solved cosmology. The Lambda Cold Dark Matter (LCDM) model made strong predictions for the power spectrum of the Cosmic Microwave Background (CMB). One was that the flat Robertson-Walker geometry that we were assuming for LCDM predicted the location of the first peak should be at ℓ = 220. As I discuss in the history of the rehabilitation of Lambda, this was a genuinely novel prediction that was clearly confirmed first by BOOMERanG and subsequently by many other experiments, especially WMAP. As such, it was widely (and rightly) celebrated among cosmologists. The WMAP team has been awarded major prizes, including the Gruber cosmology prize and the Breakthrough prize.

As I discussed in the previous post, the location of the first peak was not relevant to the problem I had become interested in: distinguishing whether dark matter existed or not. Instead, it was the amplitude of the second peak of the acoustic power spectrum relative to the first that promised a clear distinction between LCDM and the no-CDM ansatz inspired by MOND. This was also first tested by BOOMERanG:

[Figure: The CMB power spectrum observed by BOOMERanG in 2000. The first peak is located exactly where LCDM predicted it to be. The second peak was not detected, but was clearly smaller than expected in LCDM. It was consistent with the prediction of no-CDM.]

In a nutshell, LCDM predicted a big second peak while no-CDM predicted a small second peak. Quantitatively, the ratio of the amplitude of the first peak to that of the second, A1:2, was predicted to be in the range 1.54 – 1.83 for LCDM, and 2.22 – 2.57 for no-CDM. Note that A1:2 is smaller for LCDM because its second peak is relatively big compared to the first.

BOOMERanG confirmed the major predictions of both competing theories. The location of the first peak was exactly where it was expected to be for a flat Robertson-Walker geometry. The amplitude of the second peak was that expected in no-CDM. One can have the best of both worlds by building a model with high Lambda and no CDM, but I don’t take that too seriously: Lambda is just a placeholder for our ignorance – in either theory.

I had made this prediction in the hopes that cosmologists would experience the same crisis of faith that I had when MOND appeared in my data. Now it was the data that they valued that was misbehaving – in precisely the way I had predicted with a model that was motivated by MOND (albeit not MOND itself). Surely they would see reason?

There is a story that Diogenes once wandered the streets of Athens with a lamp in broad daylight in search of an honest man. I can relate. Exactly one member of the CMB community wrote to me to say “Gee, I was wrong to dismiss you.” [I paraphrase only a little.] When I had the opportunity to point out to them that I had made this prediction, the most common reaction was “no you didn’t.” Exactly one of the people with whom I had this conversation actually bothered to look up the published paper, and that person also wrote to say “Gee, I guess you did.” Everyone else simply ignored it.

The sociology gets worse from here. There developed a counter-narrative that the BOOMERanG data were wrong, therefore my prediction fitting it was wrong. No one asked me about it; I learned of it in a chance conversation a couple of years later in which it was asserted as common knowledge that “the data changed on you.” Let’s examine this statement.

The BOOMERanG data were early, so you expect data to improve. At the time, I noted that the second peak “is only marginally suggested by the data so far”, so I said that “as data accumulate, the second peak should become clear.” It did.

The predicted range quoted above is rather generous. It encompassed the full variation allowed by Big Bang Nucleosynthesis (BBN) at the time (1998/1999). I intentionally considered the broadest range of parameters that were plausible to be fair to both theories. However, developments in BBN were by then disfavoring low-end baryon densities, so the real expectation for the predicted range was narrower. Excluding implausibly low baryon densities, the predicted ranges were 1.6 – 1.83 for LCDM and 2.36 – 2.4 for no-CDM. Note that the prediction of no-CDM is considerably more precise than that of LCDM. This happens because all the plausible models run together in the absence of the forcing term provided by CDM. For hypothesis testing, this is great: the ratio has to be this one value, and only this value.

A few years later, WMAP provided a much more accurate measurement of the peak locations and amplitudes. WMAP measured A1:2 = 2.34 ± 0.09. This is bang on the no-CDM prediction of 2.4.
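To make the comparison above concrete, here is a minimal Python sketch of the arithmetic, using the numbers quoted in this post (the `tension` helper is just an illustrative name). It measures how many sigma the WMAP value of A1:2 lies from the nearest edge of each narrowed (plausible-BBN) predicted range:

```python
# WMAP (2003) measurement of the first-to-second peak amplitude ratio.
measured, sigma = 2.34, 0.09  # A1:2 = 2.34 +/- 0.09

# A priori (1999) predicted ranges, excluding implausibly low baryon densities.
predictions = {
    "LCDM":   (1.60, 1.83),
    "no-CDM": (2.36, 2.40),
}

def tension(value, err, lo, hi):
    """Sigma distance from value to the nearest edge of [lo, hi];
    zero if the value falls inside the range."""
    if lo <= value <= hi:
        return 0.0
    edge = lo if value < lo else hi
    return abs(value - edge) / err

for name, (lo, hi) in predictions.items():
    print(f"{name}: {tension(measured, sigma, lo, hi):.1f} sigma")
# → LCDM: 5.7 sigma; no-CDM: 0.2 sigma
```

The measured ratio sits within a fraction of a sigma of the no-CDM range, while the LCDM range is excluded at several sigma – which is the sense in which the measurement is “bang on” the no-CDM prediction.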

Peak locations measured by WMAP in 2003 (points) compared to the a priori (1999) predictions of LCDM (red tone lines) and no-CDM (blue tone lines).

The prediction for the amplitude ratio A1:2 that I made over twenty years ago remains correct in the most recent CMB data. The same model did not successfully predict the third peak, but I didn’t necessarily expect it to: the no-CDM ansatz (which is just General Relativity without cold dark matter) had to fail at some point. But that gets ahead of the story: no-CDM made a very precise prediction for the second peak. LCDM did not.

LCDM only survives because people were willing to disregard existing bounds – in this case, on the baryon density. It was easier to abandon the most accurately measured and the only over-constrained pillar of Big Bang cosmology than acknowledge a successful prediction that respected all those things. For a few years, the attitude was “BBN was close, but not quite right.” In time, what appears to be confirmation bias kicked in, and the measured abundances of the light elements migrated towards the “right” value – as specified by CMB fits.

LCDM does give an excellent fit to the power spectrum of the CMB. However, only the location of the first peak was predicted correctly in advance. Everything subsequent to that (at higher ℓ) is the result of a multi-parameter fit with sufficient flexibility to accommodate any physically plausible power spectrum. However, there is no guarantee that the parameters of the fit will agree with independent data. For a long while they did, but now we see the emergence of tensions in not only the baryon density, but also the amplitude of the power spectrum, and most famously, the value of the Hubble constant. Perhaps this is the level of accuracy that is necessary to begin to perceive genuine anomalies, beyond the need to invoke invisible entities in the first place.

I could say a lot more, and perhaps will in future. For now, I’d just like to emphasize that I made a very precise, completely novel prediction for the amplitude of the second peak. That prediction came true. No one else did that. Heck of a coincidence, if there’s nothing to it.

8 thoughts on “Second peak bang on”

  1. Just for fun, my very imperfect qualitative summary of the current situation is:

    According to the Standard Model of Cosmology, approximately 84% of the mass in the visible universe consists of invisible cold dark matter (CDM), with the remaining 16% consisting of normal baryonic matter. Since missing transverse momentum has not been observed at the LHC, this indicates that any very weakly interacting CDM, if it exists, has to have been created at extremely high collision energies very early in the timeline of the big bang (certainly at higher collision energies than can currently be produced at the LHC).

    Somehow this CDM (created by extremely high energy collisions) is then mostly or entirely imbued with an intrinsically, and in my opinion implausibly, low residual temperature/momentum (hence the word cold), so it can conveniently and stably orbit and be otherwise entrained by galaxies and clusters of galaxies (or vice versa) in such a way that the first two major peaks in the Cosmic Microwave Background (CMB) power spectrum remain largely unaffected, with a large angular-scale-dependent influence somehow first appearing only in the tail of the second peak and in the subsequent peaks.

    In terms of the CMB, CDM only appears to influence the larger scale structures in the universe, fixing the failure of General Relativity in this respect by just the right amount, after numerous years of intensive tweaking. Alternatively, MOND theory and related observations indicate that General Relativity, and possibly the strong equivalence principle, break down on the scale of galaxies, and beyond. So far MOND theories seem to get the answer at “and beyond” scales wrong, starting with the observed (presumably stable) dynamics of galaxies within clusters.

    A relativistic MOND theory has recently been put forward that can be used to account for all the observed peaks in the CMB power spectrum, not just the first and second. A theory that modifies General Relativity on the scale of galaxies and beyond also produces changes in the predicted CMB power spectrum on a much larger implied length scale. The question is, can this new relativistic MOND theory also do a better job of the galaxy-cluster-scale dynamics than earlier MOND theories? If it needs further tweaks, then they will have to be made without messing up the CMB power spectrum predictions.

    I feel we are headed for a major scientific conceptual revolution in Cosmology soon. The last such conceptual revolution was the Earth Sciences revolution, which culminated in the 1950s to the early 1960s. Alfred Wegener basically initiated that revolution in 1912 with his theory of continental drift. Mordehai Milgrom first came up with MOND in 1982. Add 40 to 50 years (basically two generations), as for the Earth Sciences revolution, and you get to 2022–2032.


    1. Yes, it takes a long time for these things to sink in. Many earth scientists have noted to me the continental drift analogy.
      Both MOND and CDM get clusters wrong, just in different ways. The widespread perception that CDM is somehow better in clusters is a matter of choice. The consistency of the cluster baryon fractions with the cosmic baryon fraction is hailed as a great success, but the fact that this only applies to the most massive clusters – and fails for all other systems in the universe, even low mass clusters – is ignored. That the mass-temperature relation has the slope predicted by MOND is ignored; scientists seem mostly to be unaware that this is the case, even as they make up stories about preheating and entropy floors to explain the discrepancy conventionally. I could go on; the point is that clusters are not the clear win for CDM that they’re often portrayed to be.


  2. 100 years ago, Einstein tried to use the equations of general relativity to describe the internal structure of elementary particles. Essentially, he succeeded, but there was not enough experimental evidence to support his theory, so it was forgotten. Subsequently, QCD was invented for this purpose, but is, I believe, inferior to Einstein’s original approach. Re-instating Einstein’s theory, and relating the measured masses of electron, proton and neutron to properties of the gravitational field in which we measure them, gives a prediction for the value of the fudge factor that needs to be introduced to the modern theory of [m(e)/(m(n)-m(p))]^2 = 0.1561. Bang on, I reckon.


    1. I don’t think Einstein ultimately succeeded in the way you suggest, since he would have had to eventually abandon the strong equivalence principle and thus General Relativity in the form he presented it; basically what some people are now seriously considering today. Milgrom’s MOND comes in two forms: Modified Gravity and Modified Inertia. If you want to try to link particle binding energy to gravitational fields, I think it is the modified inertia version you want, combined with the weak equivalence principle. In cosmological terms, masses that approach each other become slightly more massive (and decelerate slightly due to the conservation of momentum), and slightly less massive (and accelerate slightly due to the conservation of momentum) when they move apart.

      In particle physics the increase of mass, in terms of binding energy on close approach, is very much larger; in this case gravitational fields would instead, I think, have to act indirectly to weaken the energy density of electromagnetic fields in the vicinity of elementary and composite particles (with perhaps something like a (k − c/r) energy-density weakening effect, where k is a gravitational field environment constant and c is a particle constant), causing what are currently described as the strong and weak forces.

      However, having gone a little way down this route with Occam’s Razor in my hand, I stall: the logical conclusion I tentatively reach is that the electron and its antiparticle would be the only stable elementary (non-composite) particles that can exist, and their size and energy content would be both environmental (background gravitational field) and velocity dependent.


      1. James Arathoon,
        You quite accurately summarise my thoughts on this issue. Abandoning the strong equivalence principle does seem to be necessary, and a dependence of the energy content on the background gravitational field is also necessary. Most people give up at this point because it seems just too absurd. But I have continued on this path for a further five years, and it seems to me that it works. I invite you to my blog to find out more, as I am sure Stacy does not want me to clog up his blog with this stuff. Publishing it, even on the arXiv, is a problem, because editors regard my assumptions as absurd before they even start reading the actual arguments.


  3. Stacy, thanks for relating this piece of history. Funny how the CDM proponents don’t ever mention the tuning phase of dealing with the 2nd peak data. You had me convinced of the no-DM reality of things a while back when you noted how variations in rotation curves matched variations in visible matter density – something that is highly implausible if the stars are embedded in huge DM halos. Hopefully, we will see CDM unmasked as the ether of the 21st century soon.


    1. Not only do they not mention it, many people seem to have convinced themselves that LCDM correctly predicted the second peak in advance, when in fact it came as a tremendous surprise. This is an example of hindsight bias/creeping determinism. Or gaslighting.


  4. Hi All,

    Has there been any serious research effort to support or falsify the idea that several LCDM ‘whole universe’ processes might better be modeled as galaxy local via the SMBH/jets? First you’ll need to suspend disbelief about breaching the SMBH event horizon — even though we see such internally driven jets in other events on Earth as well as in events of increasing energy in the stars. And what do we really know about supermassive black holes anyway? As I understand it they were not researched thoroughly and were force fit into LCDM.

    If there has been serious research considering and falsifying such a galaxy local process please provide references or pointers if you are so inclined.

    If there has not been such research, I think it would be reasonable to conduct. It seems to me that almost all processes map directly, and naive me thinks a lot of existing analysis might be largely correct, if worded awkwardly or incorrectly in such a scenario. Even where models and analysis were erroneous, we still have the data, right? In fact, I think many tensions would probably relax. No more Hubble tension, since galaxies would expand into one another and we would expect variation both from the photon and possibly from the analytical/experimental technique. I mean, basically every photon would go through many galaxies, each galaxy doing its own thing on the general process timeline of the galaxy recycling loop. So that would also mean Hubble would not have anything to do with universe-wide expansion, and we could drop the idea of receding galaxies and universe bangs and crunches. It would be more of a dynamic steady state. No known beginning/end/size. I don’t know all the dirty laundry of cosmology, but as I understand it there are several processes that typically have a process duration >> 13.8B years. That’s kinda weird, isn’t it? So now you wouldn’t have to worry about any of that. But my general impression is that you already have a lot of the grand cycles of various types of celestial objects worked out, and certainly anything you already do that is galaxy local is on solid ground. Given the vast distances and the cosmic soup, I’m not clear how much impact this would have. That is why I am asking here.

    Best,
    Mark

