At the dawn of the 21st century, we were pretty sure we had solved cosmology. The Lambda Cold Dark Matter (LCDM) model made strong predictions for the power spectrum of the Cosmic Microwave Background (CMB). One was that the flat Robertson-Walker geometry we were assuming for LCDM predicted that the first peak should fall at ℓ = 220. As I discuss in the history of the rehabilitation of Lambda, this was a genuinely novel prediction that was clearly confirmed first by BOOMERanG and subsequently by many other experiments, especially WMAP. As such, it was widely (and rightly) celebrated among cosmologists. The WMAP team has been awarded major prizes, including the Gruber cosmology prize and the Breakthrough prize.

As I discussed in the previous post, the location of the first peak was not relevant to the problem I had become interested in: distinguishing whether dark matter existed or not. Instead, it was the amplitude of the second peak of the acoustic power spectrum relative to the first that promised a clear distinction between LCDM and the no-CDM ansatz inspired by MOND. This was also first tested by BOOMERanG:

[Figure: The CMB power spectrum observed by BOOMERanG in 2000. The first peak is located exactly where LCDM predicted it to be. The second peak was not detected, but was clearly smaller than expected in LCDM. It was consistent with the prediction of no-CDM.]

In a nutshell, LCDM predicted a big second peak while no-CDM predicted a small second peak. Quantitatively, the ratio of the first peak’s amplitude to the second’s, A1:2, was predicted to be in the range 1.54 – 1.83 for LCDM, and 2.22 – 2.57 for no-CDM. Note that A1:2 is smaller for LCDM because the second peak is relatively large compared to the first.

BOOMERanG confirmed the major predictions of both competing theories. The location of the first peak was exactly where it was expected to be for a flat Robertson-Walker geometry. The amplitude of the second peak was that expected in no-CDM. One can have the best of both worlds by building a model with high Lambda and no CDM, but I don’t take that too seriously: Lambda is just a placeholder for our ignorance – in either theory.

I had made this prediction in the hopes that cosmologists would experience the same crisis of faith that I had when MOND appeared in my data. Now it was the data that they valued that was misbehaving – in precisely the way I had predicted with a model that was motivated by MOND (albeit not MOND itself). Surely they would see reason?

There is a story that Diogenes once wandered the streets of Athens with a lamp in broad daylight in search of an honest man. I can relate. Exactly one member of the CMB community wrote to me to say “Gee, I was wrong to dismiss you.” [I paraphrase only a little.] When I had the opportunity to point out to them that I had made this prediction, the most common reaction was “no you didn’t.” Exactly one of the people with whom I had this conversation actually bothered to look up the published paper, and that person also wrote to say “Gee, I guess you did.” Everyone else simply ignored it.

The sociology gets worse from here. There developed a counter-narrative that the BOOMERanG data were wrong, therefore my prediction fitting it was wrong. No one asked me about it; I learned of it in a chance conversation a couple of years later in which it was asserted as common knowledge that “the data changed on you.” Let’s examine this statement.

The BOOMERanG data were early, so you expect data to improve. At the time, I noted that the second peak “is only marginally suggested by the data so far”, so I said that “as data accumulate, the second peak should become clear.” It did.

The predicted range quoted above is rather generous. It encompassed the full variation allowed by Big Bang Nucleosynthesis (BBN) at the time (1998/1999). I intentionally considered the broadest plausible range of parameters in order to be fair to both theories. However, developments in BBN were by then disfavoring low-end baryon densities, so the real expectation for the predicted range was narrower. Excluding implausibly low baryon densities, the predicted ranges were 1.6 – 1.83 for LCDM and 2.36 – 2.4 for no-CDM. Note that the prediction of no-CDM is considerably more precise than that of LCDM. This happens because all the plausible models run together in the absence of the forcing term provided by CDM. For hypothesis testing, this is great: the ratio has to be this one value, and only this value.

A few years later, WMAP provided a much more accurate measurement of the peak locations and amplitudes. WMAP measured A1:2 = 2.34 ± 0.09. This is bang on the no-CDM prediction of 2.4.
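For readers who want to check the arithmetic, here is a minimal sketch, using only the numbers quoted above, of how far the WMAP measurement sits from each predicted range (in units of the measurement uncertainty):

```python
# Minimal sketch: compare WMAP's measured peak ratio to the a priori ranges
# quoted above. All numbers are taken from the text of this post.
A12, sigma = 2.34, 0.09   # WMAP measurement of A1:2 and its uncertainty

ranges = {
    "LCDM, full BBN range (1998/99)":   (1.54, 1.83),
    "LCDM, plausible baryon density":   (1.60, 1.83),
    "no-CDM, full BBN range":           (2.22, 2.57),
    "no-CDM, plausible baryon density": (2.36, 2.40),
}

for label, (lo, hi) in ranges.items():
    # Clamp the measurement to the range; the difference is how many sigma
    # the measurement falls outside it (zero means it lands inside).
    nearest = min(max(A12, lo), hi)
    print(f"{label:34s}: {abs(A12 - nearest) / sigma:4.1f} sigma outside")
```

By this simple accounting, the measurement lands inside either no-CDM range but more than five sigma above the top of the LCDM range.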

[Figure: Peak locations measured by WMAP in 2003 (points) compared to the a priori (1999) predictions of LCDM (red tone lines) and no-CDM (blue tone lines).]

The prediction for the amplitude ratio A1:2 that I made over twenty years ago remains correct in the most recent CMB data. The same model did not successfully predict the third peak, but I didn’t necessarily expect it to: the no-CDM ansatz (which is just General Relativity without cold dark matter) had to fail at some point. But that gets ahead of the story: no-CDM made a very precise prediction for the second peak. LCDM did not.

LCDM only survives because people were willing to disregard existing bounds – in this case, on the baryon density. It was easier to abandon the most accurately measured and the only over-constrained pillar of Big Bang cosmology than to acknowledge a successful prediction that respected all those things. For a few years, the attitude was “BBN was close, but not quite right.” In time, what appears to be confirmation bias kicked in, and the measured abundances of the light elements migrated towards the “right” value – as specified by CMB fits.

LCDM does give an excellent fit to the power spectrum of the CMB. However, only the location of the first peak was predicted correctly in advance. Everything subsequent to that (at higher ℓ) is the result of a multi-parameter fit with sufficient flexibility to accommodate any physically plausible power spectrum. However, there is no guarantee that the parameters of the fit will agree with independent data. For a long while they did, but now we see the emergence of tensions not only in the baryon density, but also in the amplitude of the power spectrum, and most famously, in the value of the Hubble constant. Perhaps this is the level of accuracy that is necessary to begin to perceive genuine anomalies – beyond the need to invoke invisible entities in the first place.

I could say a lot more, and perhaps will in future. For now, I’d just like to emphasize that I made a very precise, completely novel prediction for the amplitude of the second peak. That prediction came true. No one else did that. Heck of a coincidence, if there’s nothing to it.

90 thoughts on “Second peak bang on”

  1. Just for fun, my very imperfect qualitative summary of the current situation is:

    According to the Standard Model of Cosmology, approximately 84% of the mass in the visible universe consists of invisible cold dark matter (CDM), with the remaining 16% consisting of normal baryonic matter. Since missing transverse momentum has not been observed at the LHC, this indicates that all the very weakly interacting CDM, if it exists, has to have been created at extremely high collision energies very early on in the timeline of the big bang (certainly at higher collision energies than can currently be produced at the LHC).

    Somehow this CDM (created by extremely high energy collisions) is then mostly/all? imbued with an intrinsically, and in my opinion implausibly, low residual temperature/momentum (hence the word cold) so it can conveniently and stably orbit and be otherwise entrained by galaxies and clusters of galaxies (or vice versa) in such a way that the first two major peaks in the Cosmic Microwave Background (CMB) power spectrum remain largely unaffected; with a large angular scale dependent influence somehow only first appearing in the tail of the second peak and the subsequent peaks.

    In terms of the CMB, CDM only appears to influence the larger scale structures in the universe, fixing the failure of General Relativity in this respect by just the right amount, after numerous years of intensive tweaking. Alternatively, MOND theory and related observations indicate that General Relativity, and possibly the strong equivalence principle, break down on the scale of galaxies, and beyond. So far MOND theories seem to get the answer to “and beyond” scales wrong, starting with the observed (presumably stable) dynamics of galaxies within clusters.

    A relativistic MOND theory has recently been put forward that can be used to account for all the observed peaks in the CMB power spectrum, not just the first and second. A theory that modifies General Relativity on the scale of galaxies and beyond also produces changes in the predicted CMB power spectrum on a much larger implied length scale. The question is, can this new relativistic MOND theory also do a better job of the galaxy cluster scale dynamics than earlier MOND theories? If it needs further tweaks then they will have to be made without messing up the CMB power spectrum predictions.

    I feel we are headed for a major scientific conceptual revolution in cosmology any time now. The last such conceptual revolution was the Earth sciences revolution, which culminated in the 1950s to the early 1960s. Alfred Wegener basically initiated that revolution in 1912 with his theory of continental drift. Mordehai Milgrom first came up with MOND in 1982. Add 40 to 50 years (or basically two generations), as for the Earth sciences revolution, and you get to 2022-2032.


    1. Yes, it takes a long time for these things to sink in. Many earth scientists have noted to me the continental drift analogy.
      Both MOND and CDM get clusters wrong, just in different ways. The widespread perception that CDM is somehow better in clusters is a matter of choice. The consistency of the cluster baryon fractions with the cosmic baryon fraction is hailed as a great success, but the fact that this only applies to the most massive clusters – and fails for all other systems in the universe, even low mass clusters – is ignored. That the mass-temperature relation has the slope predicted by MOND is ignored; scientists seem mostly to be unaware that this is the case, even as they make up stories about preheating and entropy floors to explain the discrepancy conventionally. I could go on; the point is that clusters are not the clear win for CDM that they’re often portrayed to be.


  2. 100 years ago, Einstein tried to use the equations of general relativity to describe the internal structure of elementary particles. Essentially, he succeeded, but there was not enough experimental evidence to support his theory, so it was forgotten. Subsequently, QCD was invented for this purpose, but is, I believe, inferior to Einstein’s original approach. Reinstating Einstein’s theory, and relating the measured masses of electron, proton and neutron to properties of the gravitational field in which we measure them, gives a prediction for the fudge factor that needs to be introduced into the modern theory: [m(e)/(m(n)-m(p))]^2 = 0.1561. Bang on, I reckon.


    1. I don’t think Einstein ultimately succeeded in the way you suggest, since he would have had to eventually abandon the strong equivalence principle and thus General Relativity in the form he presented it; basically what some people are now seriously considering today. Milgrom’s MOND comes in two forms: modified gravity and modified inertia. If you want to try to link particle binding energy to gravitational fields, I think it is the modified inertia version you want, combined with the weak equivalence principle. In cosmological terms, masses that approach each other become slightly more massive (and decelerate slightly due to the conservation of momentum), and slightly less massive (and accelerate slightly due to the conservation of momentum) when they move apart.

      In particle physics the increase of mass, in terms of binding energy on close approach, is very much larger; in this case gravitational fields instead would, I think, have to act indirectly to weaken the energy density of electromagnetic fields in the vicinity of elementary and composite particles (with perhaps something like a (k-c/r) energy density weakening effect, where k is a gravitational field environment constant and c is a particle constant) causing what are currently described as the strong and weak forces.

      However, having gone a little way down this route with Occam’s Razor in my hand, I stall: the logical conclusion I tentatively reach is that the electron and its antiparticle would be the only stable elementary (non-composite) particles that can exist, and their size and energy content would be both environmental (background gravitational field) and velocity dependent.


      1. James Arathoon,
        You quite accurately summarise my thoughts on this issue. Abandoning the strong equivalence principle does seem to be necessary, and a dependence of the energy content on the background gravitational field is also necessary. Most people give up at this point because it seems just too absurd. But I have continued on this path for a further five years, and it seems to me that it works. I invite you to my blog to find out more, as I am sure Stacy does not want me to clog up his blog with this stuff. Publishing it, even on the arXiv, is a problem, because editors regard my assumptions as absurd before they even start reading the actual arguments.


      2. …Einstein… would have had to eventually abandon the strong equivalence principle and thus General Relativity in the form he presented…

        The strong equivalence principle has nothing to do with GR as Einstein presented it. At best the weak principle provided Einstein with a heuristic framework for the derivation of GR.

        Since Einstein developed general relativity, there was a need to develop a framework to test the theory against other possible theories of gravity compatible with special relativity. This was developed by Robert Dicke as part of his program to test general relativity. Two new principles were suggested, the so-called Einstein equivalence principle and the strong equivalence principle…
        https://en.wikipedia.org/wiki/Equivalence_principle

        In other words, neither the Einstein nor the strong EP has even a tenuous connection to Einstein’s GR. They are however deeply entangled in the post-Einstein revisionist version of GR that has become de facto dogma in modern theoretical physics. One result is that scientists now commonly assert that the speed of light in vacuo is a universal constant, despite the observed fact that the speed of light varies with position in a gravitational field, and despite the fact that in Einstein’s GR the speed of light varies with position in a gravitational field. Modern theoretical physics is a hot mess of unscientific claims at odds with physical reality.


        1. The weak equivalence principle requires only that the ratio of gravitational mass and inertial mass be constant when measured at any one selected time and place. In principle that ratio of gravitational mass and inertial mass can change over time and from place to place in the universe.

          If you disallow or think you’ve disproved the second sentence, then you can only end up with the strong equivalence principle (inertial mass = gravitational mass) because a fixed universal constant ratio can always be absorbed into the universal constant of gravitation.

          Dicke suggested lots of definitions like trying to make a vague distinction between active gravitational mass and passive gravitational mass. It all seems like dancing on the head of a pin to me.


          1. The weak equivalence principle in this form has been tested to great accuracy, and no discrepancy found. The strong equivalence principle is effectively tested, to much lower accuracy, by experimental measurement of G, and significant discrepancies have been found. I am wondering how long it will take before people stop blaming experimental error and start recognising this for the bombshell that it really is.


              1. I was also searching for this, triggered by Robert’s comment, and found a paper by the same author but from 2016 (no link – just the title – “Are gravitational constant measurement discrepancies linked to galaxy rotation curves?”)
                In that paper he uses MoND but neglects the external field effect, implying a modification of inertia. He even states that CRATER II can be used to distinguish between modified inertia and modified gravity based on the value of the velocity dispersion.


  3. Stacy, thanks for relating this piece of history. Funny how the CDM proponents don’t ever mention the tuning phase of dealing with the 2nd peak data. You had me convinced of the no-DM reality of things a while back when you noted how variations in rotation curves matched variations in visible matter density–something that is highly implausible if the stars are embedded in huge DM halos. Hopefully, we will see CDM unmasked as the ether of the 21st century soon.


    1. Not only do they not mention it, many people seem to have convinced themselves that LCDM correctly predicted the second peak in advance, when in fact it came as a tremendous surprise. This is an example of hindsight bias/creeping determinism. Or gaslighting.


  4. Hi All,

    Has there been any serious research effort to support or falsify the idea that several LCDM ‘whole universe’ processes might better be modeled as galaxy local via the SMBH/jets? First you’ll need to suspend disbelief about breaching the SMBH event horizon — even though we see such internally driven jets in other events on Earth as well as in events of increasing energy in the stars. And what do we really know about supermassive black holes anyway? As I understand it they were not researched thoroughly and were force fit into LCDM.

    If there has been serious research considering and falsifying such a galaxy local process please provide references or pointers if you are so inclined.

    If there has not been such research, I think it would be reasonable to conduct. It seems to me that almost all processes map directly and naive me thinks it might be that a lot of analysis is largely correct, if worded awkwardly or incorrectly in such a scenario. Even where models and analysis were erroneous, we still have the data, right? In fact, I think many tensions would probably relax. No more Hubble tension since galaxies would expand into another and we would expect variation both from the photon as possibly from the analytical/experimental technique. I mean basically every photon would go through many galaxies, each galaxy doing its own thing on the general process timeline of the galaxy recycling loop. So that would also mean Hubble would not have anything to do with universe wide expansion, and we could drop the idea of receding galaxies and universe bangs and crunches. It would be more of a dynamic steady state. No known beginning/end/size. I don’t know all the dirty laundry of cosmology but as I understand it there are several processes that typically have a process duration >> 13.8B years. That’s kinda weird isn’t it? So now you wouldn’t have to worry about any of that. But my general impression is that you already have a lot of the grand cycles of various types of celestial objects worked out, and certainly anything you already do that is galaxy local is on solid ground. Given the vast distances and the cosmic soup, I’m not clear how much impact this would have. That is why I am asking here.

    Best,
    Mark


      1. Can anyone tell me how you model expansion? Do you show the energy density of spacetime universally declining at the same rate? I have a different alternative to consider. In the galaxy local recycling model the energy density of spacetime aether declines as the square of the distance from the AGN, disc, and jets. So this actually means that every transiting photon would ride up the gradient upon entering and down the gradient while exiting the galaxy. I don’t know what that would imply for redshift or luminosity. I’m wondering if those measures would behave differently in such a galaxy local cosmology. Any brainstorm thoughts?


        1. First we must reflect on the teachings of Emmy Noether regarding conservation. Conservation may be thought of as a double entry bookkeeping system. If one side goes up, the other must go down. This is the essence of differentiability. No discontinuities are allowed in conservation. Fortunately nature provides spacetime æther as the ultimate bookkeeper. As a result of Noether’s theorem the net exchange of conserved quanta between spacetime æther and standard matter must move in a debit-credit or credit-debit arrangement. Now that all said, spacetime æther and standard matter are structure and many transactions are in quantum units. However, not all transactions are quantum — gravity is continuous.


  5. From an outsider’s perspective it does look like dark matter might end up being the 21st century version of the planet Vulcan, with astronomers and astrophysicists trying to find something that simply isn’t there in order to shoehorn an observed discrepancy into a model that might not be true.


    1. This is the best analogy that I have heard about this situation with dark matter. Vulcan was indeed a form of dark matter to explain the discrepancies between ND and observations in the orbit of Mercury, before GR explained it.


  6. What exactly is preventing RelMOND (and MOND theories in general) from simply being an effective field theory whose more fundamental underlying theory is some cold dark matter theory like Hossenfelder’s superfluid dark matter theory?


    1. Nothing, except that using some Dark Matter as a medium is more complicated than not using it.

      If Dark Matter was detected, or if they could make predictions that can separate them from simple MOND phenomenology, then we could have a more informed opinion. But, right now, to me it looks the same, just more complicated.


      1. Relativistic MOND, just like GR in general, is not compatible with quantum mechanics, so we should view RelMOND as just the classical limit valid only at large enough scales, and some quantum gravity + cold dark matter fields as one possible quantisation route. What I do not get is those on both sides who argue that either MOND theories or CDM theories are false, or both are incompatible with each other, which Stacy McGaugh seems to be saying in this article.


    2. Please try to look at it completely vice versa

      If you carefully go through this blog, you can find solid evidence that the inferred DM distribution in galaxies is completely dependent on the observed baryonic matter distribution. So it is much less complicated to expect that the observed MOND effects are simply due to properties of baryonic matter itself. This simple and natural explanation is also supported by the steadily increasing number of empty-handed DM search experiments. The only challenge that remains is to explain what MOND actually means. Hopefully it is something really cosmological…
      Of course, there are also other duties attached to pure DM, like young-universe development or galaxy growth, but these are actually problems of worthless simulations with too many adjustable parameters rather than of the universe itself. When one gets rid of them one can leave the “concordance model” and start to speculate about the universe based on observations only – thus freely and without prejudices.


      1. You misunderstand me. I said that RelMOND is valid at large enough scales, where quantum effects are negligible. Everything you say about MOND being better for galaxy astrophysics because MOND is simpler is of course valid because there are no quantum gravity effects involved, and a classical effective field theory is fine, just as General Relativity is fine for most astrophysics without invoking quantum gravity. But eventually MOND/RelMOND will have to be replaced with a quantum theory, especially for very early cosmology, and (particle) CDM is just one of the possible routes to a quantum MOND.

        What would be interesting is if the string theorists or loop quantum gravity people find a way to incorporate a quantum MOND in their quantum gravity framework without the use of CDM, which can then show that even a quantum MOND doesn’t have to be using CDM. But right now CDM seems to be the only quantum theory of MOND.


        1. First, yes – MOND can and should be treated as an effective theory that can and should lead to some deeper theory. It very explicitly is not valid to equate CDM with quantum MOND. CDM is simply cold dark matter – slow moving particles that did not participate in BBN. That does not lead to MOND, which is why advocates of CDM get really unhappy when MOND is mentioned. And yes, there might be a connection of MOND as an effective theory with quantum gravity – but again, this has nothing to do with CDM as it is commonly defined. So yes, I am saying CDM and MOND are mutually exclusive as originally defined. One may of course redefine terms to make them compatible, but this would obviously be a form of hindsight bias: “Sure! That completely different thing is exactly what we meant all along!”


          1. So would the superfluid cold dark matter model by Berezhiani and Khoury or the one by Hossenfelder and Mistele, quantum particle theories with two phases whose condensed phase exhibits MOND properties, not count as a CDM theory by your definition?


            1. No, it does not. The goal of these ideas is to capture the nominal successes of MOND in galaxies and CDM on much larger scales. It has to be fundamentally different from CDM – as originally conceived – in order to do this.


              1. Very well, then. It seems that we have two different definitions of cold dark matter. I believe the commonly used definition of cold dark matter in the literature (and the one I use) is any dark matter that moves slowly compared to the speed of light, and since the superfluid dark matter theories are dark matter theories with matter moving slowly compared to the speed of light, that makes those theories theories of CDM according to that definition. The astrophysicists in 1982 simply didn’t consider more complex CDM theories as at the time the simple CDM theories like WIMPs and SUSY seemed to work, and it took about thirty some years before theorists came up with more complex CDM theories whose cold dark matter have multiple phases and phase transitions like normal matter, to address deficiencies in the simpler CDM theories. But if you wish to define CDM theories as the subset of theories of dark matter moving at slow speeds compared to the speed of light that were studied by mainstream astrophysicists beginning in 1982 then I would agree with everything you have said. But I disagree with your definition of what a CDM theory is; it’s like restricting string theory to bosonic string theory and calling it a failure because string theory – as originally conceived – failed as a theory of the strong force.


          2. Perhaps it is better to say that a subset of CDM theories are quantum MOND theories. Obviously certain CDM models studied in the astrophysics community, like WIMPs, axions, sterile neutrinos, and supersymmetric partners are not MOND compatible, but there are other CDM models that can replicate MOND effects (such that MOND is an emergent force à la hydrodynamics or the nuclear force), two of which I mentioned above, and ought to be considered quantum MOND theories in my opinion.


            1. One can of course try to explain simple facts in a complicated way, but the heuristically better attitude is to keep to Occam’s razor.

              The essence of MOND is about the smallest acceleration in the case of ANY NON-ZERO acceleration. It is connected to diffeomorphism invariance and to a dynamical Rindler horizon in the presence of any non-zero acceleration (near a galaxy). As a consequence, the observable universe is also a dynamical entity and its gravitational attraction as a whole might be (imho) the source of the MOND effect.

              Quantum theories are not background independent and do not so far fulfill the diffeomorphism condition (it would break the probability interpretation), therefore they cannot be directly connected to MOND. That’s actually the same reason why we do not have a working quantum gravity.


              1. Which is why I would consider relativistic MOND theories to be the classical effective field theories that should be enough for most astrophysicists, and CDM/quantum MOND/emergent MOND to be part of quantum gravity and early cosmology that is being studied by many theoretical physicists.


  7. Sure. CDM is any dynamically cold dark matter that is (1) invisible, (2) slow moving, with v << c, and (3) does not participate in BBN. By that definition, it could be anything from 100 GeV WIMPs to million solar mass primordial black holes to strange nuggets to free floating space donkeys. Such CDM forms structure, including dark matter halos that follow an NFW profile. That is not observed. Things like bosonic/superfluid dark matter are auxiliary hypotheses that we tack onto the CDM paradigm in an effort to save it from one of its most fundamental predictions. So while I agree with the broad definition you give, it is a form of creeping determinism to call these things CDM. They were not what we meant by CDM when the hypothesis was made; they are ad hoc alterations made to save the phenomena.
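    For concreteness, here is a minimal sketch of the NFW halo profile mentioned above (the standard form from the simulation literature; the parameter values below are arbitrary round numbers, chosen only to illustrate the shape):

```python
import numpy as np

G = 4.30091e-6  # Newton's constant in kpc (km/s)^2 / Msun

def nfw_density(r_kpc, rho_s, r_s):
    """NFW density profile rho(r) = rho_s / [(r/r_s)(1 + r/r_s)^2].
    Note the 1/r cusp toward the centre, a generic CDM-only prediction."""
    x = r_kpc / r_s
    return rho_s / (x * (1.0 + x) ** 2)

def nfw_vcirc(r_kpc, rho_s, r_s):
    """Circular speed implied by the mass enclosed within r for an NFW halo."""
    x = r_kpc / r_s
    m_enc = 4.0 * np.pi * rho_s * r_s**3 * (np.log(1.0 + x) - x / (1.0 + x))
    return np.sqrt(G * m_enc / r_kpc)

# Arbitrary illustrative halo: rho_s in Msun/kpc^3, r_s in kpc.
r = np.array([1.0, 5.0, 20.0, 100.0])
print(nfw_vcirc(r, rho_s=1.0e7, r_s=20.0))  # km/s; rises, then slowly turns over
```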


  8. Apass – The EFE is hard to avoid, which is why I don’t know what to make of the Cavendish experiments. It does depend on all the vectors, and I haven’t tried to track them all down. Seems implausible, but then so does the tension in the measurements of Newton’s constant.


    1. The derivation made by the author in the first paper is without the EFE, although he was clearly aware of it. I haven’t looked at the other paper (baby’s bath is more important at this moment…) but, at least in 2016, he could not decide if MoND is a modification of inertia or of gravity. He said that you made some predictions with and without the EFE for CRATER II, and that with the follow-up observations these can be used to discern between the interpretations – modified inertia does not depend on the external field, so if CRATER II has a velocity dispersion around 4 km/s, that would imply a modification of inertia.
      But you are confident that EFE is a real effect – does this imply that you are sure that MoND is a modification of gravity?


  9. First off, discerning between modified gravity and inertia is an incredibly important debate that the scientific community is failing to have. That said, the distinction being made between the two here, with the first having the EFE and the second not, is incorrect. The EFE can and should appear in both flavors of MOND. There are subtle differences, but it isn’t just ON or OFF. So, yes, I’m confident the EFE is a real effect – it manifests clearly in the data for the Local Group dwarfs, not just Crater 2. But that does not mean I think MOND has to be a modification of gravity rather than inertia; that’s a false dichotomy.


  10. Some thoughts…

    If the new results regarding the dispersion in experimental measurements of the constant of gravitation, G, are on the right track, then presumably the small effects of the larger masses on the inertia of the smaller torsionally suspended masses can only be separated out, in close proximity to the earth, because different sizes of mass have modified-inertia effects operating on completely different length scales, assuming the length scale for each is directly proportional to inertial mass (m) (i.e. L = G k m/a_0), with gravitational mass (M) = k * inertial mass (m), and m = E/c^2.

    Apass hints that this does not at first sight seem compatible with the EFE, but I don’t agree.

    I think astronomers implicitly assume that they are observing the gravitational mass of distant galaxies, rather than the inertial mass that I think they are actually observing. After all if you implicitly believe the strong equivalence principle is true, it doesn’t matter one way or the other – who cares! The problem only arises if the weak equivalence principle is true, with the strong equivalence principle being demonstrably false; then you’ve got to work out exactly what is actually being observed out there in the distant universe.

    Consider a dwarf galaxy orbiting a massive central galaxy. Assume the inertial mass of the dwarf galaxy decreases slightly with increasing distance from the central galaxy according to some yet to be determined function. For the sake of simplicity assume that all parts of the dwarf galaxy change in inertial mass by the same percentage, i.e. it is sufficiently far from the central galaxy to be approximated by a point.

    Now assume only the weak equivalence principle is true, with the unobserved gravitational mass remaining constant in the formula [gravitational mass = k * inertial mass]; then k must reduce in direct proportion to the observed inertial mass increase, and vice versa.

    If astronomers observe inertial mass in far off galaxies, rather than gravitational mass, the observed MOND effects are easier to interpret as a change of MOND scale length, Lambda (i.e. Lambda = L/k = G m/a_0).

    Lambda increases for dwarf galaxies with a growing External Field Effect in the Quasi-Newtonian Regime and eventually becomes larger than the orbiting dwarf galaxy itself, to give the full External Field Effect Newtonian Regime.

    (Willi Bohm, in a very speculative paper, gives an interesting discussion of the consequences of a modified inertia theory with a variable scale length: “A Modified Mass Concept could explain both the Dark Matter and the Dark Energy Phenomenon”, 2010.)


    1. Sorry, I’ve got the above wrong, because the scale length should be a squared quantity and the gravitational mass at the centre of the galaxy is the one inferred from the observations used to calculate the scale length (L^2 = G M/a_0). Therefore, converting to the inertial mass, the scale length is (L^2/k) = G m/a_0.

      When the dwarf galaxy is in the external field of another larger galaxy, the closer the dwarf galaxy is to the centre of the larger galaxy, the larger becomes the inertial mass of the dwarf galaxy and the lower is k, thereby increasing the effective scale length calculated for the dwarf galaxy. The gravitational mass remains constant at all times.

      I’ve really managed to confuse myself here and I’m not sure this is completely right either, but I hope you get the idea.


      1. If this sort of modified inertia scale length variation based on the mass of the gravitational object being studied is right, and that’s a big if, then Pandora’s box really opens up, because for example Jupiter and the other massive planets might then have a mechanism by which they nudge the weather due to small changes in the inertia of planetary winds over time, depending on the distance of the Earth to Jupiter and/or the rate of change of this distance.

        It might enhance the chance of tidal locking of planetary satellites and also satellite dwarf galaxies orbiting other larger galaxies.

        The scale length associated with the sun mass is much greater than the size of the solar system so no gravitational funny business (other than what is already known in terms of the precession in the orbit of Mercury) is predicted there.

        I know this sounds like the stuff of crackpot science, but the hope is that ideas such as this can prove much more testable in the years ahead, than some long standing well publicised mainstream theoretical physics research programs.
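        As a quick numerical check of the scale-length claim above, here is a minimal back-of-the-envelope sketch using the standard point-mass estimate r = sqrt(GM/a_0) and a commonly quoted value of a_0 (an illustration only):

```python
import math

G     = 6.674e-11   # m^3 kg^-1 s^-2
A0    = 1.2e-10     # m/s^2, commonly quoted MOND acceleration scale
AU    = 1.496e11    # m
M_SUN = 1.989e30    # kg

def mond_radius(mass_kg):
    """Radius at which the Newtonian acceleration GM/r^2 drops to a0,
    i.e. where MOND effects would begin for an isolated point mass
    (ignoring the Galactic external field)."""
    return math.sqrt(G * mass_kg / A0)

print(mond_radius(M_SUN) / AU)  # ~7000 AU, far beyond the planets
```

        So the planetary region of the solar system sits deep in the Newtonian regime, consistent with the statement above that no gravitational funny business is expected there.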


    2. Just a short correction – it is not me who hints at it, it is the author (Norbert Klein) of the papers that Dr. McGaugh and I found who does.


      1. Yes, I think that was clear. It does not encourage me to put in the work to understand what he’s done – which is the problem more generally – we’re all overworked as it is; no one has time for genuinely new ideas. The tensions in measurements of G may well be an important clue, but it sounds like we may need a MOND-like theory that isn’t quite MOND in this respect.


  11. How very true, that “no one has time for genuinely new ideas”. I think I agree that a MOND-like, but not MOND, theory will be required. The critical acceleration indicates a lower threshold in the gravitational field strength at which classical theories of gravity fail, and that to me is the tell-tale sign that quantisation of the gravitational field has taken over. How it does this, I have no idea, but I am fairly sure that current approaches to quantum gravity will not solve this problem either.


    1. Indeed – many attempts to formulate quantum gravity theories are made in ignorance of MOND effects. This is like playing solitaire with a deck that’s missing an entire suit of cards.


    2. As Prof. McGaugh can verify, I have also been an advocate of MoND being a quantum effect for many, many, years. I have been pestering him about the idea for some 20 years. I am also a mathematician, but not as broadly experienced as yourself.
      The only quantum physicist I know of who takes MoND into consideration when working on quantum gravity is Sabine Hossenfelder, who has an excellent blog that I recommend if you are not already aware of it.
      I expect there are other such quantum physicists, but I don’t know their names.
      I looked at your blog and noticed you had recently posted an article discussing the use of the term “Crackpot”. I don’t know if this will be of use to you, but let me briefly describe my use of the term. Again, Prof. McGaugh can confirm this. I refer to myself as a Crackpot. This tends to take the wind out of the sails of critics, especially when I add the fact that sometimes Crackpots turn out to be right. I am not discouraged by the term, rather I try to embrace it, in a modified form, and most of the time just don’t worry or care about it. Time will tell who the real crackpots are.


      1. Well, I would certainly be very surprised if I was the first person to suggest MOND is a quantum effect. There are lots of clever and very knowledgeable people in the world, and the idea is bound to have occurred to some of them. I am not sure I would describe myself as broadly experienced, even in mathematics. But I am interested in difficult problems, if (and only if) there is an opportunity to think about them in new ways. I do follow Sabine Hossenfelder’s blog, and I find her ideas on emergent gravity very interesting. Not that I really think they are the answer, but they provide plenty of food for thought. I must admit I am quite attracted by the idea of embracing the term “crackpot” as a positive label. Especially when I discovered that the crackpot ideas that got me banned from the arXiv physics sections were (essentially) first published by Einstein.


      2. I know Lee Smolin has considered it, so Sabine isn’t the only one. But they do seem to be in the minority. Like Dr. Hossenfelder, Dr. Smolin has been an outspoken critic of much of modern physics – he wrote The Trouble with Physics – an important work in its own right.


        1. “The trouble with physics” is a classic that should be required reading for all physicists, with refresher courses every five years, especially for those in more senior or management positions.


          1. I’ve read Lee’s more recent book but I’ll think about ordering the Trouble with Physics.

            As to the term crackpot and the hostility of some physicists/cosmologists towards independent ideators, it greatly disturbs me. I get it that physics/cosmologists/astronomers get peppered with ideas which are largely nonsensical, but I still don’t think that should be a license to be a bully. These ideators are people too and they are excited and they do genuinely think they are on to something, even if they aren’t. And then they get crushed by their heroes. Wow, it is such a bad look for the fields. Plus, if you think every paradox and open problem is an indicator that something major is wrong, and if you listen to Sabine, Lee, Roger and others that THEY think something is wrong then it’s gonna be really bad for the field when that is discovered and all those mistaken interpretations come to light. Also, I think that the intellectual superiority and hostility is a carry down from the early days of physics and while ideators like me do get the heck beaten out of us, I can’t really imagine what it must be like inside the field as a new person with ideas and being told directly or indirectly “shut up and calculate” or any of the other intellectual mind games people play. No wonder so many of them feel like the deck is stacked against them, and often they are right. Lastly, there are so many more neutral or kind ways to deal with these eager ideators. 1) ignore them, 2) mute them, 3) block them – although that is sort of aggressive if they can see that, 4) if you feel compelled to respond, just have a short canned answer something like “Thank you for sharing your ideas, but I will not be engaging because I am entirely focused on my own work.” That’s not hurtful. 5) The field could even have some fun with it – create a reddit forum for the ideators with some rules that they must state their idea in N words or less, with no appeal to myth, and as scientifically or logically as they can muster. Let the ideators themselves rate the ideas and a few scientists agree to look at the top M per quarter or year. You never know, a gem might appear. 6) #5 with an entry fee and the proceeds go some to the reviewers of the top M (again let the ideators winnow down the mass of entries) and some of the proceed go to a good cause. Everyone is happy.

            As for me, I have been ideating on a TOE for 2.5 years and have a unique approach of starting where Planck left off, with Planck radius particles that can carry from 0 to Planck energy and are charged, immutable and conserved. Take duos of those particles and toss them into a 3D Euclidean void at a certain particle density and energy density (two free parameters) and I think the universe will emerge. Using just logic alone you can come to some fantastic ideas. If they are immutable, there is no singularity. Then what happens? Well maybe those SMBH are jetting them out? So you go through all the outreach material and you find out that bang theory doesn’t require a single place or time. And you find out that inflation must still be going on, but we can’t see it. And you find out about pocket universes and multiverses from Dr. Guth. And then you have to deal with some of the issues. What about the event horizon? Surely a core of Planck particles at Planck energy in an SMBH could overpower an event horizon of its own making, right? Maybe spin is involved, maybe the frame dragging around the polar axis creates a vortex that allows lower energy spacetime aether to approach the event horizon and bang, the jet breaches. Ok, so what about expansion? Ok, you scratch your head on your fledgling ideas and it dawns on you, hey wait a second, if spacetime is an aether and each galaxy AGN SMBH occassionally and independently erupts in a bang and if Planck plasma inflates from Planck density to what we can observe, then you struggle and it dawns on you – what if expansion is galaxy local! Sure, that makes sense, if inflation comes from the jet each galaxy will expand INTO its neighbors. So now you have repositioned the bang, crunch, and expansion local to galaxies. And galaxies are your ‘pocket universes’ in the multiverse. Much more parsimonious. Oh, and you get crunch for free because it is a black hole and that is nice for all the bright scientists who’s math shows a crunch (Roger, Lee, etc.). And you have expansion but not universe outward, galaxy outward, so it’s in opposition and nothing is going anywhere fast and now the Hubble tension goes away and you realize that oh, those measurements vary because photons take different paths through different galaxies, each of which is in a different stage of a general recycling process. Yeah, you still have some issues to work out with the idea of an ‘accelerating’ universe but you have enough mechanisms now that you will chip away at that issue.

            Meanwhile your spacetime aether is a construct made of particles made of Planck spheres and it provides a Riemannian geometry, except at the smallest scales. So you happen across UV divergence, and IR divergence, and renormalization and you think — uh oh, those integrals bounds don’t go to zero or infinity. There are cutoffs where things are going to get chunky and discrete. Meanwhile, since you started with a Euclidean void, you realize that Einsteins inertial observer needs to be repositioned from low energy spacetime into the Euclidean void. And you realize that nature is a trickster to provide two overlapping geometries sandwiched together like that. And you realize you will need to rewrite the basic equations from the perspective of the Euclidean observer. Since it is an aether, you realize the Euclidean observer will see a variable speed of light, while the Riemannian observer IN the aether will see a constant speed of light because the particles of the aether must guarantee a relationship between their radius and their frequency. That relationship is sort of counterintuitive because at higher energies, those aether particles get smaller and their frequency decreases – but that fits with Einstein and curvature and dilation and contraction. Oh, that means that the aether has a permittivity and a permeability from the point of view of the Euclidean observer and that is nice for the c^2 = 1/sqrt(permittivity * permeability) if they vary with the energy in the aether and that makes sense.

            Every once in a while you pop your head up to share what you are finding with the physicists and the cosmologists and they beat the crap out of you. So, you are like, well ok, if you don’t want to be involved picking all this low hanging fruit, then no problem. Unfortunately it delays the benefits for intelligent life and the environment. So back to picking fruit. You think more about your Planck core in a SMBH and you realize that as speed of light slows in the Euclidean reality that ingested particles will not be moving in the Planck core. All the energy will be in electric field of attraction and repulsion. No magnetism (not moving), no kinetic energy. Oh that’s why the temperature is zero!. Oh, that’s only one microstate, therefore no entropy. Oh there’s certainly no information in a Planck core. Well arguably one bit, Planck core or not Planck core. So you think well that knocks down a few more paradoxes and open problems. Might also mean we need to adjust the Second Law to an equality considering we haven’t been accounting for the aeither. Of course, you realize, the aether is the grand accountant for all things conserved! So you think about how physicsists are all wrapped up in their issues with some kinds of symmetries not being conserved, yet they don’t understand the aether, and you think well, that math is way over my head, but I wonder if the aether is actually doing the conservation and they don’t know it. Set that aside, too much intense math. Can’t pick all the fruit.

            So you surface again to tell folks and again they greet you with hostility and bullying. You can not do physics without our 10^-20 math! You cannot do physics without understanding everything we have done! Really? Well your outreach stuff is actually pretty good. I don’t have a career to worry about. I have the luxury of listening to all the tells when scientists aren’t sure about things and how they each tell a somewhat different narrative. And I recognize that narrative is interpretation and is a degree of freedom I have as long as I preserve the observations and most of their math. And when you come back kindly and say, but don’t you understand emergence they just get madder. And when you say, don’t you understand you are looking at structure that is 15 orders of magnitude above where I am ideating, they just get madder. And then you try to give examples, like thinking about probing around the outside of a computer and what could you discover about the CPU chip, and if you took of the cover and probed around the circuit board what would you know, and if you probed around the pins and developed techniques to probe the ball grid array what would you know, and if you figured out how to take off the package and keep it running what would you know with your probe? Do you know anything yet about semiconductors or junctions or gates. Nope. Still it doesn’t get through because they are in so deep. So then you say things like you are climbing K2 or Everest the hard way, I found a nice heated paved gently sloped path on the other side. No dice.

            So you go back to cosmology and think about all the new mechanisms that could explain galaxy rotation curves. First you have this spacetime aether generator erupting every so often for however long it erupts. So you have this inflationary aether wind. Maybe that has an effect. And you have this kind of interesting observation that the internal Planck spheres in a Planck core have no way to transmit their gravitational mass since all their neighbor particles are maxed out. Wow, so that means that as matter is ingested, that if a Planck core has formed and the ingested matter-energy joins it, it falls off the accounting books for a while, until it jets out if it ever does. So that’s interesting for the orbitals of the stuff in the galaxy that had been attracted to that stuff which is no longer gravitating. You think well the surface of the Planck core is still presenting it’s mass and that core must grow while the SMBH is not jetting and shrink when it is jetting and that’s interesting because the surface area is varying over large time scales, but you don’t know what to do with that. Can’t pick all the fruit. And you realize that when those jets blow that is a lot of mass that all of a sudden reappears and probably does something to galaxy dynamics, but haven’t figure out what yet.

            So you redouble your efforts to keep working at it, in hopes you can find something numerical or mathematical that is new and convincing, because it is already clear that simplicity and logic are not doing the trick. And so goes the life of an ideator….hopefully to connect sometime if they are right because it would be unfortunate to croak before the idea gets out and then it takes another 100 years for scientists to figure it out.

            Best, Mark


              1. Your ideas are ‘sort’ of interesting, but I fear you may be falling into the same trap that many so-called ‘top’ physicists have fallen into, namely that you are adding more and more open parameters to your model, in order to get it to work the way you want. That is where ideas like “Dark Matter” and “Dark Energy” come from, as well as “Aether” and “Epicycles”. Each is just an external, foreign addition to the original idea to make things work, without having to figure out why the original idea has failed. This sort of thing is quite common in science. Less common is the moment when someone finally says “Hey, instead of continuing to add more and more extraneous hardware to the engine, why don’t you just remove that potato from the tailpipe?” So the real trick is to be the one to figure out what the potato is and what happens when you remove it. What do I mean by potato? For Copernicus it was the assumption that the Earth was the center of the Universe. For Kepler it was the assumption that orbits are perfect circles. For Newton it was the assumption that up and down are fundamental properties of the universe (replacing that with the assumption that mass causes gravity). For Einstein it was the assumption that space and time are absolute.
              In each case the scientist removed the faulty assumption (potato) and showed that the motion of objects (engine) is modeled better without it.
              If you really want to be taken seriously, and have the best chance of actually being right, I would recommend looking for the &%$# potato, i.e. false assumption that has been around for so long people have forgotten that it is really just an assumption. Once you think you have found it, find the simplest possible way to mathematically demonstrate that changing, or totally getting rid of, the assumption results in a more reliable model.
              This sort of approach is harder, much harder, than just adding more and more decorations to a theory to make it look a bit more like what you want, but, in the long run, you end up with something that actually works, instead of being an unwieldy, ad-hoc, conglomeration of unverified assumptions, such as modern cosmology has largely become.
              Sorry for the rant. I was trying to show that often ‘real’ scientists are just as guilty of fallacious assumptions and reasoning as any amateur.


              1. I suggest it is not so much the potatoes that are the problem, as the spanners that are left in the works. When I point out the spanners that I think shouldn’t be there, the response is always “What do you know about engines? That’s not a spanner, it’s an essential part of the machinery”. Well, excuse me, but I don’t have to be a highly qualified mechanic to recognise a spanner when I see one.


              2. A good way of putting it; we all suffer because we need to learn so much to reach the boundaries of science and make original contributions that we do not have enough time to go back and check everything we learn, but have to take on trust that others have checked it for us.

                In the end, experimental measurement (or in cosmology, observation, because we only know of one universe) is the only arbiter. What I want to see as a scientist is a prediction of something that is not already known; so it was necessary but not sufficient for, say, General Relativity to account for the additional known precession of Mercury’s orbit, but it also predicted the bending of starlight passing near the Sun to be twice the Newtonian value, the change in clock frequency in a gravitational field, and the existence of gravitational waves.


  12. Actually, my simple model has only two free parameters: the density of the Planck radius spheres themselves, and the density of the energy they carry, which is convertible between electromagnetic and kinetic forms. I can’t imagine a simpler model. It is only through emergence that complexity arises.


  13. Hello,

    have you heard in the news of this paper,

    An excess of small-scale gravitational lenses observed in galaxy clusters, arXiv:2009.04471

    popular news articles,

    Hubble Discovery Hints at a Serious Problem With Our Understanding of Dark Matter
    https://www.sciencealert.com/new-discovery-suggests-a-problem-with-our-understanding-of-dark-matter

    What do you think, and how can MOND perhaps better explain this? How does MOND handle small-scale gravitational lenses?

    thanks


    1. Yes, I’ve heard of it – but only yesterday, so I haven’t had time to read it, let alone provide a sober assessment. I have been saying for some time now that clusters don’t make a heck of a lot of sense in either theory, so that much seems consistent.
      For lensing in MOND, one has to worry more about objects along the line of sight contributing to the lensing than in pure GR, not merely those in the cluster itself. I would also expect relatively more high redshift sources, which would affect where the caustics lie relative to the mass that generates them. So – it’s a mess, which is why I mostly avoid working on clusters myself. These lensing reconstructions are fabulously ornate, but they are also fabulously complicated: it is easy to infer the presence of a dark substructure where there is none. Whether that is going on here, I have no idea.

      1. Thanks. While we’re on the topic of clusters and gravitational lensing, have you addressed the bullet cluster?

        Since gravitational lensing in the bullet cluster shows that the majority of the mass causing it is invisible and offset from the visible matter, doesn’t this refute MOND? In MOND, shouldn’t the lensing mass be centered on the visible matter?

        Could this offset be explained, for example, in terms of black holes within the MOND framework?

        More generally, how do you explain instances of gravitational lensing without a visible-matter source within the MOND framework, other than by black holes?

    1. One thing I’ve wondered about: are there examples of dwarf or satellite galaxies orbiting a larger galaxy in the same way stars orbit within a galaxy? If so, do these orbiting galaxies follow the RAR, the BTFR, and the MOND a0 acceleration rules?

      1. To the extent we can tell, yes. The problem with satellite galaxies is that you don’t usually know the eccentricity of their orbit, so the observed velocity only gives a ballpark constraint on the circular velocity of the gravitational potential. Just extrapolating flat rotation curves has yet to go wrong for predicting that, at least out to where the acceleration from other masses takes over (e.g., the barycenter between the Milky Way and Andromeda).
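
        For orientation, here is a minimal deep-MOND sketch of why simply extrapolating a flat rotation curve is expected to work for a distant satellite of an isolated galaxy of baryonic mass M (schematic scaling only, not a fit to any particular system; G, a0, r, and V_c have their usual meanings):
        \[ g = \sqrt{g_N\,a_0} = \frac{\sqrt{G M a_0}}{r} = \frac{V_c^2}{r} \quad\Rightarrow\quad V_c = (G M a_0)^{1/4}, \]
        independent of radius, so the predicted circular speed stays at the asymptotic flat value until the external field of a neighbor takes over.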

        1. Very interesting! Maybe in the future that could provide a constraint on the apparent overprediction (<30%) of the vertical dispersion. The Vast Polar Structure orbits nearly vertically, so if the discrepancy disappears, or is compatible with zero, at such large distances, we can rest assured that at least the problem goes away in the limit where the galaxy is a point mass. Depends on how big the ballpark is, of course.

          1. Yes, I suspect the problem with the vertical velocity dispersion is a combination of known non-equilibrium effects and the details of the vertical structure. The latter may seem a detail, but if one applies the same approach to the radial force – assuming an exponential disk – then one will get an answer that is in the ballpark but formally very wrong according to chi^2. That’s because there are non-exponential features in the radial mass distribution that have to be accounted for. It isn’t clear to me that this will inevitably fix things, but I’d be really stoked about NFW if it had got that close in the first place.

  14. Stacy, your clear post leads to two questions:

    (1) Did the original prediction for the second peak rely on “no CDM” or did it rely on “MOND”?

    (2) You mention somewhere that present values for the expansion rate differ when deduced from angular estimates and when deduced from distance estimates. Is that also the case for the second peak?

    1. (1) The prediction I made is based on no CDM. MOND itself makes no prediction for this quantity. It also relies on BBN and the baryon density as known at the time. LCDM fits find a higher baryon density than we believed at the time, and there appears to have been a running confirmation bias about what the “right” baryon density is.
      (2) The first-to-second peak amplitude ratio is not affected by the expansion rate. That’s one reason I phrased the prediction that way.

      1. Stacy, just to deepen my understanding: is the first-to-second peak amplitude ratio affected if one uses angular estimates instead of distance estimates?

        1. No. The distance scale affects the locations of the peaks but not the shape of the power spectrum. After the baryon density, the next most important parameter to the peak ratio is the tilt.
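
          A minimal way to see this, using the standard acoustic-scale relation (schematic; phase shifts are ignored, and D_A and r_s denote the comoving angular-diameter distance and sound horizon at last scattering, notation not used elsewhere in this thread):
          \[ \ell_n \approx n\,\pi\,\frac{D_A(z_*)}{r_s(z_*)}, \]
          so changing the distance scale slides all the peaks together in multipole, while relative heights like A1:2 are set by the pre-recombination physics (baryon density, tilt), not by the distance to last scattering.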

  15. You are probably aware of arXiv:1509.09288v5, but if you aren’t, it is worth a look: Figure 3 examines the CMB peaks with a no-dark-matter, atypical application of GR and gets a good match.

  16. An overlapping follow-up article is also notable for clarifying what they think is going on and what they are doing: “End of a Dark Age?” W.M. Stuckey, Timothy McDevitt, A.K. Sten, Michael Silberstein, International Journal of Modern Physics D 25(12), 1644004 (2016), DOI: 10.1142/S0218271816440041, arXiv:1605.09229.

    “We argue that dark matter and dark energy phenomena associated with galactic rotation curves, X-ray cluster mass profiles, and type Ia supernova data can be accounted for via small corrections to idealized general relativistic spacetime geometries due to disordered locality. Accordingly, we fit THINGS rotation curve data rivaling modified Newtonian dynamics, ROSAT/ASCA X-ray cluster mass profile data rivaling metric-skew-tensor gravity, and SCP Union2.1 SN Ia data rivaling ΛCDM without non-baryonic dark matter or a cosmological constant. In the case of dark matter, we geometrically modify proper mass interior to the Schwarzschild solution. In the case of dark energy, we modify proper distance in Einstein-deSitter cosmology. Therefore, the phenomena of dark matter and dark energy may be chimeras created by an errant belief that spacetime is a differentiable manifold rather than a disordered graph.”

  17. “The same model did not successfully predict the third peak, but I didn’t necessarily expect it to: the no-CDM ansatz (which is just General Relativity without cold dark matter) had to fail at some point. But that gets ahead of the story: no-CDM made a very precise prediction for the second peak.”

    But unless you said in advance that you expected the MOND prediction to fail for the third peak, or could at least say so now and back it up quantitatively, your prediction of the height of the second peak could be nothing more than a lucky coincidence. That you expect MOND to fail at some point, OK, but that is too much hand-waving to explain why it works for the second peak but not for the third.

    As to ΛCDM not predicting the other peaks: neither did MOND. OK, maybe it can’t (yet). But until someone calculates a power spectrum in MOND that fits the experimental data at least as well, the CMB can’t be used as an argument for MOND.

    I also fail to see anything wrong with determining the parameters of the model from the data. That is what observational cosmology is all about. That’s no different than measuring the Hubble constant, or the brightness of an L* galaxy. If you have a theory which predicts such things, fine, but no-one does. MOND is silent here, so it seems a bit strange to count against the standard model that it didn’t predict the height of every peak. There are few theories which predict everything. Again, if you have one which predicts more, fine, but having to determine some parameters from observation is not in itself a mark against a theory.

    Although I think that it is impressive enough that decades-old physics can fit all modern cosmological data with just a few parameters, and nothing was invented as a fudge factor in order to explain observations (despite what David Merritt claims), more interesting is that the values which come out of the CMB jibe well with completely independent measurements.

    At the risk of repeating myself: many people believe in the standard model because it works. People also tend to forget that the parameters of the standard model are derived from observation. They are not the result of theoretical prejudice; quite the opposite. As long as no theory fits the CMB as well as the standard model, saying that it doesn’t predict every feature, or that some other theory got just the second peak right, is a cheap shot. There is enough interesting MOND phenomenology that it should be able to stand on its own. It would be nice to get more people interested, but arguments like “MOND got the second peak right” do more harm than good.

    1. So, to start at the end: for the past 15 years I have not been making the argument that MOND got the second peak right, because I don’t understand the third peak. But I did predict the second peak correctly, and I would like the record to reflect that. I wrote this post because I have become aware of a narrative, widespread in the community, that the prediction of the second peak failed when the BOOMERanG data changed. That’s wrong. The prediction does not hinge on the quality of the first data to test it, and I even said at the time (2000) that I expected the second peak to resolve out at the predicted level. That is exactly what happened.

      I did also say at the time that the simple ansatz I was making had to fail at some point. That point is apparently L = 600. I wish I understood that. I have received no support to attempt to do so, and instead a steady stream of discouragement and contempt from those who failed to predict as much as I did. They’ve even constructed this false narrative to deny that I accomplished as much as I demonstrably did. So not a lot of motivation there.

      I didn’t just make the prediction for the no-CDM ansatz; I also did it for LCDM as we knew it at the time. We had what we thought were really good constraints on H0, the mass density, etc. So one could predict the expected power spectrum. One thing LCDM got right was the location of the first peak. Until then, we were pretty much just hoping and assuming that the geometry was flat, as it had to be. It was! Total kudos for that.

      I would have hoped for similar kudos for getting the second peak right. Or at least an acknowledgement that that had happened, even if it turned out later to be a fluke. Whether it was remains unclear, so I’d thank people not to categorize it that way just because it conforms to their confirmation bias. The whole point of making a prediction like that is to make people – scientists – think again. A few members of the CMB community wrote to me at the time expressing that sentiment. Most of the community has ignored it, or has even asserted that it never happened.

      I identified the second peak as a test because there was a distinguishable difference between LCDM as we knew it in 1999 and no-CDM. Both allowed some range. The no-CDM prediction allowed very little, as all the models run together for plausible baryon densities. LCDM allowed a big variation, as it is very sensitive to the exact baryon density in combination with the dark matter density. Nevertheless, BBN was well known at the time, and for the allowed range of baryon densities, LCDM predicted a larger second peak. I was hardly alone in this LCDM prediction; you can see the same thing in Ostriker & Steinhardt (1995). The LCDM prediction was A1:2 1000. This seems to be the cause of the current Hubble tension: as we have fit the CMB to higher multipoles, it has walked its way out of the original LCDM concordance region in exactly the way you’d expect from such an excess. Except that I don’t pretend to understand the third peak, so I can’t really say that’s what is happening at still higher L.

      So I haven’t.

      1. Fair enough; I was just a bit surprised to see a new post on this topic, but you have explained your reasons for it. From a history-of-science point of view it is interesting, and I remember when someone presented it at a journal club, but as you say yourself it’s not a good argument for MOND today. Which is why, for the sake of serious MOND research, I wish that Merritt would stop making it. That episode features prominently in his book.

        Someone once said that no matter how extreme satire is, something real exists which is even more absurd. After building up the tension and claiming victory because the no-CDM prediction got the second peak correct, Merritt then asks whether that prediction can be modified to get the third peak correct, then claims that it can, by—drum roll, please—introducing dark matter! Seriously. A sterile neutrino, which suffers from pretty much all of his objections to WIMPs in other parts of the book.

        It would be interesting to find even one person who says that his belief in MOND has been strengthened by Merritt’s book. While I don’t think that it is the right reaction, my fear is that more than one has been turned off of MOND by Merritt. 😦

        1. As a disinterested outsider looking in, one thing I would like from MOND is an explanation for the ratios of the acoustic peaks that is provided at the same level as, for example, Wayne Hu’s Intermediate Guide: http://background.uchicago.edu/~whu/intermediate/intermediate.html

          Under CDM, the Omega_m to Omega_b ratio provides a prediction of the ratio of amplitudes of the even peaks to the odd peaks all the way down the power spectrum (the standard mechanism is sketched below). The basic idea is that you have these periodic expansions and collapses at different scales depending on the available time, with the smallest angular scales having oscillated the most and the largest having just collapsed for the first time.

          So let’s say MOND is right. In this game of repeating oscillations, where does the a_0 acceleration show up? I would love a paper or a chapter or something that gives the cliffnotes version that explains these ratios in terms of a_0. If you know of one, please let me know.
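
          As a sketch of the standard mechanism being referenced (roughly Hu’s notation, nothing MOND-specific; R is the baryon-to-photon momentum density ratio and Ψ the gravitational potential, symbols not used elsewhere in this thread): in the tight-coupling limit the photon-baryon fluid oscillates with sound speed
          \[ c_s^2 = \frac{1}{3(1+R)}, \qquad R \equiv \frac{3\rho_b}{4\rho_\gamma}, \]
          and the baryon loading R shifts the zero point of the oscillation by an amount proportional to RΨ, so compressions (odd peaks) are enhanced relative to rarefactions (even peaks). Any MONDian account would have to specify what plays the role of Ψ in this oscillation; as far as I know, that has not been written down at this level of detail.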

    1. So two scalar fields, then? One for lambda and one for MOND? How does one calculate Omega_MOND? Would be really nice to be able to relate it to a_0, for instance.

      1. 1. Not necessarily two scalar fields, but that becomes theory-specific.
        2. What do you mean by Omega_MOND? The mass density in MOND should just be the baryon density. That’s what Omega means to me. But some people have started using it to mean something geometric. That’s not addressed by MOND per se. Obviously whatever theory one comes up with has to be consistent with a flat Robertson-Walker geometry.
        3. Yes, it would be really nice to relate the departure of the CMB power spectrum from the no-CDM prediction at L = 600 to a0. If I had managed to do that at some point in the past twenty years, I’d be happy to tell you about it. It isn’t obvious, and I haven’t had at my disposal the billions of dollars and tens of thousands of person-years of effort that have been sunk into the dark matter paradigm.

        1. What I mean by “omega” is the set of values used in the “reduced” Friedmann equations (written out below). Indeed, while omega is normally used as a density parameter, Omega_k gets invoked precisely because you can treat curvature as acting on the same side of the first Friedmann equation as the energy densities multiplied by the square of the scale factor. If MOND is supposed to be a scalar field similar to lambda, then it would scale as a constant.

          Hu uses Omega_m to mean Omega_CDM+Omega_baryon because Omega_m combines all the things that scale as matter. This can serve as the only CDM-dependent parameter used in his treatment of the anisotropies. Can I assume that Omega_CDM = Omega_MOND at the surface of last scattering? In that case, you get an energy density associated with the scalar field that then would need to be matched somehow to a_0 scaling so that, for example, flatness is achieved.
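
          For reference, the “reduced” form being invoked, written out with the usual density parameters (a schematic, standard relation; Omega_MOND is not a defined quantity in any of this):
          \[ \frac{H^2(a)}{H_0^2} = \Omega_r a^{-4} + \Omega_m a^{-3} + \Omega_k a^{-2} + \Omega_\Lambda, \qquad \Omega_k \equiv 1 - \Omega_r - \Omega_m - \Omega_\Lambda, \]
          so curvature rides along as if it were a component diluting as a^{-2}, while a constant-density term like Λ appears as the last term.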

            1. I don’t think you can assume that. I have seen theories in which a scalar field does its thing in the CMB, but has nothing like the Omega of CDM. And we don’t even have the equivalent of the Friedmann equation in MOND. That’s bad (we want one!) but also good (the theory has finite boundaries of applicability, and doesn’t claim to do everything all the time for everyone and his uncle). Felten (1984) tried the Newtonian cosmology trick, but it doesn’t work in MOND: a0 imposes an absolute scale, so the size of the sphere does not cancel out, which means you can’t treat any arbitrary patch of uniform density as representative of the whole universe as you can conventionally. This is again both bad (still no Friedmann analog) and good – there is no special density in MOND as there is conventionally. Absent a repulsive force like the cosmological constant, a MOND universe will eventually recollapse irrespective of its density. This is as much a solution to the flatness problem as Inflation offered – rather than driving Omega_m to 1.0000, it doesn’t matter what the density is.
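
              Schematically, for a uniform sphere of density ρ and radius r (just the scaling, not a full MONDian cosmology): Newtonian gravity gives
              \[ g_N = \frac{G M(r)}{r^2} = \frac{4\pi G}{3}\,\rho\, r \;\propto\; r, \]
              so the acceleration per unit size, and hence the expansion history of the patch, is independent of r. In the low-acceleration regime,
              \[ g = \sqrt{g_N\,a_0} \;\propto\; \sqrt{r}, \]
              so the behavior depends on the size of the sphere chosen, and no single patch can stand in for the whole universe.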

            1. One often hears about the flatness problem, but does it really exist? I’ve been saying for a while that it doesn’t, but haven’t made much progress in convincing the community. On the other hand, no-one has rebutted any of my claims. But don’t take it from me; there is a long history of claims that the flatness problem is bogus; I’ve collected them into a review, which will be published by the European Physical Journal H.

              There is also a parallel to MOND: As I show in the review, there are arguments in the top journals in the field, made by many of the top people in the field, that the flatness problem is bogus and based on a misunderstanding. None of those arguments have, to my knowledge, been rebutted. So why do people just ignore the literature and continue to claim that the flatness problem is real?

  18. To amplify Phillip’s point about the flatness problem, I think that it got built into the curriculum taught in cosmology sometime in the ’80s and became part of the Word of God, so it doesn’t matter what the literature says. I don’t dwell on Inflationary theory when I teach cosmology myself, but it is clear that many others do, and the flatness and horizon problems are imprinted deeply into impressionable young minds (but the magnetic monopole problem got dropped at some point). The ironic thing to me is that the most compelling argument in favor of Inflation when it took off was the flatness problem, but what was really meant by that was the coincidence problem: why do we live at a time when Omega_m = 0.3 when the eternal future will coast towards zero? The only way to avoid that was if Omega_m = 1.0000 exactly, and Inflation provided a compelling mechanism to drive it there from any initial condition. Note that this is Omega_MATTER, not Omega_TOTAL, which is why Inflationary cosmologists spent years and years castigating observers for not finding enough mass, and assuring us that we would if only we looked harder. Eventually they gave up in the ’90s and decided to accept Lambda (which they had previously derided) since it got them up to Omega_TOTAL = 1. That is consistent with Inflation, but it is the weak-sauce version in which only the geometry is flat. I call it weak sauce because the coincidence problem is worse in LCDM than for an open universe. We were trying to avoid the coincidence of living at a special time, but Omega_m transitions from 1 to zero more rapidly in a universe with Lambda than in an open universe, so the time we live in now is more special, not less. In effect, LCDM compounds the problem Inflation was originally sold on solving.
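
    A schematic way to see that last point (late times only, with Ω_{m,0} the present matter density parameter): in a flat universe with Λ,
    \[ \Omega_m(a) = \frac{\Omega_{m,0}\,a^{-3}}{\Omega_{m,0}\,a^{-3} + \Omega_\Lambda}, \]
    so the departure from Ω_m = 1 grows as a^3, whereas in an open universe with curvature instead of Λ it grows only as a. The transition from Ω_m ≈ 1 to Ω_m ≈ 0 is therefore faster with Λ, which is the sense in which the epoch we observe is more, not less, special.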

  19. I think that that’s a good summary of the history. Second edition of your cosmology textbook? Better update the stuff on observations. Theory? Nah, don’t bother.

    Yes, some, maybe most, theorists derided the cosmological constant, especially the particle-physics types, so belief in flatness due to inflation led to an Ω (by which I mean matter, not total) of 1. What tipped the balance in favour of Λ was not that dark matter wasn’t found (people are still looking for it, though not for as much of it), but that supernova cosmology indicated an accelerating universe. The values for Λ and Ω one gets are pretty precise, and lead to an Ω of about 0.3, which many observational cosmologists had been telling us all along. (Interestingly, the most famous old-school observational cosmologist, Allan Sandage, best known for trying to measure the Hubble constant (even claiming that it was 42 at one point), got completely drunk on the inflationary kool-aid.) The age of the Universe comes out right, and later the CMB indicated the same values. It is an impressive story. The convergence of different tests is why it is known as concordance cosmology. While it is easy to think of things which could go wrong with one test, it is more difficult to think of what could go wrong in all of them yet still allow them to converge on the same values. One of the most convincing arguments for the robustness of the concordance model is by none other than noted MOND guru Bob Sanders in his book Deconstructing Cosmology. Yes, one of the main players in the MOND game spends about half a book singing the praises of ΛCDM. See my review at the link below. Of course, he then discusses the successes of MOND. My point is that it is a balanced treatment.

    As to the flatness problem, there are two aspects: the fine-tuning problem, as to why Ω was so close to 1 at the beginning (which is independent of Λ), and the instability problem, as to why Ω is still of order 1 today if it is not exactly 1, one way of expressing the coincidence problem. As Stacy indicates, Λ changes things a bit here. However, in my review linked to elsewhere, one can read that both of these aspects have long since been addressed in the literature. Most rejections of the flatness problem show that the fine-tuning argument is bogus, but several also address the instability problem (the resolution of which depends on whether or not the Universe will expand forever, which to a good approximation depends on whether Λ is positive). One of the first to address this in the context of a universe which will collapse in the future was the late, great Wolfgang Rindler, though for Λ=0. In my 2012 paper, I extended the argument to other values of Λ. Kayll Lake was one of the first (and probably the first in print) to point out that in the case of positive Λ and a universe which will expand forever (our Universe is such a universe), one needs fine-tuning to get a large departure from an Ω of 1, which is the reverse of the traditional argument due to Dicke and Peebles (which implicitly assumed Λ=0). It’s all in the review!

    http://www.multivax.de:8000/helbig/research/publications/info/deconstructing_cosmology.html

    By the way, just now I am updating my web pages so that those about my book reviews link not only to a PDF of the proof (usually very close to the final version), but also to the actual published version at ADS.

  20. The “problems” that are used as object lessons by inflation-boosters are one thing, but there is also the empirical CMB measurement of curvature, which is what I was attempting to reference when I said “flatness is achieved”. Didn’t mean to stir the hornets’ nest!

    1. It is important to remember that the original flatness problem was why Ω is between, say, 0.01 and 100. We now know that it is around 0.3, so quite close to 1. Whether the arguments I discuss in the review can explain it being that close is not clear (they probably can for the expand-forever case which, after all, is our Universe), but that is a red herring because the original flatness problem discussed a much less severe coincidence.

      There have been some claims recently that non-negligible positive curvature fits some observations better. The sum of Ω and Λ is 1 to within a per cent, but we don’t know how precisely it is close to 1.

      Guth was originally worried about the monopole problem, then discovered that inflation would also produce flatness, so that is a prediction. On the other hand, no-one can predict just how flat the Universe should be; the number of e-foldings is just chosen to match what we see.

      Of course, inflation could have happened even if there is no flatness problem. Just because there is a solution doesn’t mean that there must be a problem.

        1. I think that it is fair to say that nobody knows, since there is no MONDian cosmology which can predict things like that in detail. However, Λ>0 doesn’t necessarily mean that the universe will expand forever in conventional cosmology; it has to be bigger than a certain value. So my guess is that at least that would also hold true in MONDian cosmology, but also that a big enough Λ would lead to eternal expansion.

          1. Yeah, don’t know – MOND gives an extra pull, but don’t know if that’s guaranteed to overcome the extra push from Lambda. I would say we don’t even really know whether these (CDM/Lambda/MOND) are real entities, or just proxies that allow us to approximate some deeper but as yet unknown theory.

            1. It would certainly depend on the value of Λ which, as far as we know, could be anything really.

              For a long time, there have been two possible interpretations of Λ. One, the original, has it as just a constant, including in the mathematical sense as a constant of integration; physically, something to do with the structure of spacetime. Yes, we can’t explain its value, but neither can we explain the value of the gravitational constant, at least not from first principles. The other is some stuff with the unfamiliar equation of state p=-ρ, which was originally suggested by none other than Erwin Schrödinger. Einstein agreed that the two are mathematically equivalent, though with completely different physical interpretations.

              My guess is that at least the first version, favored by Einstein, is correct.

              Steven Weinberg famously invoked both: an Einstein-style cosmological constant, but negative, and the other-style one arising from quantum-mechanical vacuum fluctuations, with the two cancelling almost but not quite. That explains why the value is so much lower than what the particle physicists believe, and Weinberg invokes weak-anthropic arguments to explain the observed value.

              Another possibility is to not worry about a theory which is off by 120 orders of magnitude in a back-of-the-envelope calculation.

              Although many people work in big-bang nucleosynthesis, inflation, and so on, those are really particle or nuclear physics in a cosmological context. Weinberg is one of the few who has worked in particle physics independent of cosmology and cosmology independent of particle physics. The only other person who springs to mind is Pascual Jordan.

  21. So… yes, to both of you. The CMB measurement of a flat geometry is a great success of LCDM, for which it was a clear prediction established before there were CMB data adequate to test it. Like Bob Sanders, I believe in giving credit where credit is due – see, e.g., https://tritonstation.com/2019/01/28/a-personal-recollection-of-how-we-learned-to-stop-worrying-and-love-the-lambda/ . Most cosmologists do not have this attitude when it comes to MOND, and choose only to see the problems while ignoring its predictive successes. I try to express both attitudes in https://arxiv.org/abs/1404.7525. This has not been constructive. I don’t see how we can hope to find the truth of the matter while we willfully ignore an inconvenient set of facts.

Comments are closed.