Cosmology is challenged at present by two apparently unrelated problems: the apparent formation of large galaxies at unexpectedly high redshift observed by JWST, and the tension between the value of the Hubble constant obtained by traditional methods and that found in multi-parameter fits to the acoustic power spectrum of the cosmic microwave background (CMB).

Maybe they’re not unrelated?

The Hubble Tension

Early results in precision cosmology from WMAP gave estimates of the Hubble constant of h = 0.73 ± 0.03. [I adopt the convention h = H0/(100 km s⁻¹ Mpc⁻¹) so as not to have to write the units every time.] This was in good agreement with contemporaneous local estimates from the Hubble Space Telescope Key Project to Measure the Hubble Constant: h = 0.72 ± 0.08. This is what Hubble was built to do. It did it, and the vast majority of us were satisfied* at the time that it had succeeded in doing so.

Since that time, a tension has emerged as accuracy has improved. Precise local measures** give h = 0.73 ± 0.01 while fits to the Planck CMB data give h = 0.6736 ± 0.0054. This is around the 5 sigma threshold for believing there is a real difference. Our own results exclude h < 0.705 at 95% confidence. A value as low as 67 is right out.

Given the history of the distance scale, it is tempting to suppose that local measures are at fault. This seems to be the prevailing presumption: it is just a matter of figuring out what went wrong this time. Of course, things can go wrong with the CMB too, so this way of thinking raises the danger of confirmation bias, ever a scourge in cosmology. Looking at the history of H0 determinations, it is not local estimates of H0 but rather those from CMB fits that have diverged from the concordance region.

The cosmic mass density parameter and Hubble constant. These covary in CMB fits along the line Ωmh³ = 0.09633 ± 0.00029 (red). Also shown are best-fit values from CMB experiments over time, as labeled (WMAP3 is the earliest shown; Planck 2018 the most recent). These all fall along the line of constant Ωmh³, but have diverged over time from concordance with local data. There are many examples of local constraints; for illustration I show examples from Cole et al. (2005), Mohayaee & Tully (2005), Tully et al. (2016), and Riess et al. (2001). The divergence has occurred as finer angular scales have been observed in the CMB power spectrum and correspondingly higher multipoles ℓ have been incorporated into fits.


The divergence between local and CMB-determined H0 has occurred as finer angular scales have been observed in the CMB power spectrum and correspondingly higher multipoles ℓ have been incorporated into fits. That suggests that the issue resides in the high-ℓ part of the CMB data*** rather than in some systematic in the local determinations. Indeed, if one restricts the analysis of the Planck (“TT”) data to ℓ < 801, one obtains h = 0.70 ± 0.02 (see their Fig. 22), consistent with earlier CMB estimates as well as with local ones.
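To make the degeneracy concrete, here is a quick back-of-the-envelope sketch (plain Python; the only input is the Ωmh³ value quoted in the figure caption above) of the matter density implied by that constraint for a few choices of h:

```python
# Quick check of the CMB degeneracy line Omega_m * h^3 = 0.09633 (value quoted above).
OMH3 = 0.09633

for h in (0.6736, 0.70, 0.73):
    omega_m = OMH3 / h**3
    print(f"h = {h:.4f}  ->  Omega_m = {omega_m:.3f}")

# Moving along the line from h ~ 0.67 to h = 0.73 drops Omega_m from ~0.315 to ~0.248,
# so a fit that wants a bit more matter (more small-scale power) is pushed toward lower h.
```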

Photons must traverse the entire universe to reach us from the surface of last scattering. Along the way, they are subject to 21 cm absorption by neutral hydrogen, Thomson scattering by free electrons after reionization, blueshifting and redshifting from traversing gravitational potentials in an expanding universe (the late ISW effect, aka the Rees-Sciama effect), and deflection by gravitational lensing. Lensing is a subtle effect that blurs the surface of last scattering and adds a source of fluctuations not intrinsic to it. The amount of lensing can be calculated from the growth rate of structure; anomalously fast galaxy formation would induce extra power at high ℓ.

Early Galaxy Formation

JWST observations evince the early emergence of massive galaxies at z ≈ 10. This came as a great surprise theoretically, but the empirical result extends previous observations that galaxies grew too big too fast. Taking the data at face value, more structure appears to exist in the early universe than anticipated in the standard calculation. This would cause excess lensing and an anomalous source of power on fine scales. This would be a real, physical anomaly (new physics), not some mistake in the processing of CMB data (which may of course happen, just as with any other sort of data). Here are the Planck data:

Unbinned Planck data with the best-fit power spectrum (red line) and a model (blue line) with h = 0.73 and Ωm adjusted to maintain constant Ωmh³. The ratio of the models is shown at bottom: the model with h = 0.67 divided by the model with h = 0.73. The difference is real; h = 0.67 gives the better fit****. The ratio illustrates the subtle need for slightly greater power with increasing ℓ than provided by the model with h = 0.73. Perhaps this high-ℓ power has a contribution from anomalous gravitational lensing that skews the fit and drives the Hubble tension.
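For readers who want to reproduce something like this comparison, here is a minimal sketch using the camb Python package. The baryon density, tilt, amplitude, and optical depth below are illustrative placeholders rather than the actual Planck best fit, so the ratio will differ in detail from the figure; the point is only to show how to hold Ωmh³ fixed while varying h.

```python
# Sketch: lensed TT spectra for two flat LCDM models with Omega_m * h^3 held fixed.
# Requires the camb package (pip install camb). Parameter values are illustrative only.
import numpy as np
import camb

OMH3 = 0.09633       # degeneracy line quoted above
OMBH2 = 0.0224       # physical baryon density, held fixed (assumed value)

def lensed_tt(h, lmax=2500):
    om = OMH3 / h**3                    # total matter density for this h
    omch2 = om * h**2 - OMBH2           # put the remainder in CDM
    pars = camb.set_params(H0=100 * h, ombh2=OMBH2, omch2=omch2,
                           As=2.1e-9, ns=0.965, tau=0.054,
                           lmax=lmax, lens_potential_accuracy=1)
    results = camb.get_results(pars)
    cls = results.get_cmb_power_spectra(pars, CMB_unit='muK')['lensed_scalar']
    return cls[:, 0]                    # TT column, D_ell in muK^2

dl_lo, dl_hi = lensed_tt(0.6736), lensed_tt(0.73)
ratio = dl_lo[2:] / dl_hi[2:]           # h = 0.67 model divided by h = 0.73 model
```

Plotting that ratio against ℓ shows the sense of the tilt described in the caption; the overall offset depends on the assumed amplitude and optical depth (see the **** footnote below).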

If excess lensing by early massive galaxies occurs but goes unrecognized, fits to the CMB data would be subtly skewed. There would be more power at high ℓ than there should be. Fitting this extra power would drive up Ωm and other relevant parameters*****. In response, it would be necessary to reduce h to maintain a constant Ωmh³. This would explain the temporal evolution of the best fit values, so I posit that this effect may be driving the Hubble tension.

The early formation of massive galaxies would represent a real, physical anomaly. It is unexpected in ΛCDM, but it was not unanticipated: Sanders (1998) explicitly predicted the formation of massive galaxies by z = 10. Excess gravitational lensing by these early galaxies is a natural consequence of his prediction. Other things follow as well: early reionization, an enhanced ISW/Rees-Sciama effect, and high redshift 21 cm absorption. In short, everything that is puzzling about the early universe from the ΛCDM perspective was anticipated and often explicitly predicted in advance.

The new physics driving the prediction of Sanders (1998) is MOND. This is the same driver of anomalies in galaxy dynamics, and perhaps now also of the Hubble tension. These predictive successes must be telling us something, and highlight the need for a deeper theory. Whether this finally breaks ΛCDM or we find yet another unsatisfactory out is up to others to decide.


*Indeed, the ± 0.08 rather undersells the accuracy of the result. I quote that because the Key Project team gave it as their bottom line. However, if you read the paper, you see statements like h = 0.71 ± 0.02 (random) ± 0.06 (systematic). The first is the statistical error of the experiment, while the latter is an estimate of how badly it might go wrong (e.g., susceptibility to a recalibration of the Cepheid scale). With the benefit of hindsight, we can say now that the Cepheid calibration has not changed that much: they did indeed get it right to something more like ± 0.02 than ± 0.08.

**An intermediate value is given by Freedman (2021): h = 0.698 ± 0.006, which gives the appearance of a tension between Cepheid and TRGB calibrations. However, no such tension is seen between Cepheid and TRGB calibrators of the baryonic Tully-Fisher relation, which gives h = 0.751 ± 0.023. This suggests that the tension is not between the Cepheid and TRGB methods so much as it is between applications of the TRGB method by different groups.

***I recall being at a conference when the Planck data were fresh where people were visibly puzzled at the divergence of their fit from the local concordance region. It was obvious to everyone that this had come about when the high ℓ data were incorporated. We had no idea why, and people were reluctant to contradict the Authority of the CMB fit, but it didn’t sit right. Since that time, the Planck result has been normalized to the point where I hear its specific determination of cosmic parameters used interchangeably with ΛCDM. And indeed, the best fit is best for good reason; determinations that are in conflict with Planck are either wrong or indicate new physics.

****The sharp eye will also notice a slight offset in the absolute scale. This is fungible with the optical depth due to reionization, which acts as a light fog covering the whole sky: higher optical depth τ depresses the observed amplitude of the CMB. The need to fit the absolute scale as well as the tilt in the shape of the power spectrum would explain another temporal evolution in the best-fit CMB parameters, that of declining optical depth from WMAP and early (2013) Planck (τ = 0.09) to 2018 Planck (τ = 0.0544).
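To put a rough number on that fungibility, here is a back-of-the-envelope sketch using the standard e^(-2τ) suppression of small-scale CMB power by scattering after reionization:

```python
# Small-scale CMB power is suppressed by roughly exp(-2*tau) by scattering after reionization.
import numpy as np

tau_early, tau_2018 = 0.09, 0.0544          # the optical depths quoted above
boost = np.exp(-2 * tau_2018) / np.exp(-2 * tau_early)
print(f"relative change in small-scale power: {100 * (boost - 1):.1f}%")   # about +7%
```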

*****The amplitude of the power spectrum σ8 would also be affected. Perhaps unsurprisingly, there is also a tension between local and CMB determinations of this parameter. All parameters must be fit simultaneously, so how it comes out in the wash depends on the details of the history of the nonlinear growth of structure. Such a calculation is beyond the scope of this note. Indeed, I hope someone else takes up the challenge, as I tire of solving all the problems only to have them ignored. Better if everyone else comes to grips with this for themselves.

71 thoughts on “Early Galaxy Formation and the Hubble Constant Tension”

  1. Is there some reason why this is considered a tension between two values, rather than a change in value over time? I.e., CMB determination is an average value over cosmic time, local is a recent time value determination? Is there some reason that there is no consideration for this value increasing over time?


    1. The value of the Hubble parameter should change over time, and appears to do so more or less as expected in LCDM. That is, the data constraining H(z) has the right shape. However, it does not hit the locally measured value of H0=H(z=0) for the Planck normalization at z=1090. The error bars on the local value are by far the smallest, aside from Planck itself, so the in-between H(z) don’t help a whole lot to decide which normalization is best.

      It could be that we live in a local bubble that deviates from the global average. That shouldn’t happen in LCDM, though it could in a universe that developed large scale anisotropy by late times, which could happen in a MOND universe (Sanders also discussed the scale for this). But in that case, the Planck best-fit parameters aren’t physically meaningful; they’re just what you need to get a fit with conventional physics that imitates some underlying new physics.


  2. Interesting post! I also noticed that in the vHDM cosmological model based on MOND and designed to get galaxy clusters right, the best fit Hubble parameter is slightly larger than in LCDM:
    https://doi.org/10.1111/j.1365-2966.2011.19321.x

    We are working with a student to explore this. The early formation of galaxies was expected in MOND as you mention, but the Sanders 1998 paper also mentions that significant density contrasts should be present in the late universe out to hundreds of Mpc because structure formation is enhanced in MOND. A large local void could well solve the Hubble tension:
    https://tritonstation.com/2020/10/23/big-trouble-in-a-deep-void/

    So I do agree that there is probably a link between the unexpectedly early formation of galaxies and the unexpectedly high Hubble parameter. In any case, the cosmological extension to MOND remains rather uncertain. We are doing large vHDM simulations to explore this model, which is all the more important given the difficulties faced by AEST:
    https://arxiv.org/abs/2301.03499

    Will be interesting to see how all this develops.


    1. Good, these simulations sound promising. One thing to bear in mind: the local H0 is tied to the measured value of a0, which sets the normalization of the BTFR that gives H0=75. So a vHDM cosmology with Planck H0 cannot be self-consistent with the measured value of a0, unless we do indeed live in a bubble that is very discrepant from the mean expansion. If the universe becomes anisotropic at late times, I guess everybody lives in some sort of non-conforming bubble.


      1. The BTFR is only one way to get the local H_0. In general, one does not need to assume the validity of MOND to see that the local H_0 is higher than the Planck value. Yes, it would be reasonable in MOND if the Universe is presently inhomogeneous out to a substantial fraction of the Hubble radius, as predicted by Sanders in the 1998 paper you mentioned.


  3. I think you answered my question, but as I am a retired forester with an interest in cosmology, not an amateur or professional cosmologist, could you clarify in layman’s terms? I interpret your response as saying the Hubble parameter is expected to change over time (I assume “increase” over time), but that the Planck normalization should equal the local value. I had assumed that the Planck value was an average over cosmological time, but now assume that the Planck “normalization” is the expected value in current time?


    1. The Planck normalisation is the expected value of the Hubble parameter at the present epoch. The expansion rate is supposed to change substantially over cosmic history, so to ensure a fair comparison, people usually only talk about the present value. This can be measured with local observables, or predicted from high redshift observables like the CMB with some theory. It is indeed plausible that even slight variations to the theory could remove the tension, which after all only arises in LCDM and not in all possible variations to it. Indeed, the Hubble bubble (local supervoid) scenario I mentioned goes along the lines of your comment. Hope that helps.


  4. Thanks for your reply Indranil. I think I have been confused by the use (misuse?) of the term “the Hubble constant”. I noticed that Stacy used the term “Hubble Parameter”, which I presume is the correct phraseology. Thanks again for your explanation!


  5. Stacy,
    Referring to various figures in your papers+postings, you often say (paraphrasing) “the acceleration scale a_0 is obvious IN the data”. Does the data cover a large enough time range to assert that a_0 is definitely a constant, or is the door open to it being a time-dependent “parameter” like H_0? I ask about this because people often write
    a_0 = c H_0 .
    If one is a constant, the other should be too, no?


    1. Time variation of a_0 is discussed in section 9.1 of my detailed invited review of MOND:

      https://arxiv.org/abs/2110.06936

      It should not have varied too much over cosmic history given various constraints. I am fairly sure that it should not have varied proportionately to the Hubble parameter, which should have been a lot higher in the past – at least in standard cosmology. There would also be various difficulties if a_0 was much lower in the past. I think it may be related to the dark energy density and thus have stayed the same over cosmic history. I am also not sure if covariant theories of MOND give an acceleration scale that depends on redshift.


      1. I see no clear evidence for variation of a0 with redshift, and certainly not as strong as would be implied by a0(z) ~ cH(z). Last I looked, the best evidence was in the non-evolution of the BTFR with redshift, but Indranil has reviewed this more recently. Perhaps a more relevant scale is a0 ~ c^2*sqrt(Lambda). See the discussion near the beginning of Milgrom’s scholarpedia review: http://www.scholarpedia.org/article/The_MOND_paradigm_of_modified_dynamics
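        For the numerically curious, here is a quick check of those coincidences, a sketch with round numbers (a0 from rotation curve fits, H0 and ΩΛ from the conventional background cosmology; the 2π is the factor usually quoted in these relations):

```python
# Order-of-magnitude check: a0 versus c*H0/(2*pi) and c^2*sqrt(Lambda)/(2*pi).
import numpy as np

c   = 2.998e8                       # m/s
a0  = 1.2e-10                       # m/s^2, from rotation curve fits
H0  = 70 * 1000 / 3.086e22          # 70 km/s/Mpc expressed in 1/s
Lam = 3 * 0.7 * H0**2 / c**2        # Lambda for Omega_Lambda ~ 0.7, in 1/m^2

print(c * H0 / (2 * np.pi))                 # ~1.1e-10 m/s^2, close to a0
print(c**2 * np.sqrt(Lam) / (2 * np.pi))    # ~1.6e-10 m/s^2, same order as a0
```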


      2. Hi Indranil,

        one of the main arguments for MOND is McGaugh’s dog, or the precise calculation of the rotational velocity as a function of the baryonic mass. See
        https://tritonstation.com/2020/12/31/25-years-a-heretic/
        In figure 3 (the rotation curves of two galaxies… NGC 2403 and UGC 128), MOND can reproduce the observed rotation velocity very well.

        But in your article, in Figure 2, it looks like MOND cannot do that.
        Anyway, the viewer sees a figure where LCDM models well and MOND models poorly.

        What is the intention of this figure?


        1. Thanks for your interest in my review! The caption to the figure explains that the galaxies shown there are fake. Two galaxies were considered, but the images were swapped. MOND cannot fit the fake combination of photometry of one galaxy and kinematics of a different galaxy. But dark matter can fit the fake data fairly well. So the point of the figure is to show the greater degree of theoretical flexibility in CDM fits to rotation curves.


          1. Hello Indranil,

            Thank you for your reply. This explains everything.
            Still, I’m not really happy with the figure alone.
            The simple reader will see: Dark matter can model it, MOND cannot model it….


  6. About a year ago I printed out Milgrom’s Scholarpedia article, linked by Stacy above, but only to page 24 of 38 pages. So I figured I’d complete the printing, only to discover the text and page numbers didn’t match up. Then I realized that the article was recently updated, so reprinted the entire article. Now I can read it at my leisure at coffee shops, or wherever, without the annoyance of the energy saving computer screen going dark after a certain time interval.


    1. While it might be a bit big to print, I have published a very detailed invited review of MOND, which is 77 pages without the references and acknowledgments:
      https://arxiv.org/abs/2110.06936

      It goes through the observational evidence in a lot of detail and has also benefited greatly from a long consultation of the whole community, which is partly why it has 864 references. I think the summary tables in it are particularly helpful.

      It is good to see that this really annoyed some LCDM enthusiasts: in this blog, I was accused of stating “a similarly dubious case for the non-existence of dark matter”:
      https://bigthink.com/starts-with-a-bang/5-truths-dark-matter/

      The dodgy logic here is hard to overstate. First of all, the previous case that I am compared to is a carefully thought through piece by Sabine Hossenfelder, whose doubting of dark matter cannot be considered more dubious than the idea itself. Secondly and most importantly, my review argues you need hot dark matter on large scales. So the post by Ethan Siegel is really not very helpful. One can argue that he makes a similarly dubious claim against MOND to all the other horrifically flawed claims to have falsified it, which were of course subsequently rebutted:
      https://darkmattercrisis.wordpress.com/2022/06/18/70-the-list-of-flawed-mond-rebuttals/

      He also seems to think that MOND advocates are not aware of the CMB and Bullet Cluster, when that is not the case – fits to both are shown in the review. We are currently exploring large-scale structure in MOND. Obviously a lot of vested interests are at stake if much of the evidence attributed to dark matter might be explained by instead modifying the law of gravity. This no doubt underlies the horrifically wrong recent claims to have explained the Milky Way satellite plane using LCDM. Some of the moral standards imposed by Ethan are reasonable, but sadly LCDM enthusiasts often fail to live up to them.


      1. I found this reply by Indranil awaiting moderation after I posted the reply below. WordPress does that to things with too many embedded links on the presumption that they are spam – which is all too often the case.

        Ethan Siegel is what in the vernacular you would call a “hater.” He hates MOND, and has repeatedly demonstrated both a strong bias against it and a stunning ignorance of what it really is (referring in one recent post to it imposing a “minimum” acceleration scale, which is not how it works). While he seems to be pretty good at explaining a lot of science to the public, he is egregiously biased about this particular subject. I understand where he’s coming from – I had much the same reaction when I first encountered MOND, and it took a Herculean effort of intellectual honesty to realize maybe I was wrong to be so sure the answer had to be dark matter. Ethan seems incapable of that. He also appears to lack the self-awareness to imagine that maybe – just maybe – he could turn out to be the bad guy in future stories told about this episode in the history of science.

        Sometimes I wish I could feel the absolute certainty that he evinces, but then I wouldn’t be a very good scientist.


    2. Yes, one nice thing about scholarpedia is the ability to update it as needed. That was also supposed to happen with the “Living” Review Benoit Famaey and I coauthored, but as you might expect, that is an enormous undertaking. Indranil has done this recently.


  7. Indranil, I began reading your astonishingly detailed and comprehensive review of MOND on the computer screen, but debated whether I had enough ink to print the whole paper out, so decided temporarily to refrain from doing so. But I plan to buy more ink today. I have a table in my computer room piled high with papers in astronomy and physics, which I started sorting out by category. But my 4 drawer filing cabinet is a mess, and it will be a monumental task to clean it out and set up folders in a neat organized way, so I can access a particular topic and paper conveniently.


    1. Thanks for your interest! My review is split into sections and subsections, so you can focus on a particular area of astrophysics if desired. The Living Review by Benoit and Stacy is also really great, but many things were found out over the last decade and are consequently just not mentioned there. I took care to include new references even when these were published just a few days before the final author proofs had to be handed in, so it should be as up to date as possible. One of the last results I squeezed in relates to the application of MOND to binary galaxies. Regarding this blog post, I have cited many works on galaxies that apparently formed too early for LCDM. There is also a section on cosmology. I have shown what is perhaps the only viable fit to the CMB in MOND in the review, based on an earlier work and given the recent problems with the AEST model. (A MOND fit to the Bullet Cluster is also shown.) The Hubble tension is of course discussed in the review, but mostly in terms of a local supervoid solution.

      One thing that might be tried is to change the A_L parameter in CMB fits to enhance the lensing amplitude, but by a factor that depends on the scale. Then one can see if there is indeed some degeneracy between foreground lensing of the CMB and the Hubble parameter. Another thing which would preferentially come in at high multipoles, or smaller angular scales, is the free-streaming effect of sterile neutrinos, as these would be hot rather than cold dark matter. While the Planck team have written that masses above 10 eV/c^2 are indistinguishable from cold dark matter, there might still be some degeneracy with the Hubble parameter for slightly higher masses, as I recommend. We have taken on a student to explore this, and good progress has been achieved so far. But I think the Hubble parameter has not been varied yet, as that was supposed to be done after varying the other parameters; doing so has already allowed the fit to improve quite a lot, so it looks really very similar to the LCDM fit by now.
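      In case it helps with that exploration, here is a minimal sketch of the first idea using the camb Python package's phenomenological lensing amplitude. Note that this varies a constant A_L; a scale-dependent boost of the kind described would have to be applied to the lensing potential power spectrum by hand, and the cosmological parameters below are placeholders rather than a fit.

```python
# Sketch: effect of the phenomenological lensing amplitude A_L on the lensed TT spectrum.
# Uses the camb package; parameter values are illustrative placeholders.
import camb

def lensed_tt(alens, h=0.6736, lmax=2500):
    pars = camb.set_params(H0=100 * h, ombh2=0.0224, omch2=0.120,
                           As=2.1e-9, ns=0.965, tau=0.054,
                           lmax=lmax, lens_potential_accuracy=1)
    pars.Alens = alens                  # rescales the lensing potential power spectrum
    results = camb.get_results(pars)
    return results.get_cmb_power_spectra(pars, CMB_unit='muK')['lensed_scalar'][:, 0]

ratio = lensed_tt(1.2)[2:] / lensed_tt(1.0)[2:]   # change in TT from 20% extra lensing
```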


  8. Indranil, your paper is a true academic treasure, being so up-to-date and thorough in its analysis of all these different problems confronting astronomy today. The Bullet Cluster is something that has long puzzled me as to how MOND can accommodate it. But I’m admittedly coming from a general knowledge of physics perspective, and simple mechanical thinking. The weak lensing of the two galaxy clusters indicating most of the mass resides in those areas seems like a slam-dunk for CDM, where that invisible stuff largely passed through the collision zone of the two gas clouds, mostly unaffected. So your section on the Bullet Cluster will be of keen interest to me. Another item you mentioned – sterile neutrinos – is also very interesting to me, as years ago I came up with a speculative idea that predicts that there can only be three generations of neutrinos. But from what I’ve read elsewhere sterile neutrinos would actually be right-handed neutrinos which don’t interact with matter. Thus no need to impute additional generations as the steriles could be within the three generations already known. I’ll be very interested in seeing your analysis on sterile neutrinos factoring into the higher multipoles of the CMB.

    I’ve got quite some reading ahead of me, but I was able to print up to page 40 of your paper before the ink began running out. I’ll install my new cartridge today and finish the printing.


  9. I feel like a kid in a candy store: there are so many ‘goodies’ in Indranil Banik’s and Hongsheng Zhao’s review of MOND. I’ve barely begun to sample the ‘sweets’, but paid particular attention to a brief discussion of the modified inertia version of MOND on page 10, an approach I’ve long been intrigued with. But in that section it’s pointed out there are serious difficulties with the inertia approach that seem quite intractable. But Bekenstein and Milgrom examined this approach in 1984 as cited on page 10, which I’ll have to check out at some point. Out of curiosity I just dialed up the older review from 2012 by Benoit Famaey and Stacy McGaugh, and see they also have a section on the modified inertia strategy, which I’m going to take a look at shortly.


    1. Thanks for your comments about my review! The Bullet Cluster seems to need dark matter, but I argue that it could be hot as the supposed dark matter clumps are quite big, so there is no need to invoke cold dark matter – sterile neutrinos with a rest energy >2 eV would be sufficient.

      The modified inertia approach to MOND was falsified at 6.9 sigma confidence in this subsequent paper:
      https://arxiv.org/abs/2207.11069

      It does indeed seem to face very severe difficulties at the moment.


      1. You should not take claims of N sigma too seriously, as they rely on the error bars reflecting proper random errors, which they almost never do in astronomy. I’ve had lengthy discussions about this with Kyu-Hyun. There are many papers based on our SPARC data of which I’m not a coauthor; this is one of them.

        If we play this N-sigma game, everything is ruled out. In many ways, modified inertia appears to me to be one of the least bad of many formally bad options. We’ve barely begun to explore its implications properly, so to take this attitude now would be to make the same mistake the community made in the ’80s when it dismissed MOND without further thought.

        Sadly, whether our misunderstanding of gravity or inertia is to blame is exactly the argument we should be having. Instead, we are stuck discussing flavors of invisible mass, and inventing new forms of early/late/tardy/luke-warm/funny-smelling dark energy (e.g., 2302.05709).


        1. Stacy,
          Is there any literature on the modified inertia (MI) approach in the context of lensing? Usually, MI in the context of Newtonian F=ma involves squaring the (radial) 3-acceleration. But for lensing we must work with a lightlike geodesic equation… (?)


        2. “misunderstanding of gravity or inertia”
          I don’t like either of them. I always have the feeling that we are missing a very important point.
          And all we read is pure mathematics, which does not contribute to understanding.


          1. As a pure mathematician, I disagree strongly with this statement. What you read is *applied* mathematics, which does not contribute to understanding. Pure mathematics, done properly, is always about improving understanding.


  10. Thank you, Indranil, for referring me to this work. It’s a must read, as I’m very interested in why the modified inertia isn’t viable. Meanwhile while checking papers that I previously printed out, I discovered that I had started printing out Benoit and Stacy’s 2012 “Living” review of MOND in July, 2018. I had only gotten to page 44 of 269 of this monumental work at Springer Link. The arXiv version must have somewhat different formatting or smaller type as it runs to 164 pages. Assuming they are both the same I’ll print out the shorter arXiv version for my reference library.


  11. Oops, I overlooked Stacy’s comment on Kyu-Hyun’s paper while I was typing my comment above. Now I’m doubly intrigued by the inertial approach and will check out all the literature on it.


  12. Stacy

    could you comment

    Astrophysics > Astrophysics of Galaxies
    arXiv:2301.04368 (astro-ph)
    [Submitted on 11 Jan 2023]
    On the functional form of the radial acceleration relation
    Harry Desmond, Deaglan J. Bartlett, Pedro G. Ferreira

    We apply a new method for learning equations from data — Exhaustive Symbolic Regression (ESR) — to late-type galaxy dynamics as encapsulated in the radial acceleration relation (RAR). Relating the centripetal acceleration due to baryons, gbar, to the total dynamical acceleration, gobs, the RAR has been claimed to manifest a new law of nature due to its regularity and tightness, in agreement with Modified Newtonian Dynamics (MOND). Fits to this relation have been restricted by prior expectations to particular functional forms, while ESR affords an exhaustive and nearly prior-free search through functional parameter space to identify the equations optimally trading accuracy with simplicity. Working with the SPARC data, we find the best functions typically satisfy gobs ∝ gbar at high gbar, although the coefficient of proportionality is not clearly unity and the deep-MOND limit gobs ∝ √gbar as gbar → 0 is little evident at all. By generating mock data according to MOND with or without the external field effect, we find that symbolic regression would not be expected to identify the generating function or reconstruct successfully the asymptotic slopes. We conclude that the limited dynamical range and significant uncertainties of the SPARC RAR preclude a definitive statement of its functional form, and hence that this data alone can neither demonstrate nor rule out law-like gravitational behaviour.

    Comments: 12+4 pages, 4 figures, 3 tables; MNRAS submitted
    Subjects: Astrophysics of Galaxies (astro-ph.GA); Cosmology and Nongalactic Astrophysics (astro-ph.CO); Instrumentation and Methods for Astrophysics (astro-ph.IM); Machine Learning (cs.LG)
    Cite as: arXiv:2301.04368 [astro-ph.GA]


    1. Like they say, they’re looking for the best functional form for the radial acceleration relation. There are lots of possibilities if you just fit the kinematic data, which probe reliably down to a tenth of a0 but not much lower. The field of possibilities narrows if one includes the MOND-predicted slope as the asymptotic limit in the low acceleration regime. They’re not doing that, choosing to take an empirical approach, which is eminently reasonable. There are lensing data (see https://tritonstation.com/2021/06/28/the-rar-extended-by-weak-lensing/) that probe to much lower acceleration; including those would strike out the choices that don’t converge towards MOND.


      1. They’re not doing that, choosing to take an empirical approach, which is eminently reasonable.

        so does this mean that “We conclude that the limited dynamical range and significant uncertainties of the SPARC RAR preclude a definitive statement of its functional form, and hence that this data alone can neither demonstrate nor rule out law-like gravitational behaviour”

        follows, given that we have an “empirical approach, which is eminently reasonable”?


          Sigh. The statement you quote is both true and misleading. *By itself* the SPARC data alone don’t pick out a unique functional form. But it also comes pretty damn close – that there are lots of functions one can write that do sorta what the data do fails to describe the vast swath of arbitrary functions that it rules out. There is a narrow range of what the data do. This is a tiny range of the volume of parameter space that should be available to it in terms of dark matter (https://tritonstation.com/2022/08/24/define-better/). So, to the extent that this makes it sound like dark matter is OK (which it doesn’t really say, but I guess some might infer that), it is simply a failure to define what dark matter predicts. It also doesn’t acknowledge that MOND and only MOND predicted a priori what we see; it is instead asking whether we would infer only one unique gravity law from these particular data.

          At this level of equivocation, one would also conclude that rotation curves aren’t [quite] flat.


  13. I notice that a couple of the inquiries above have to do with analyses of the SPARC data by people not directly involved with it. One example I’ve discussed before: https://tritonstation.com/2021/01/31/does-newtons-constant-vary/

    This reminds me of the “discovery” of the 126 GeV gamma ray line in Fermi data by people not on the Fermi team. If that signal were real, the people most directly involved would have noted it. They did not. Instead, others claimed a big discovery that they hadn’t noticed. As it turned out, it was bullshit and the people closest to the data knew that all along.

    I am not saying that there is no value in all publications using the SPARC data that we weren’t directly involved in. There is a lot that can be done that we didn’t have time to do ourselves. However, this is a game of scraping hard at the bottom of the barrel for ever diminishing returns.


    1. Turning the discussion back to cosmology, how do the ages of the oldest stars impact on proposed solutions to the Hubble tension?
      https://arxiv.org/abs/2302.07899

      It seems to me like the Planck cosmology should be right at the background level, as a 10% higher Hubble constant would make the universe too young. But if that is the case, the high locally measured Hubble constant is best understood as an environmental effect like a local supervoid. There is also evidence of a transition in the Hubble constant at about redshift 0.5, which could be the edge of the void:
      https://arxiv.org/abs/2302.05709
      https://arxiv.org/abs/2212.00238

      It seems like the Planck cosmology works beyond that, and also gets the right age for the universe. There is also evidence for a local supervoid from galaxy number counts, but it is not supposed to go all the way out to redshift 0.5. Still, it is possible as our void analysis did have an extended tail to the posterior going to large void radii:
      https://tritonstation.com/2020/10/23/big-trouble-in-a-deep-void/

      I think a local void solution to the Hubble tension has more chance of working than most other proposals. But foreground lensing of the CMB is probably also stronger than predicted in LCDM, which would somewhat affect the cosmological parameters inferred from the CMB.


      1. I took the opposite lesson from the age of the oldest stars. We added Lambda in large part because the mass density made the universe too young. Now we’re considering further fudge factors like early dark energy to provide a bit more time in the early universe for structure to form. Maybe structure forms promptly in the early universe as Bob Sanders predicted, and we’re stuck in conventional cosmology chasing phantoms and making up new free parameters in order to mimic that now-observational fact.

        The first paper you cite assumes Omega_m = 0.3. That is conventionalist thinking. If we’re going to consider MOND, then we have to admit that FLRW is at best a first approximation to some underlying but unknown cosmology. From that perspective, the values of the cosmic parameters are just fudge factors without physical meaning.

        I note that the age of the oldest stars is very similar to 1/H0 for H0=73. I.e., the age of the universe is very close to the coasting limit. That’s a bit of a coincidence in LCDM: we’ve had just enough accelerated expansion to compensate for the earlier deceleration, winding up right where we’d be if there were neither. There’s not much to distinguish the age-redshift relation for LCDM with Planck parameters and that with H0=73 but small Omega_m = Omega_b. Plot it yourself if you don’t believe me. So the age problem might mean H0 has to be lower, but it could just as easily mean Omega_m is lower. Not conventionally of course, but that would just mean conventional cosmology is not quite right, which is hardly a surprise if MOND is right.
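        For anyone who wants to take up that invitation, here is a minimal sketch of the standard FLRW age integral (radiation neglected; the parameter choices are the ones mentioned above):

```python
# Age of an FLRW universe: t(z) = integral_z^inf dz' / ((1+z') H(z')), radiation neglected.
import numpy as np
from scipy.integrate import quad

HUBBLE_TIME_GYR = 977.8   # 1/(1 km/s/Mpc) expressed in Gyr, approximately

def age_gyr(h, om, ol, z=0.0):
    """Cosmic age at redshift z for a matter + Lambda FLRW model."""
    ok = 1.0 - om - ol    # curvature term
    E = lambda zz: np.sqrt(om * (1 + zz)**3 + ok * (1 + zz)**2 + ol)
    integral, _ = quad(lambda zz: 1.0 / ((1 + zz) * E(zz)), z, np.inf)
    return integral * HUBBLE_TIME_GYR / (100 * h)

print(age_gyr(0.6736, 0.315, 0.685))   # Planck-like LCDM: ~13.8 Gyr
print(age_gyr(0.73, 0.05, 0.0))        # h = 0.73 with only baryons, no Lambda
print(HUBBLE_TIME_GYR / 73)            # coasting limit 1/H0: ~13.4 Gyr
```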

        What a low Omega_m does wrong is the geometry, unless you re-invoke Lambda. So we have this weird dichotomy, where t(z) is consistent with low Omega_m and zero Lambda, while the angular diameter distance to the surface of last scattering is consistent with low Omega_m and big Lambda. Can’t have it both ways! Conventionally, at least. Maybe this is a clue to the underlying theory?

        As for voids… I was hesitant to abandon the cosmological principle. If we do – and perhaps we must at late times in MOND (this was already mentioned in Felten 1984) – then I would think the “local” topology would be complex, and the local H0 could be too low as easily as too high. More generally, it would be direction-dependent (no more isotropy). So on top of the huge redshift surveys that have been conducted over the past few decades, we now need to measure real distances to galaxies all over the sky out to some large redshift (z=1?). That is, we need them all to show SN Ia so as not to simply rely on redshift as a distance indicator: we need to measure distance and redshift independently because they aren’t guaranteed to be the same thing in an anisotropic universe. (On average, sure, the universe is still expanding. But it might be expanding faster in one direction and slower in another, or even vary along any given line of sight depending on the typical scale of anisotropy – which might not even have a “typical” scale, it might go fractal on us.)

        Would make a great long term project, but it would dwarf SDSS in scale and duration, so not gonna hold my breath.


        1. The Sanders 1998 paper predicts substantial inhomogeneities at late times in MOND. There is also strong evidence for anisotropy of the Hubble constant:
          https://doi.org/10.1051/0004-6361/202140296

          I was not aware that early dark energy is added to give more time for structure to form. I thought it was supposed to reduce the angular scale of the first acoustic peak in the CMB in order to increase the Hubble parameter we infer from it. In any case, constraints on the age of the Universe have rather different implications in very different cosmological models. I was assuming a conventional FRW cosmology at the background level with Omega_M close to 0.3 as this is what galaxy clusters seem to imply, even in MOND (Pointecouteau & Silk 2005): the ratio of total dynamical mass to baryons is similar in the most massive galaxy clusters and has a value close to 2*pi, which is also what the standard CMB fits require. This is something we can replicate in our vHDM cosmological simulations, where the dark matter fraction is close to 84% in the most massive structures but drops to zero in less massive structures – which are presumably just purely baryonic galaxies.

          As for significant inhomogeneities at late times: yes, these could just as easily cause the local Hubble constant to be lower than the global value instead of higher. But models in which there is the possibility of a 10% level difference either way would still solve the Hubble tension, even if this is only by raising the uncertainty due to cosmic variance. There might not be a fundamental explanation for why the local value has to be precisely 10% higher than the global value.

          I will need to look up the literature on what the age of the universe is in early dark energy models, but my understanding was that these only depart from standard cosmology a little bit prior to recombination, so overall their effect is to have an FRW cosmology with higher Hubble constant consistent with the local value but which still gets a realistic CMB. I suppose the age of the universe would be reduced in such models.


          1. A standard model cosmologist was advocating to me early dark energy as a “conventional” explanation for early structure formation. To be sure, it has been invoked to explain the Hubble tension. From my perspective, this is just another in a long line of epicycles that are being added on to save a broken world model. https://tritonstation.com/2022/11/11/tooth-fairies-auxiliary-hypotheses/

            If we take MOND seriously, then it doesn’t make much sense to me to presume that the best-fit FLRW parameters mean anything. They’re just epicycles too: things we’ve added to approximate an underlying truth for which we have, as yet, no agreed theory. I don’t like that uncertainty, but neither do I expect to be able to snap my fingers and have the right theory appear instantaneously out of thin air.

            It’d be marvelous if we found WIMP-like dark matter rather than treating it as an article of faith. It would be stupendous if we found sterile neutrinos of the right mass and number density to solve all the problems facing MOND in cosmology and galaxy clusters. I do not think this much more likely than finding WIMPs, because in both cases we’re insisting on pounding a square peg (the data) into a round hole (FLRW-only cosmology).


    2. What about the claimed detection of the external field effect led by Chae? Do you think that is a genuine detection? Similar methods were used for testing modified inertia MOND, though focusing more on the inner parts of the rotation curves as these are less relevant for the external field effect.


      1. I was a part of that, and I thought we were right on the edge of what could be done. It was very hard for Chae to convince me that this first detection was real. Going further by parsing subsamples (inner/outer etc) seems over the edge to me. Maybe it is just more than this old man can be brought to believe, but I also sense an attitude that has become widespread that “the data are the data” and Bayes can solve it all. That’s only true if we believe every error bar, which is naive – and obviously not true for SPARC – https://arxiv.org/abs/2101.11644. We’re working to build SPARC 2.0 with a lot more galaxies and a uniform reanalysis of all the original rotation curve data, which should at least provide a consistent set of error bars, and hopefully ones that are correct in the statistical sense that is desired (though that is a lot harder to accomplish than it sounds, and remains to be seen). This wasn’t even computationally possible for SPARC, for reasons of software and manual labor more than computing time.
        Now if only some funding agency would see fit to fund this work.


      I was significantly involved in the studies related to the KBC void and El Gordo, which Elena is following up with a reanalysis based on more recent data. I mentioned in my review that the ten percent level agreement between the locally measured sigma_8 and the Planck prediction is not really great for a theory like MOND that presumably gets rather more structure on large scales. The cosmic shear and redshift space distortion measurements seem to limit substantially faster structure growth than predicted in LCDM and are consistent with it. I think the problem with MOND is that there are just a few admittedly well observed systems that are really problematic for LCDM and work much better in MOND, while there is a lot of evidence from large-scale structure that still favours LCDM. While I am not convinced by the two point correlation function of galaxies due to the need to invent an adjustable bias factor, other things like weak lensing are more direct, especially as the relation between light deflection and gravity is supposed to be the same in MOND as in GR. So I would say evidence on large scales still favours LCDM, but there are still plenty of things on galaxy scales, and perhaps even star cluster scales (their asymmetric tidal tails), which favour MOND. The most serious issue as far as I am concerned is the Local Group satellite planes and the high internal velocity dispersions of their member satellites. Combined with the similar properties of primordial and tidal dwarfs, this does cast strong doubt on whether the baryons and dark matter of a galaxy can be separated, even if you have a tidal dwarf galaxy which really ought to be purely baryonic in LCDM. This is a strong piece of evidence for modified dynamics being the explanation for flat rotation curves.


        Yes, he has, but what seems significant is that his arguments are now getting a hearing in a larger public forum. The IAI has been hosting open debates on the standard models of theoretical physics for some time now. In essence the IAI has become a corrective to the institutional problem afflicting the scientific community, the relentless stifling of open debate by a “cult of consensus.”


  14. Kroupa does not raise the LCDM vs MOND issue at all in the IAI article. What he does explicitly state is that the entirety of the standard cosmological model has been falsified:

    “In fact, the observations tell us that the Universe is structured on every scale, amounting to a falsification of the standard model of cosmology with extreme (more than 5 sigma) statistical confidence. A serious physicist would never again touch a theory that has been ruled out at such a significance level.”

    The “structured on every scale” argument falsifies the expanding universe/Big Bang model because it falsifies the Cosmological Principle, and the CP is an axiomatic assumption of the FLRW model, which is the basis of all existing expanding-universe models, including the standard model however defined.

    The good professor has thrown down the gauntlet. What remains to be seen is how the closed-minded community of cosmological theorists will deal with this affront to their “consensus” model. If past and current history is any guide Prof. Kroupa will soon be declared a crackpot or summarily retired to an emeritus post somewhere and his work ignored.

    Alternatively this might represent the first serious breach in the Great Wall of Silence imposed on dissenting views since Arp. The difference this time being that there seems to be a growing awareness that the standard model is seriously flawed as is the academic structure that allows a “cult of consensus” to stifle dissenting scientific views. The next few months and years should be interesting, one way or the other.


    1. This is a willful misreading so as to favor your own particular theory. First, the Cosmological Principle predates the Big Bang theory. Second, the BBT was formulated precisely because the Strong Cosmological Principle was found to be false. The SCP says that the universe is isotropic and homogeneous in time as well as space.

      So the BBT depends on a violation of the SCP, which you appear(??) to accept.


      1. The cosmological principle is independent of the big bang theory. The cosmological principle states that the universe is homogeneous and isotropic. The big bang theory is simply the statement that the universe is expanding, which when extrapolated back in time results in a singularity. These two are independent: the cosmological principle holds in the old steady state model which contradicts the big bang theory, as it follows from the strong cosmological principle in the steady state model. The big bang theory holds in Alexandre Deur’s gravitational self-interaction model, where gravitational self-interaction leads to a universe which is explicitly nonhomogeneous and anisotropic, and thus the cosmological principle fails, but where the universe is still expanding.


        1. Sorry, this is just historical revisionism. The expanding universe is one of three GR solutions derived by the Russian mathematician Alexander Friedman in 1922 at which time the known scale of the Cosmos was that of the Milky Way. The expanding universe solution was adopted once the redshift-distance relation discovered by Hubble was interpreted as the consequence of a Doppler-like recessional velocity.

          “Einstein’s field equations are not used in deriving the general form for the metric: it follows from the geometric properties of homogeneity and isotropy.”

          “The FLRW metric starts with the assumption of homogeneity and isotropy of space.”

          “(This) metric is a metric based on the exact solution of Einstein’s field equations of general relativity; it describes a homogeneous, isotropic, expanding (or otherwise, contracting) universe.”

          “These equations are the basis of the standard Big Bang cosmological model including the current ΛCDM model.”

          “This model is sometimes called the Standard Model of modern cosmology,[4] although such a description is also associated with the further developed Lambda-CDM model. The FLRW model was developed independently by the named authors in the 1920s and 1930s.”

          Quotes from this article: https://en.wikipedia.org/wiki/Friedmann%E2%80%93Lema%C3%AEtre%E2%80%93Robertson%E2%80%93Walker_metric

          Note also the fundamental role of FLRW in Dr. McGaugh’s most recent post here.


        2. It is not historical revisionism to point out that certain concepts have been conflated with each other historically. As you point out, historically, the concepts of the FLRW metric/cosmological principle and the metric expansion of the universe were all put together under what is commonly called the “standard Big Bang cosmological model”. As the cosmological principle has come under attack by recent observational evidence, it becomes necessary to revisit which principles are in fact fundamental to the concept of the big bang and which principles were adopted ad hoc in the past due to insufficient computing power, observational evidence, and lack of more general models at the time (non-FLRW metrics were developed in the 1930s and 1940s). This means separating the cosmological principle from the metric expansion of the universe, and considering nonstandard Big Bang cosmological models where the cosmological principle fails but where the metric expansion of the universe still is occurring.

          The term “Big Bang” itself was only coined in 1949 by Fred Hoyle, to contrast expanding universe models with his preferred steady state model. Alexander Friedman and the other astronomers in the 1920s who developed the FLRW metric satisfying the cosmological principle never referred to any “Big Bang” model at all.

          Furthermore, the term Big Bang did not reach popular usage by mainstream cosmology until the 1970s.

          https://en.wikipedia.org/wiki/Big_Bang#Concept_history

          Some mainstream cosmologists like Adam Riess use non-FLRW metrics in their Big Bang models, like the Lemaître–Tolman metric, which was developed in 1934, a decade after the FLRW metric:

          https://en.wikipedia.org/wiki/Lema%C3%AEtre%E2%80%93Tolman_metric
          https://iopscience.iop.org/article/10.3847/1538-4357/ab0ebf

          That fact, along with the fact that the original use of the Big Bang by Fred Hoyle contrasted between two theories which both satisfy the cosmological principle, indicates that the Big Bang is independent of the cosmological principle.


      2. The difference between Pavel Kroupa and budrap is that Pavel Kroupa believes the cosmological principle is falsified; i.e. the FLRW metric doesn’t hold. Budrap believes the stronger statement that cosmological redshift is not caused by the metric expansion of the universe, which implies that the cosmological principle is false. However, the converse is not true: just because the cosmological principle is false, doesn’t mean that cosmological redshift is not caused by the metric expansion of the universe. Indeed, there are many cosmological models where the cosmological principle is false but the cosmological redshift is caused by the metric expansion of the universe. Two such models include the Lambda CDM model using the LTB metric instead of the FLRW metric (used to model a local void in the universe), as well as Alexandre Deur’s gravitational self-interaction model.


        1. “Budrap believes the stronger statement that cosmological redshift is not caused by the metric expansion of the universe, which implies that the cosmological principle is false.”

          Setting aside your fatuous claim to know what I believe, Madeleine, I can only point out to you a fact that I know and you apparently do not: There is no empirical evidence supporting the assumption that the cause of the cosmological redshift is some form of recessional velocity. None. If you choose to believe in the recessional velocity interpretation, that’s nice, but it’s not science.

          Science doesn’t care what you or I or anyone else believes. Science only cares about the facts of existence and the ordering thereof. If you can’t separate the facts from your beliefs it’s difficult, if not impossible, to do meaningful science. The redshift=recessional velocity=expanding universe construct has produced ridiculous, reality-challenged cosmological models that bear no resemblance to the physical reality we actually observe.

          Observed reality does not contain a singularity, inflation event, a big bang, expanding spacetime, dark matter or dark energy. Those things do not exist in the Cosmos we observe. They only exist in the models that contain them, and the models therefore do not resemble the physical reality they purport to describe. If you want to believe in the model(s), that’s fine, but it’s not good science.


          1. “Science doesn’t care what you or I or anyone else believes. Science only cares about the facts of existence and the ordering thereof. If you can’t separate the facts from your beliefs it’s difficult, if not impossible, to do meaningful science.”

            The points you make about distinguishing facts vs beliefs could very well apply to every single aspect of what we consider to be physics: particles such as electrons, photons, quarks, neutrinos; force fields such as the electromagnetic field and gravitational fields, and space and time itself (first brought up by Immanuel Kant). If we really get down to the heart of it, when we say we are measuring gravity in outer space, we are really merely measuring the motion of astronomical objects relative to each other; the existence of the gravitational force is a metaphysical assumption that physicists and astronomers believe in because Newton’s and then Einstein’s theories of gravity do a good job at predicting the motion of astronomical objects. When we say we measure electric charge in our laboratories with an electrometer, we are merely measuring the distance between the two plates in the electrometer; the existence of the electromagnetic force is a metaphysical assumption that physicists and astronomers believe in because Maxwell’s electromagnetism and then quantum electrodynamics/electroweak theory/the Standard Model do a good job at predicting physical behaviour. We might as well declare that all physicists and astronomers are wrong because they rely on unproven assumptions to bootstrap their entire scientific endeavor. Similarly, by that standard, it is impossible to do meaningful science at all – as every scientific model and every system of knowledge relies on unproven assumptions; otherwise one runs afoul of the Munchhausen trilemma.


            1. Ms. Birchfield,

              Thank you for this response. As I am aware of your interest in certain topics in the foundations of mathematics, these remarks reflect the difficulties mathematicians face when sorting out how non-mathematicians are using mathematics.

              The “time” we all know is phenomenological and subjective. To do objective “science,” we must address the solipsist’s dilemma with an assumption. For science, this is tantamount to the assumption of a materially existent temporal dimension.

              Whatever the correct view of the universe may be, a consequence of this assumption is that a sphere — as understood experientially — cannot have an inside or an outside. Mathematicians (and physicists) have provided for this with sphere eversions.

              Whatever faults people wish to ascribe to the uses of mathematics in a science reduced to physical reasoning about energy, those faults will lie with the mathematics that permits us to visualize this counterintuitive situation.

              Related to this sphere eversion is its halfway model, the Morin surface,

              https://en.m.wikipedia.org/wiki/Morin_surface

              The Morin surface relates to discussions about energy relative to its being a halfway model,

              https://en.m.wikipedia.org/wiki/Minimax_eversion

              related to Willmore energy,

              https://en.m.wikipedia.org/wiki/Willmore_energy

              If you read the section of the Morin surface article pertaining to its “structure,” you will find that “the math trick” is understood in terms of a mapping onto a tetrahedron.

              But, tetrahedra also relate to 4-dimensional geometry in a different way. The 3-dimensional projection of a tesseract can be understood as a tetrahedron connected to a point at infinity by four edges. Unfortunately, the Wikipedia article on the tesseract does not reflect this point at infinity,

              https://en.m.wikipedia.org/wiki/File:Tesseract_tetrahedron_shadow_matrices.svg

              So, it would seem that tetrahedral form is exhibited for both “infinity in the small” and “infinity in the large.” In this case, infinity in the large may be thought of as a single-point compactification of a Tychonoff space.

              Curiously, the reason someone studying the foundations of mathematics might stumble upon this is that the 2-dimensional vertex projection of a tesseract — slightly deformed to avoid the double point — has the same connectivity as the free Boolean lattice on two generators,

              https://en.m.wikipedia.org/wiki/File:Hypercubeorder_binary.svg

              https://en.m.wikipedia.org/wiki/File:Free-boolean-algebra-hasse-diagram.svg

              There is good cause for this. Birkhoff and von Neumann attempted to understand quantum mechanics by introducing orthomodular algebraic logic. Motivated by the use of groups in quantum mathematics, every Boolean lattice is an orthomodular lattice by virtue of reflection groups.

              Let me observe, however, that what I have called a 2-dimensional projection also has a realization in three dimensions as a rhombic dodecahedron with the double point at its center. There are other 4-dimensional polytopes which theorists attempt to decorate to understand symmetry in quantum mechanics. But, since I have no training in linear geometry, I ought to leave that to mathematicians like Dr. Wilson.

              As for the “sociological consequences” of placing too much reliance on the mathematical justification of beliefs, let me point out that it is an easy matter to find papers relating Curtis’ Miracle Octad Generator to tetrahedral symmetries.

              With regard to the phenomenological experience of “time,” I often direct people to look at the aperiodic tilings of Kari and Culik. These may be extended to cubes in 3-dimensional space. They discuss this in the paper at the link,

              https://lib.jucs.org/article/27167/

              Numerologically, these are interesting in terms of 13 being the number of points of the finite projective plane of order 3 (the planar tile count) and 21 being the number of points of the finite projective plane of order 4 (the spatial cube count).
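
              Those counts come from the standard point count of a finite projective plane of order $n$, which has $n^2 + n + 1$ points:

              $$3^2 + 3 + 1 = 13, \qquad 4^2 + 4 + 1 = 21.$$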

              Since I can only imagine that the scientists reading this are appalled, let me also point out that Mandel discusses measurement for use by scientists in his book, “The Statistical Analysis of Experimental Data.” Setting aside measurements for systems control and measurements for fundamental constants, Mandel maintains that measurement in science generates relations expressed through equations. This is problematic.

              A century of investigation into mathematical reasoning about “truth” involves compositionality down to terms intended to be able to denote “objects.” This is the foundationalism of the trilemma. It leads either to paradox or to transcendental hierarchies which never permit “truth” to be truly definite. Alternatives to this foundationalism are incompatible with “realism,” and it is an easy matter to find papers and opinions from naive bloggers rejecting conceptions of “truth” arising from other conceptions of logic.

              You simply cannot “ground” scientific “stories” on relations and expect to call them “truths.”

              In summary, and in support of yourself and Dr. Wilson, physicists need to stop blaming mathematics (although misapplying mathematics by failing to understand the primacy of energy in physics is fair game).

      3. If the cosmological principle, and thus the FLRW Lambda CDM model, is discarded by mainstream cosmology, then alternative cosmological models will battle to become the new standard model of cosmology, some of which imply an expanding universe and some of which do not.

        If mainstream cosmology ends up accepting another cosmological model in which cosmological redshift is caused by the metric expansion of the universe, then the FLRW Lambda CDM model will simply have been replaced by another big bang theory. However, if mainstream cosmology accepts a cosmological model in which cosmological redshift is explained by a mechanism unrelated to the metric expansion of the universe, then the big bang is unneeded.

      4. Also, budrap is a supporter of the tired light explanation of cosmological redshift, and simply assuming that the cosmological principle is false is not enough reason to favor tired light over the metric expansion of the universe.

        In addition, traditional tired light models have issues with special relativity, specifically the time dilation of cosmological sources, and with the existence of the cosmic microwave background, both of which would need to be explained by any tired light model in order for it to be successful. The former is a problem shared with MOND as a theory of gravity, since the original MOND is non-relativistic; however, while there are relativistic MOND models such as TeVeS and AeST (the latter of which is more successful than CDM), I am not aware of any tired light model which is consistent with time dilation.

        Until supporters of the tired light paradigm provide a model which is consistent with the existing experimental evidence, I do not see how tired light would be favored over some non-FLRW big bang theory like Alexandre Deur’s gravitational self-interaction model, or some third alternate explanation for cosmological redshift that is not related to either the expansion of the universe or the tired light explanation.

        1. “I am not aware of any tired light model which is consistent with time dilation.”

          Let me help you out with that. There are two known causes of redshift. One is recessional velocity – the observer and emitter are moving apart. The other is the relativistic gravitational redshift.

          At first glance it might seem that the GR model could not produce a redshift that would scale with distance, but in fact it can if you use the proper analytical framework. That framework involves applying the GR redshift formula to an expanding spherical wavefront of light emitted by a galaxy.

          The redshift formula is applied to the wavefront at successive cosmologically significant distances, recalculating the mass term at each interval using a reasonable estimate of the mass density; the result is a redshift correlated with distance. It’s crude but effective – you can do it on a spreadsheet, and as a bonus, since we are using GR, time dilation is built in. Give it a try.
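
          A minimal sketch of that spreadsheet-style calculation, assuming “the GR redshift formula” means the Schwarzschild factor applied to the mass enclosed within the wavefront radius; the density value and distances below are illustrative placeholders, not fitted numbers, and this is only one reading of the procedure described above:

```python
# Rough sketch: treat the light as an expanding spherical wavefront and,
# at each radius, apply the Schwarzschild gravitational redshift factor
# for the mass enclosed within that radius, assuming a uniform density.
# All numbers are illustrative; this is not a validated model.
import math

G   = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
C   = 2.998e8     # speed of light, m/s
MPC = 3.086e22    # one megaparsec in metres
RHO = 9e-27       # assumed mean mass density, kg/m^3 (illustrative)

def gravitational_redshift(radius_m, rho=RHO):
    """Redshift z from the Schwarzschild factor for the mass enclosed within radius_m."""
    mass = (4.0 / 3.0) * math.pi * radius_m ** 3 * rho
    rs = 2.0 * G * mass / C ** 2   # Schwarzschild radius of the enclosed mass
    return 1.0 / math.sqrt(1.0 - rs / radius_m) - 1.0

for d_mpc in (100, 500, 1000, 2000, 4000):
    print(f"{d_mpc:5d} Mpc -> z = {gravitational_redshift(d_mpc * MPC):.4f}")
```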

          1. “That framework involves applying the GR redshift formula to an expanding spherical wavefront of light emitted by a galaxy.”

            How does this produce the blue shift of Andromeda?

      5. I’d say the best opportunity for those against big bang theories in general to disprove the expansion of the universe would be to extend the Tolman surface brightness test to larger redshift. Currently it has been tested to z=5, which is insufficient for distinguishing between static and expanding universes, as the results at z=5 are consistent with both. But extending the Tolman surface brightness test to z=10 or z=15 is likely to resolve the question in favour of either a static universe or an expanding universe (see the rough comparison below), and is likely within the capabilities of the JWST. This is better than discovering stars in very high redshift galaxies via the JWST and future infrared and submillimetre telescopes, as discussed earlier on this page, since those would require around z=200 to disprove the expansion of the universe, which will not be reached by telescopes anytime soon.

        Nonetheless, until evidence appears which is inconsistent with the expansion of the universe and/or an alternate model appears which is more consistent with experimental and observational evidence, the current evidence and available models still favor big bang theories.
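
        For a sense of scale, here is a rough comparison assuming the standard Tolman scalings: bolometric surface brightness dims as (1+z)^-4 in an expanding universe but only as (1+z)^-1 in a static universe with tired-light redshift (real analyses must also model galaxy evolution and bandpass effects):

```python
# Compare the assumed Tolman dimming factors at several redshifts:
# (1+z)^-4 for an expanding universe versus (1+z)^-1 for a static,
# tired-light universe. The gap between the two predictions grows
# rapidly with z, which is why pushing the test well beyond z ~ 5
# would sharpen it (galaxy evolution aside).
for z in (1, 5, 10, 15):
    expanding = (1 + z) ** -4
    static_tl = (1 + z) ** -1
    ratio = static_tl / expanding   # equals (1+z)^3
    print(f"z = {z:2d}: expanding prediction is {ratio:,.0f}x fainter than static")
```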

        1. “I’d say the best opportunity for those against big bang theories in general to disprove the expansion of the universe would be by extending the Tolman surface brightness test to larger redshift.”

          That gets science backwards. It is no more necessary for me to disprove the expansion of the universe than to disprove the existence of angels. Those who wish to make extraordinary claims regarding the nature of physical reality are the ones obligated to provide scientific (empirical) evidence for those claims.

          In that regard, angels and the expanding universe conjecture, both lacking empirical evidence, exist on the same metaphysical plane. It doesn’t matter how much you believe in them; neither is an observed part of physical reality.

        2. For me, it is always a pleasure to read your argumentation.
          It is clearly structured and convincing.

          And yes, I, too, find that the supernova brightness curves falsify tired light.

          1. “And yes, I, too, find that the supernova brightness curves falsify tired light.”

            By the way, Stefan, I never said that the brightness curves falsify tired light. Quite the contrary: I said that it was the best opportunity for opponents of the big bang theory to prove themselves correct.

            An analysis from 2014 by Lerner, Falomo, and Scarpa showed that, up to z=5, the surface brightness test is consistent with a static universe. However, they also said that “The agreement of the SB data with the hypotheses of a non-expanding, Euclidean Universe and of redshift proportional to distance is not sufficient by itself to confirm what would be a radical transformation in our understanding of both the structure and evolution of the cosmos and of the propagation of light.”

            https://www.worldscientific.com/doi/abs/10.1142/S0218271814500588

            So what is needed is more observational data, because there isn’t enough data to cleanly separate out a static universe from an expanding universe via the Tolman surface brightness test.

            The issues with tired light lie elsewhere, especially in its apparent contradictions with general relativity at this moment in time. But that is mostly a model building issue. The most glaring issue with MOND as an alternative to dark matter for the longest time was that MOND is non-relativistic, which is also a model building issue. However, over the past few years, its proponents have come up with a model of MOND, AeST, which is relativistic and is consistent with the existing data as far as we know:

            https://journals.aps.org/prd/abstract/10.1103/PhysRevD.106.104041

            Something similar can be done with tired light models; budrap above claims to have a model which is consistent with general relativity, using relativistic gravitational redshift. Hopefully he or someone else will publish a paper with the specifics of the model spelled out in detail, showing how it fares against the most recent astrophysical and cosmological evidence in 2023. However, until that work is done, mainstream cosmology will see no reason to consider relativistic gravitational redshift as a viable alternative to the various expanding universe big bang models, because its proponents haven’t shown it to be a working model yet.

  15. Your first point is a non sequitur. That the CP predates the BB is as inarguable as it is irrelevant to Kroupa’s arguments (and by extension mine). Your second point is also a non sequitur since neither Kroupa nor I invoked the SCP, not to mention the fact that since the CP is falsified so is the SCP.

    1. The point being that violation of either the SCP or the CP would support a Big Bang, not be evidence against it. The whole point of inflation theory was to paper over the fact that the BBT worked much better if the CP were violated than if it weren’t. You’re arguing against your own position.

      How old do you believe the universe to be? (Or, since you don’t believe in a ‘universe’, how old do you believe the oldest visible stars to be?)
