This post continues the series summarizing our ApJ paper on high redshift galaxies. To keep it finite, I will focus here on the growth of stellar mass. The earlier post discussed what we expect in theory. This depends on mass assembly (slow in LCDM, fast in MOND), on how the assembled mass is converted into stars, and on how those stars shine in light we can detect. We know a lot about stars and their evolution, so for this post I will assume we know how to convert a given star formation history into the evolution of the light it produces. There are of course caveats to that, which we discuss in the paper and perhaps will get to in a future post. It’s exhausting to be exhaustive, so not today, Satan.

The principal assumption we are obliged to make, at least to start, is that light traces mass. As mass assembles, some of it turns into stars, and those stars produce light. The astrophysics of stars and the light they produce is the same in any structure formation theory, so with this basic assumption, we can test the build-up of mass. In another post we will discuss some of the ways in which we might break this obvious assumption in order to save a favored theory. For now, we take this obvious assumption to hold, so that what we see at high redshift provides a picture of how mass assembles.

Before JWST

This is not a new project; people have been doing it for decades. We like to think in terms of individual galaxies, but there are lots out there, so an important concept is the luminosity function, which describes the number of galaxies as a function of how bright they are. Here are some examples:

Figure 3 from Franck & McGaugh (2017), showing the number of galaxies as a function of their brightness in the 4.5 micron band of the Spitzer Space Telescope in candidate protoclusters from z = 2 to 6. Each panel notes the number of galaxies contributing to the Schechter luminosity function+ fit (gray bands), the apparent magnitude m* corresponding to the typical luminosity L*, and the redshift range. The magnitude m* is characteristic of how bright typical galaxies are at each redshift.

One reason to construct these luminosity functions is to quantify what is typical. Hundreds of galaxies inform each fit. The luminosity L* is representative of the typical galaxy, not just anecdotal individual examples. At each redshift, L* corresponds to an observed apparent magnitude m*, which we plot here:

Figure 3 from McGaugh et al. (2024). The redshift dependence of the Spitzer [4.5] apparent magnitude m* of Schechter function fits to populations of galaxies in clusters and candidate protoclusters; each point represents the characteristic brightness of the galaxies in each cluster. The apparent brightness of galaxies gets fainter with increasing redshift because galaxies are more distant, with the amount they dim depending also on their evolution (lines). The purple line is the monolithic exponential model we discussed last time. The orange line is the prediction of the Millennium simulation (the state of the art at the time Jay Franck wrote his thesis) and the Munich galaxy formation model based on it. The open squares are the result of applying the same algorithm to the simulation as used on the data; this is what we would have observed if the universe looked like LCDM as depicted by the Munich model. The real universe does not look like that.

We plot faint to bright going up the y-axis; the numbers get smaller because of the backwards definition of the magnitude scale (which dates to ancient times in which the stars that appeared brightest to the human eye were “of the first magnitude,” then the next brightest of the second magnitude, and so on). The x-axis shows redshift. The top axis shows the corresponding age of the universe for vanilla LCDM parameters. Each point shows the typical apparent magnitude, as informed by observations of dozens to hundreds of individual galaxies. Each galaxy has a spectroscopic redshift, which we made a requirement for inclusion in the sample. These are very accurate; no photometric redshifts are used to make the plot above.
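For readers who want the backwards scale made explicit, the magnitude system is logarithmic in flux (this is the standard definition, nothing specific to our paper):

$$ m_1 - m_2 = -2.5\,\log_{10}\!\left( F_1 / F_2 \right), $$

so five magnitudes correspond to a factor of 100 in brightness, and one magnitude to a factor of about 2.512.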

One thing that impressed me when Jay made the initial version of this plot is how well the models match the evolution of m* at z < 2, which is most of cosmic time (the past ten billion years). This encourages one to believe that the assumption adopted above, that we understand the evolution of stars well enough to do this, might actually be correct. I was, and remain, especially impressed with how well the monolithic model with a simple exponential star formation history matches these data. It’s as if the inferences the community had made about the evolution of giant elliptical galaxies from local observations were correct.
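To make the exponential star formation history concrete, here is a minimal sketch of the bookkeeping. This is my own toy illustration, not the code behind the models in the paper; the e-folding time and final mass are placeholder values chosen for illustration.

    import numpy as np

    def stellar_mass(t_gyr, tau_gyr=0.5, m_final=3e11):
        """Cumulative stellar mass (Msun) formed by time t under an
        exponentially declining star formation history,
        SFR(t) = SFR0 * exp(-t / tau), normalized to reach m_final
        at late times. Stellar mass loss is ignored here, though the
        real population models account for it."""
        return m_final * (1.0 - np.exp(-t_gyr / tau_gyr))

    # With tau = 0.5 Gyr, ~86% of the final mass is in place after 1 Gyr
    # and ~98% after 2 Gyr: prompt, in situ formation.
    print(stellar_mass(1.0) / 3e11, stellar_mass(2.0) / 3e11)

The point of the sketch is the shape: the mass builds up quickly and then saturates, which is the defining behavior of the monolithic model.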

The new thing that Jay’s work showed was that the evolution of typical cluster galaxies at z > 2 persists in tracking the monolithic model that formed early (zf = 10). There is a lot of scatter in the higher redshift data even though there is little at lower redshift. This is to be expected for both observational reasons – the data get rattier at larger distances – and theoretical ones: the exponential star formation history we assume is at best a crude average; at early times when short-lived but bright massive stars are present there will inevitably be stochastic variation around this trend. At later times the law of averages takes over and the scatter should settle down. That’s pretty much what we see.

What we don’t see is the decline in typical brightness predicted by contemporaneous LCDM models. The specific example shown is the Munich galaxy formation model based on the Millennium simulation. However, the prediction is generic: galaxies get faint at high redshift because they haven’t finished assembling yet. This is not a problem of misunderstanding stellar evolution, it is a failure of the hierarchical assembly paradigm.

In order to identify [proto]clusters at high redshift, Jay devised an algorithm to identify galaxies in close proximity on the sky and in redshift space, in excess of the average density around them. One question we had was whether the trend predicted by the LCDM model (the orange line above) would be reproduced in the data when analyzed in this way. To check, Jay made mock observations of a simulated lookback cone using the same algorithm. The results (not previously published) are the open squares in the plot above. These track the “right” answer known directly in the form of the orange line. Consequently, if the universe had looked as predicted, we could tell. It doesn’t.
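To give a flavor of what such a search involves, here is a deliberately oversimplified sketch. It is not Jay’s published CCPC algorithm, which applies more careful criteria; the search radius, redshift window, and membership threshold below are placeholders.

    import numpy as np

    def candidate_groups(ra, dec, z, r_max_deg=0.1, dz_max=0.02, min_members=4):
        """Toy protocluster search: for each galaxy, collect neighbors
        that are close on the sky and in redshift, and flag groupings
        that exceed a membership threshold. ra and dec are in degrees,
        z is spectroscopic redshift; all inputs are numpy arrays."""
        groups = []
        for i in range(len(z)):
            dra = (ra - ra[i]) * np.cos(np.radians(dec[i]))  # flat-sky approximation
            sep = np.hypot(dra, dec - dec[i])                # separation in degrees
            members = np.where((sep < r_max_deg) & (np.abs(z - z[i]) < dz_max))[0]
            if len(members) >= min_members:
                groups.append(members)
        return groups

A real implementation must also merge duplicate groupings and, crucially, compare each candidate against the mean field density, since an overdensity is only meaningful relative to the average.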

The above plot is in terms of apparent magnitude. It is interesting to turn this into the corresponding stellar mass. There has also been work on the subject since Jay’s, so I wanted to include it. An early version of a plot mapping m* to stellar mass and redshift to cosmic time was this:

The stellar mass of L* galaxies as a function of cosmic age. Data as noted in the inset. The purple/orange lines represent the monolithic/hierarchical models, as above.

The more recent data (which also predate JWST) follow the same trend as the preceding data. All the data follow the path of the monolithic model. Note that the bulk of the stars are formed in situ in the first few billion years; the stellar mass barely changes after that. There is quite a bit of stellar evolution during this time, which is why m* in the figure above changes in a complicated fashion while the stellar mass remains constant. This again provides some encouragement that we understand how to model stellar populations.
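Schematically (standard bookkeeping, nothing new here), the observed characteristic magnitude combines three redshift-dependent pieces:

$$ m^*(z) = M^*\!\big(t(z)\big) + \mathrm{DM}(z) + K(z), $$

where M* is the evolving absolute magnitude of the stellar population, DM is the distance modulus, and K is the k-correction. The stellar mass can sit still while all three terms conspire to make m*(z) complicated.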

The data in the first billion years are not entirely self-consistent. For example, the yellow points are rather higher in mass than the cyan points. This difference is not one in population modeling, but rather in how much of a correction is made for non-stellar, nebular emission. So as not to go down that rabbit hole, I chose to adopt the lowest stellar mass estimates for the figure that appears in the paper (below). Note that this is the most conservative choice; I’m trying to be as favorable to LCDM as is reasonably possible.

Figure 4 from McGaugh et al. (2024). The characteristic stellar mass as a function of time with the corresponding redshift noted at the top.

There were more recent models as well as more recent data, so I wanted to include those. There are, in fact, way too many models to illustrate without creating a confusing forest of lines, so in the end I chose a couple of popular ones, Illustris and FIRE. Illustris is the descendant of Millennium, and shows identical behavior. FIRE has a different scheme for forming stars, and does so more rapidly than Illustris. However, its predictions still fall well short of the data. This is because both simulations share the same LCDM cosmology with the same merger tree assembly of structure. Assembling the mass promptly enough is the problem; it isn’t simply a matter of making stars faster.

I’ll show one more version of this plot to illustrate the predicted evolutionary trajectories. In the plots above, I only show models that end up with the mass of a typical local giant elliptical. Galaxies come in a variety of masses, so what does that look like?

The stellar mass of galaxies as a function of cosmic age. Data as above. The orange lines represent the hierarchical models that result in different final masses at z = 0.

The curves of stellar growth predicted by LCDM have pretty much the same shape, just different amplitude. The most massive case illustrated above is reasonable insofar as there are real galaxies that massive, but they are rare. They are also rare in simulations, which makes the predicted curve a bit jagged as there aren’t enough examples to define a smooth trajectory as there are for lower mass objects. More importantly, the shape is wrong. One can imagine that the galaxies we see at high redshift are abnormally massive, but even the most massive galaxies don’t start out that big at high redshift. Moreover, they continue to grow hierarchically in LCDM, so they wind up too big. In contrast, the data look like the monolithic model that we made on a lark, no muss, no fuss, no need to adjust anything.

This really shouldn’t have come as a surprise. We already knew that galaxies were impossibly massive at z ~ 4 before JWST discovered that this was also true at z ~ 10. The a priori prediction that LCDM has made since its inception (earlier models show the same thing) fails. More recent models fail, though I have faith that they will eventually succeed. This is the path theorists have always taken, and the obvious path here, as I remarked previously, is to make star formation (or at least light production) artificially more efficient so that the hierarchical model looks like the monolithic model. For completeness, I indulge in this myself in the paper (section 6.3) as an exercise in what it takes to save the phenomenon.

A two-year delay

Regular readers of this blog will recall that in addition to the predictions I emphasized when JWST was launched, I also made a number of posts about the JWST results as they started to come in back in 2022. I had also prepared the above as a science paper that is now sections 1 to 3 of McGaugh et al. (2024). The idea was to have it ready to go so I could add a brief section on the new JWST results and submit right away – back in 2022. The early results were much as expected, but I did not rush to publish. Instead, it has taken over two years since then to complete what turned into a much longer manuscript. There are many reasons for this, but the scientific reason is that I didn’t believe many of the initial reports.

JWST was new and exciting and people fell all over themselves to publish things quickly. Too quickly. To do so, they relied on a calibration of the telescope plus detector system made while it was on the ground prior to launch. This is not the same as calibrating it on the sky, which is essential but takes some time. Consequently, some of the initial estimates were off.

Stellar masses and redshifts of galaxies from Labbé et al. The pink squares are the initial estimates that appeared in their first preprint in July 2022. The black squares with error bars are from the version published in February 2023. The shaded regions represent where galaxies are too massive too early for LCDM. The lighter region is where galaxies shouldn’t exist; the darker region is where they cannot exist.

In the example above, all of the galaxies had both their initial mass and redshift estimates change with the updated calibration. So I was right to be skeptical, and wait for an improved analysis. I was also right that while some cases would change, the basic interpretation would not. All that happened in the example above was that the galaxies moved from the “can’t exist in LCDM” region (dark blue) into the “really shouldn’t exist in LCDM” region (light blue). However, the widespread impression was that we couldn’t trust photometric redshifts at all, so I didn’t see what new I could justifiably add in 2022. This was, after all, the attitude Jay and I had taken in his CCPC survey where we required spectroscopic redshifts.

So I held off. But then it became impossible to keep up with the fire hose of data that ensued. Every time I got the chance to update the manuscript, I found some interesting new result had been published that I had to include. New things were being discovered faster than I could read the literature. I found myself stuck in the Red Queen’s race, running as fast as possible just to stay in place.

Ultimately, I think the delay was worthwhile. Lots new was learned, and actual spectroscopic redshifts began to appear. (Spectroscopy takes more telescope time than photometry: spreading out the light reduces the signal-to-noise per pixel, necessitating longer exposure times, so it always lags behind; the note below quantifies this. One also discovers galaxies in the same images that are used for photometry, so photometry gets a head start.) Consequently, there is a lot more in the paper than I had planned on. This is another long blog post, so I will end it where I had planned for the original paper to end, with the updated version of the plot above.
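That note: in the idealized photon-limited regime (ignoring backgrounds and detector noise, which generally make matters worse),

$$ \mathrm{S/N} \propto \sqrt{F_{\mathrm{pix}}\, t}, $$

so if dispersing the light cuts the flux per pixel by a factor of R (the number of resolution elements it is spread over), recovering the same per-pixel S/N requires roughly R times the exposure time.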

Massive galaxies at high redshift from JWST

The stellar masses of galaxies discovered by JWST are shown below as a function of redshift. Unlike most of the plots above, these are individual galaxies rather than typical L* galaxies. Many are based on photometric redshifts, but those in solid black have spectroscopic redshifts. There are many galaxies that reside in a region they should not, at least according to LCDM models: their mass is too large at the observed redshift.

Figure 6 from McGaugh et al. (2024)Mass estimates for high-redshift galaxies from JWST. Colored points based on photometric redshifts are from Adams et al. (2023; dark blue triangles), Atek et al. (2023; green circles), Labbé et al. (2023; open squares), Naidu et al. (2022; open star), Harikane et al. (2023; yellow diamonds), Casey et al. (2024; light blue left-pointing triangles), and Robertson et al. (2024; orange right-pointing triangles). Black points from Wang et al. (2023; squares), Carniani et al. (2024; triangles), Harikane et al. (2024; circles) and Castellano et al. (2024; star) have spectroscopic redshifts. The upper limit for the most massive galaxy in TNG100 (Springel et al. 2018) as assessed by Keller et al. (2023) is shown by the light blue line. This is consistent with the maximum stellar mass expected from the stellar mass–halo mass relation of Behroozi et al. (2020; solid blue line). These merge smoothly into the trend predicted by Yung et al. (2019b) for galaxies with a space density of 10−5 dex−1 Mpc−3 (dashed blue line), though L. Yung et al. (2023) have revised this upward by ∼0.4 dex (dotted blue line). This closely follows the most massive objects in TNG300 (Pillepich et al. 2018; red line). The light gray region represents the parameter space in which galaxies were not expected in LCDM. The dark gray area is excluded by the limit on the available baryon mass (Behroozi & Silk 2018; Boylan-Kolchin 2023). [Note added: I copied this from the caption in our paper, but the links all seem to go to that rather than to each of the cited papers. You can get to them from our reference list if you want, but it’ll take some extra clicks. It looks like AAS has set it up this way to combat trawling by bots.]

One can see what I mean about a fire hose of results from the number of references given here. Despite the challenges of keeping track of all this, I take heart in the fact that many different groups are finding similar results. Even the results that were initially wrong remain problematic for LCDM. Despite all the masses and redshifts changing when the calibration was updated, the bulk of the data (the white squares, which are the black squares in the preceding plot) remain in the problematic region. The same result is replicated many times over by others.

The challenge, as usual, is assessing what LCDM actually predicts. The entire region of this plot is well away from the region predicted for typical galaxies. To reside here, a galaxy must be an outlier. But how extreme an outlier?

The dark gray region is the no-go zone. This is where dark matter halos do not have enough baryons to make the observed mass of stars. It should be impossible for galaxies to be here. I can think of ways to get around this, but that’s material for a future post. For now, it suffices to know that there should be no galaxies in the dark gray region. Indeed, there are not. A few straddle the edge, but nothing is definitively in that region given the uncertainties. So LCDM is not outright falsified by these data. This bar is set very low, as the galaxies that do skirt the edge require that basically all of the available baryons have been converted into stars practically instantaneously. This is not reasonable.

Not with ten thousand simulations could you do this.
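The arithmetic behind the dark gray boundary is simple enough to sketch. A halo of mass Mhalo contains at most the cosmic baryon fraction fb = Ωb/Ωm worth of baryons, so the stellar mass cannot exceed that times an efficiency factor. A minimal sketch, using the standard Planck-era baryon fraction and an illustrative halo mass:

    F_BARYON = 0.157  # cosmic baryon fraction, Omega_b / Omega_m (Planck-era value)

    def max_stellar_mass(m_halo, efficiency=1.0):
        """Hard upper limit on the stellar mass (Msun) that a halo of
        mass m_halo (Msun) can host. efficiency = 1 turns every
        available baryon into stars: the absurdly generous limit that
        defines the dark gray no-go region."""
        return efficiency * F_BARYON * m_halo

    # Even at 100% efficiency, a 10^11 Msun halo caps out near 1.6e10 Msun of stars:
    print(f"{max_stellar_mass(1e11):.2e}")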

So what is a reasonable expectation for this diagram? That’s hard to say, but that’s what the white and light gray region attempts to depict. Galaxies might plausibly be in the white region but should not be in the light gray region for any sensible star formation efficiency.

One problem with this statement is that it isn’t clear what a sensible star formation efficiency is. We have a good idea of what it needs to be, on average, at low redshift. There is no clear indication that it changes as a function of redshift – at least until we hit results like this. Then we have to be on guard for confirmation bias in which we simply make the star formation efficiency be what we need it to be. (This is essentially what I advocate as the least unreasonable option in section 6.3 of the ApJ paper.)

OK, but what should the limit be? Keller et al. (2023) made a meta-analysis of the available simulations; I have used their analysis and my own reading of the literature to establish the lower boundary of the light gray area. It is conceivable that you would get the occasional galaxy this massive (the white region is OK), but not more so (the light gray region is not OK). The boundary is the most extreme galaxy in each simulation, so as far from typical as possible. The light gray region is really not OK; the only question is where exactly it sets in.

The exact location of this boundary is not easy to define. Different simulations give different answers for different reasons. These are extremal statistics; we’re asking what the one most massive galaxy is in an entire simulation. Higher resolution simulations resolve the formation of small structures like galaxies sooner, but larger simulations have more opportunity for extreme events to happen. Which “wins” in terms of making the rare big galaxy early is a competition between these effects that appears, in my reading, to depend on details of simulation implementation that are unlikely to be representative of physical reality (even assuming LCDM is the correct underlying physics).
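In extreme-value terms, the single most massive object expected in a simulation volume is set, very roughly (ignoring clustering and the sizable scatter this glosses over), by where the cumulative number density falls to one per box:

$$ n(>M_{\mathrm{max}})\, V_{\mathrm{box}} \approx 1. $$

Bigger boxes push M_max up by sampling rarer objects; higher resolution changes n(>M) itself at early times.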

To make my own assessment, I reviewed the accessible simulations (they don’t all provide the necessary information) to find the single most massive simulated galaxy as a function of redshift. As ever, I am looking for the case that is most favorable to LCDM. The version I found comes from the large-box, next generation Illustris simulation TNG300. This is the red line a bit into the gray area above. Galaxies really, really should not exist above or to the right of that line. Not only have I adopted the most generous simulation estimate I could find, I have also chosen not to normalize to the area surveyed by JWST. One should do this, but the area so far surveyed is tiny, so normalizing would slide the line down. Even if galaxies as massive as this exist in TNG300, we would have to have been really lucky to point JWST at that spot on a first go. So the red line is doubly generous, and yet there are still galaxies that exceed this limit.
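To see how much the area normalization matters, compare volumes with a toy Poisson argument. The survey volume below is a made-up placeholder, not the geometry of any actual JWST field; only the logic matters.

    import math

    V_BOX = 300.0**3   # TNG300 comoving volume, roughly (300 Mpc)^3
    V_SURVEY = 1.0e5   # hypothetical survey volume in Mpc^3 (placeholder)

    # If the most massive simulated galaxy is a one-per-box event, its
    # expected count in the survey scales with the volume ratio:
    expected = V_SURVEY / V_BOX
    p_at_least_one = 1.0 - math.exp(-expected)
    print(f"expected ~ {expected:.1e}; P(at least one) ~ {p_at_least_one:.1e}")

With numbers like these, catching the one monster in the box on a first pointing would require luck at the part-per-thousand level, which is why leaving the red line unnormalized is generous.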

The bottom line is that yes, JWST data pose a real problem for LCDM. It has been amusing watching this break people’s brains. I’ve seen papers that say this is a problem for LCDM because you’d have to turn more than half of the available baryons into stars and that’s crazy talk, and others that say LCDM is absolutely OK because there are enough baryons. The observational result is the same (galaxies with very high stellar-to-halo mass ratios), but the interpretation appears to differ because one group of authors treats the light gray region as forbidden while the other sets the bar at the dark gray region. So the difference in interpretation is not a conflict in the data, but an inconsistency in what [we think] LCDM predicts.

That’s enough for today. Galaxy data at high redshift are clearly in conflict with the a priori predictions of LCDM. This was true before JWST, and remains true with JWST. Whether the observations can be reconciled with LCDM I leave as an exercise for scientists in the field, or at least until another post.


+A minor technical note: the Schechter function is widely used to describe the luminosity function of galaxies, so it provides a common language with which to quantify both their characteristic luminosity L* and space density Φ*. I make use of it here to quantify the brightness of the typical galaxy. It is, of course, not perfect. As we go from low to high redshift, the luminosity function becomes less Schechter-like and more power law-like, an evolution that you can see in Jay Franck’s plot. We chose to use Schechter fits for consistency with the previous work of Mancone et al. (2010) and Wylezalek et al. (2014), and also to down-weight the influence of the few very bright galaxies should they be active galactic nuclei or some other form of contaminant. Long story short, plausible contaminants (no photometric redshifts were used; sample galaxies all have spectroscopic redshifts) cannot explain the bulk of the data; our estimates of m* are robust and, if anything, underestimate how bright galaxies typically are.
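For reference, the Schechter form in both luminosity and apparent-magnitude variables (these are the standard definitions, not anything particular to our fits):

$$ \Phi(L)\,dL = \Phi^* \left( \frac{L}{L^*} \right)^{\alpha} e^{-L/L^*}\, \frac{dL}{L^*}, $$

which, using m - m* = -2.5 log10(L/L*), becomes

$$ \Phi(m) \propto 10^{-0.4(\alpha+1)(m-m^*)} \exp\!\left[ -10^{-0.4(m-m^*)} \right]. $$

The exponential cutoff is what down-weights the few very bright galaxies in the fits.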

17 thoughts on “Measuring the growth of the stellar mass of galaxies over cosmic time”

  1. I appreciate this deep dive into your paper, and the YouTube video discussing the same.

    As Stacy has implied, retooling the early LCDM growth puts even more pressure on the current interpretation that the CMB is a record of a ‘uniform’ primal event. It is fair to say that, in spite of enormous amounts of new data, the convergence of the Big Bang nucleosynthesis to a ‘precision’ peak has led us to a blank space. How far do we have to retrace our steps? At least fifty, but more likely at least a hundred years of predictive retooling.

    1. Reality has a hierarchical structure, as evidenced by numerous observations. This structure arises from the emergence of new/underlying properties when complexity reaches specific thresholds. From a quantum substrate to classical reality, from living beings to stars and galaxies, each level reflects this process of emergence.

      This hierarchical perspective can also be applied to the Big Bang, which can be reinterpreted as an emergent event—like a phase transition—from a quantum substrate. As complexity reached a critical threshold in the quantum substrate, classical reality, along with space and time, emerged. This phase-transition-like event occurred throughout the quantum substrate, resulting in an almost uniform classical universe.

      The Big Bang was not a creation event but rather a transformation—an emergence of classical reality from something that always existed. Complexity, as the driving force behind underlying emergent properties, offers a unifying idea, eliminating the need for any notion of “creation.”

      1. This would be a reasonable supposition if there were more structure in the CMB. That is the root problem with early development of galaxies, and in my opinion, an insurmountable problem if the red dots are not primal.

        The Hubble Tension IS the difference between the extrapolated result of Big Bang physics and what we actually see, hence the blank space.

        When Perlmutter and company stretched the Supernova distance curves, they minimized the apparent acceleration in every way that they could imagine. They did not include any reddening or any selection effects in their estimates, because doing so would only make things worse – more difference between the supernova value of Ho and the Planck mission value of the same.

        It is also worth recounting some of the history of the CMB Ho value: Prior to the Planck mission, the WMAP and COBE missions did not have in-situ hard baseline standards. (COBE did not even have a reliable clock.) Estimates were made that were based on the intensity of the microwave emissions of Jupiter and such. As researchers tried to pull together the year-after-year of the WMAP curves, they had to redo the temperature calibration virtually pixel by pixel. Although I have never seen it stated as such, I think part of the calibration process included bumping the results against the supernova Ho value. The hard tension emerged after Planck measured temperatures against a hard calibrated baseline.

        1. NASA’s Lambda website used to have all of the WMAP results readily available. I thought it would make a good exercise for students in my cosmology class to compare before & after fits from release to release, but the fits to each release seem to have disappeared. The data are all there, as are the best-fit parameters, but I no longer see the files of the best-fit power spectra that they once had posted. I recall these not being great at predicting the part of the ensuing data release that hadn’t already been analyzed.

          As for the calibration, that has been a persistent annoyance. But the best fit H0 appears to vary smoothly from WMAP to Planck as the last fit L increases, so while the tension became significant with Planck’s smaller uncertainties, the trend away from the local value was already in progress.

  2. Though not sufficiently well-versed in the technical details, I found this exposition fascinating. One aspect of how your personal history seems to impact your reasoning is the care you take to accommodate the cold dark matter models and the patience to wait for more reliable spectroscopic data.

    I don’t believe this temperament is merely from a particular perspective on what constitutes “good science.” The conundrum of how MOND predicted results for your early data seems to have focused your attention on how to weigh the facts more carefully.

    It shows in how you explain yourself.

    Thank you, again, for a fascinating post.

    1. It did – and I recognized that there was a conundrum before I even realized MOND was a thing. That conundrum persists to this day without a satisfactory explanation in terms of dark matter, but no one in the field seems to be aware of it, much less concerned about it. Life is much easier if one isn’t so careful.

  3. Two papers from yesterday’s pre-prints:

    arXiv:2501.03325
    A multi-wavelength investigation of spiral structures in z>1 galaxies with JWST
    Boris S. Kalita et al.
    “In conclusion, we find a multi-faceted nature of spiral arms at z>1, similar to that in the local Universe. ”

    But this is not a problem, as also pre-published here:

    arXiv:2501.03217
    No evidence (yet) for increased star-formation efficiency at early times
    C. T. Donnan, J. S. Dunlop, R. J. McLure, D. J. McLeod, F. Cullen

    “Here we develop and test a simple theoretical model which shows that these observations are unsurprising, but instead are arguably as expected if one assumes a non-evolving halo-mass dependent galaxy-formation efficiency consistent with that observed today.”

    These are the papers Stacy is warning us about, and they are part of the grieving process, because a theory is only as good as its failures to predict.

    A decade ago at the conclusion of the CCC2 conference, we were asked to make our predictions about the JWST. None of the presenters (and this was a world-wide assembly of both mainstream and every imaginable alternative theorist) “expected” what we are seeing, except for one. I started arguing in 2002 that the distance modulus is wrong, and I did so based upon what was then a modest evolution in the light curves of supernovae Ia. I argued that the addition of the Dark Energy component was a bridge too far, given that the cosmic time constraints were already strained. I argued that since we can bisect the light curves of supernovae and see that they are not evolving, we should not conclude evolution of these key cosmic indicators is occurring in red-shifted space. It appeared more likely to me that one cosmic assumption, that the CMB tightly constrains the age of the universe, is the artifact that we are failing to grasp.

    The ‘little red dots’ are not surprising if you first conclude the relativistic distance modulus is an artifact of wishful thinking. They are galaxies in many phases of evolution. They appear to be small and compact because the relativistic distance modulus distorts the width parameter.

    This is what we learned as we gained an understanding of the Hubble deep space probes’ ‘distorted and twisted galaxies’ at the edge of the observable visible universe. They were normal galaxies viewed through a distorted lens. And this lesson, this shrunken part of the elephant, is now in plain view for all to see: little red dots, mature galaxies, that exist in cosmic time unmeasurable.

      1. Stacy, how would we know if there was a discrepancy between the observed time on cosmic scales and the local 4D time employed in models or observed here locally? Is the data telling us that there may be a discrepancy which isn’t yet accounted for in the chosen metric? I am using the term time, because for cosmic observations we need no distinction between time and distance, while locally we do need the distinction. However, it could be a spacetime discrepancy or curvature that appears in cosmic observations but not locally. Flatness and linearity only get you so far in my experience.

        1. Good question; I’m not sure how to answer. One thing that lends credence to the usual interpretation of time in the cosmic expansion is that the decay of supernova light curves shows the expected 1+z dilation. Much of the decline of Type Ia SN is due to the radioactive decay of Nickel-56, a known rate from nuclear physics, and the declines of low and high redshift SN match when the 1+z factor is accounted for.
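           To put a number on it (standard cosmological time dilation, nothing model-specific): an interval Δt_rest in the supernova’s rest frame is observed as Δt_obs = (1+z) Δt_rest, so a 20-day rest-frame decline at z = 1 stretches to 40 observed days, which is what the light curves show.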

    1. Dark matter and dark energy were introduced based on the assumption that General Relativity (GR) has an unlimited range of applicability across all levels of complexity (naive reductionism). However, this assumption is false for any theory.

      When proponents of the Lambda-CDM model claim it provides a coherent narrative for the universe’s origin and evolution, while MOND does not, they are once again assuming that GR has an unlimited complexity range of applicability; the universe’s story, including its “initial singularity,” is then fundamentally flawed because it is seen through a distorted lens.

      Any claims or results derived from applying GR beyond its validated range of applicability (simple gravitational systems) are questionable.

      1. By the way, all efforts at unification between General Relativity and quantum mechanics are also based on the assumption that GR has an unlimited complexity range of applicability. Since GR has a very limited range of applicability, all these efforts seem to be meaningless; even more so if we consider that space and time are classical emergent properties, and since GR is a theory of space and time, any notion of “quantum gravity” as an extension of GR is meaningless too.

        1. In MOND I get what you mean and agree. The same might not apply to MI (Milgromian Inertia), correct me if I’m wrong – that route might save the nice gauge theory picture and its explanatory power. It would IMHO really be a pity if quantum gravity theories have to be discarded as you suggest.

          1. I agree with your intent. Maybe such unexpected discoveries as what Stacy is presenting here are actually necessary in order to gain a better understanding of the relationship between QM and GR. I’d rather make that assumption than one where many decades of work is simply deemed wasted.
