As discussed in recent posts, the appearance of massive galaxies in the early universe was predicted a priori by MOND (Sanders 1998, Sanders 2008, Eappen et al. 2022). This is problematic for LCDM. How problematic? That’s always the rub.

The problem that JWST observations pose for LCDM is that there is a population of galaxies in the high redshift universe that appear to evolve as giant monoliths rather than assembling hierarchically. Put that way, it is a fatal flaw: hierarchical assembly of mass is fundamental to the paradigm. But we don’t observe mass, we observe light. So the obvious “fix” is to adjust the mapping of observed light to predicted dark halo mass in order to match the observations. How plausible is this?

Before we start trying to wriggle out of the basic result, note that doing so is implausible from the outset. We need to make the curve of growth of the largest progenitors “look like” the monolithic model. They shouldn’t, by construction, so everything that follows is a fudge to try to avoid the obvious conclusion. But this sort of fudging has been done so many times before, in so many ways (the “Frenk Principle” was coined nearly thirty years ago), that many scientists in the field have known nothing else. They seem to think that this is how science is supposed to work. This in turn feeds a convenient attitude that evades the duty to acknowledge that a theory is in trouble when it persistently has to be adjusted to make itself look like a competitor.
That noted, let’s wriggle!
Observational dodges
The first dodge is denial: somehow the JWST data are wrong or misleading. Early on, there were plausible concerns about the validity of some (but only some) photometric redshifts. There are enough spectroscopic redshifts now that this point is moot.
A related concern is that we “got lucky” with where we pointed JWST to start with, and the results so far are not typical of the universe at large. This is not quite as crazy as it sounds: the field of view of JWST is tiny, so there is no guarantee that the first snapshot will be representative. Moreover, a number of the first pointings intentionally targeted rich fields containing massive clusters, i.e., regions known to be atypical. However, as observations have accumulated, I have seen no indications of a reversal of our first impression, but rather lots of corroboration. So this hedge also now borders on reality denial.
A third observational concern that we worried a lot about in Franck & McGaugh (2017) is contamination by active galactic nuclei (AGN). Luminosity produced by accretion onto supermassive black holes (e.g., quasars) was more common in the early universe. Perhaps some of the light we are attributing to stars is actually produced by AGN. That’s a real concern, but long story short, AGN contamination isn’t enough to explain everything else away. Indeed, the AGN themselves are a problem in their own right: how do we make the supermassive black holes that power AGN so rapidly that they appear already in the early universe? Like the galaxies they inhabit, the black holes that power AGN should take a long time to assemble in the absence of the heavy seeds naturally provided by MOND but not dark matter.
An evergreen concern in astronomy is extinction by dust. Dust could play a role (Ferrara et al. 2023), but this would be a weird effect for it to have. Dust is made by stars, so we naively expect it to build up along with them. In order to explain high redshift JWST data with dust we have to do the opposite: make a lot of dust very early without a lot of stars, then eject it systematically from galaxies so that the net extinction declines with time – a galactic reveal sort of like a cosmic version of the dance of the seven veils. The rate of ejection for all galaxies must necessarily be fine-tuned to balance the barely evolving UV luminosity function with the rapidly evolving dark matter halo mass function. This evolution of the extinction has to coordinate with the dark matter evolution over a rather small window of cosmic time, there being only ~10^8 yr between z = 14 and 11. This seems like an implausible way to explain an unchanging luminosity density, which is more naturally explained by simply having stars form and be there for their natural lifetimes.

The basic observation is that there is too much UV light produced by galaxies at all redshifts z > 9. What we’d rather have is the stellar mass function. JWST was designed to see optical light at the redshift of galaxy formation, but the universe surprised us and formed so many stars so early that we are stuck making inferences with the UV anyway. The relation of UV light to mass is dodgy, providing a knob to twist. So up next is the physics of light production.
In our discussion to this point, we have assumed that we know how to compute the luminosity evolution of a stellar population given a prescription for its star formation history. This is no small feat. This subject has a rich history with plenty of ups and downs, like most of astronomy. I’m not going to attempt to review all that here. I think we have this figured out well enough to do what we need to do for the purposes of our discussion here, but there are some obvious knobs to turn, so let’s turn ’em.
Blame the stars!
As noted above, we predict mass but observe light. So the program now is to squeeze more light out of less mass. Early dark matter halos too small? No problem; just make them brighter. More specifically, we need to make models in which the small dark matter halos that form first are better at producing photons from the small amount of baryons that they possess than are their low-redshift descendants. We have observational constraints on the latter; local star formation is inefficient, but maybe that wasn’t always the case. So the first obvious thing to try is to make star formation more efficient.
Super Efficient Star Formation
First, note that stellar populations evolve pretty much as we expect for stars, so this is a bit tricky. We have to retain the evolution we understand well for most of cosmic time while giving a big boost at early times. One way to do that is to have two distinct modes of star formation: the one we think of as normal that persists to this day, and an additional mode of super-efficient star formation (SESF) at play in the early universe. This way we retain the usual results while potentially giving us the extra boost that we need to explain the JWST data. We argue that this is the least implausible path to preserving LCDM. We’re trying to make it work, so let’s anticipate the arguments Dr. Z would make.
This SESF mode of star formation needs to be very efficient indeed, as there are galaxies that appear to have converted essentially all of their available baryons into stars. Let’s pause to observe that this is pretty silly. Space is very empty; it is hard to get enough mass together to form stars at all: there’s good reason that it is inefficient locally! The early universe is a bit denser by virtue of being smaller; at z = 9 the expansion factor is only 1/(1+z) = 0.1 of what it is now, so the density is (1+z)^3 = 1,000 times greater. ON AVERAGE. That’s not really a big boost when it comes to forming structures like stars since the initial condition was extraordinarily uniform. The lack of early structure by far outweighs the difference in density; that is precisely why we’re having a problem. Still, I can at least imagine that there are regions that experience a cascade of violent relaxation and SESF once some threshold in gas density is exceeded that differentiates the normal mode of star formation from SESF. Why a threshold in the gas? Because there’s not anything obvious in the dark matter picture to distinguish the galaxies that result from one or the other mode. CDM itself is scale free, after all, so we have to imagine a scale set by baryons that funnels protogalaxies into one mode or the other. Why, physically, is there a particular gas density that makes that happen? That’s a great question.

There have been observational indications that local star formation is related to a gas surface density threshold, so maybe there’s another threshold that kicks it up another notch. That’s just a plausibility argument, but that’s the straw I’m clutching at to justify SESF as the least implausible option. We know there’s at least one way in which a surface density scale might matter to star formation.
Writing out the (1+z)^3 argument for the density above tickled the memory that I’d seen something similar claimed elsewhere. Looking it up, indeed Boylan-Kolchin (2024) does this, getting an extra (1+z)^3 [for a total of (1+z)^6] by invoking a surface density Σ that follows from an acceleration scale g: Σ = g/(πG). Very MONDish, that. At any rate, the extra boost is claimed to lift a corner of dark matter halo parameter space into the realm of viability. So, sure. Why not make that step two.
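To put numbers on both scalings, here is a minimal sketch (my own, assuming a0 = 1.2×10^-10 m/s^2 as the value of the acceleration scale g):

```python
# Minimal sketch of the two scalings above. The value a0 = 1.2e-10 m/s^2
# for the acceleration scale g is an assumption, not taken from the post.
import math

G = 6.674e-11              # gravitational constant [m^3 kg^-1 s^-2]
a0 = 1.2e-10               # assumed acceleration scale [m/s^2]

z = 9
print((1 + z)**3)          # mean density relative to today: 1000

Sigma = a0 / (math.pi * G)                   # surface density [kg/m^2]
Msun_per_pc2 = 1.989e30 / (3.086e16)**2      # one Msun/pc^2 in kg/m^2
print(Sigma / Msun_per_pc2)                  # ~270 Msun/pc^2
```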
However we do it, making stars super-efficiently is what the data appear to require – if we confine our consideration to the mass predicted by LCDM. It’s a way of covering the lack of mass with a surplus of stars. Any mechanism that makes stars more efficiently will boost the dotted lines in the M*-z diagram above in the right direction. Do they map into the data (and the monolithic model) as needed? Unclear! All we’ve done so far is offer plausibility arguments that maybe it could be so, not demonstrate a model that works without fine-tuning that woulda coulda shoulda made the right prediction in the first place.
The ideas become less plausible from here.
Blame the IMF!
The next obvious idea after making more stars in total is to just make more of the high mass stars that produce UV photons. The IMF is a classic boogeyman to accomplish this. I discussed this briefly before, and it came up in a related discussion in which it was suggested that “in the end what will probably happen is that the IMF will be found to be highly redshift dependent.”
OK, so, first, what is the IMF? The Initial Mass Function is the spectrum of masses with which stars form: how many stars of each mass, ranging from the brown dwarf limit (0.08 M☉) to the most massive stars formed (around 100 M☉). The number of stars formed in any star forming event is a strong function of mass: low mass stars are common, high mass stars are rare. Here, though, is the rub: integrating over the whole population, low mass stars contain most of the mass, but high mass stars produce most of the light. This makes the conversion of mass to light quite sensitive to the IMF.
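To see just how lopsided this is, here is a quick numerical sketch (my illustration, assuming a single-slope Salpeter IMF, dN/dm ∝ m^-2.35, and a crude main-sequence scaling L ∝ m^3.5):

```python
# Sketch: integrate an assumed Salpeter IMF (dN/dm ~ m^-2.35) with a rough
# main-sequence luminosity scaling (L ~ m^3.5) to show that low mass stars
# dominate the mass while high mass stars dominate the light.
import numpy as np

def integrate(y, x):
    """Trapezoidal integral, avoiding version-specific numpy helpers."""
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

m = np.logspace(np.log10(0.08), np.log10(100), 100_000)  # masses [Msun]
imf = m**-2.35                                           # Salpeter slope

low = m < 1.0
frac_mass = integrate((m * imf)[low], m[low]) / integrate(m * imf, m)
frac_light = integrate((m**3.5 * imf)[low], m[low]) / integrate(m**3.5 * imf, m)

print(f"stars below 1 Msun: {frac_mass:.0%} of the mass")    # ~64%
print(f"stars below 1 Msun: {frac_light:.4%} of the light")  # ~0.005%
```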
The number of UV photons produced by a stellar population is especially sensitive to the IMF as only the most massive and short-lived O and B stars produce them. This is low-hanging fruit for the desperate theorist: just a few more of those UV-bright, short-lived stars, please! If we adjust the IMF to produce more of these high mass stars, then they crank out lots more UV photons (which goes in the direction we need) but they don’t contribute much to the total mass. Better yet, they don’t live long. They’re like icicles as murder weapons in mystery stories: they do their damage then melt away, leaving no further evidence. (Strictly speaking that’s not true: they leave corpses in the form of neutron stars or stellar mass black holes, but those are practically invisible. They also explode as supernovae, boosting the production of metals, but the amount is uncertain enough to get away with murder.)
There is a good plausibility argument for a variable IMF. To form a star, gravity has to overcome gas pressure to induce collapse. Gas pressure depends on temperature, and interstellar gas can cool more efficiently when it contains some metals (here I mean metals in the astronomy sense, which is everything in the periodic table that’s not hydrogen or helium). It doesn’t take much; a little oxygen (one of the first products of supernova explosions) goes a long way to make cooling more efficient than a primordial gas composed of only hydrogen and helium. Consequently, low metallicity regions have higher gas temperatures, so it makes sense that gas clouds would need more gravity to collapse, leading to higher mass stars. The early universe started with zero metals, and it takes time for stars to make them and to return them to the interstellar medium, so voila: metallicity varies with time so the IMF varies with redshift.
This sound physical argument is simple enough to make that it can be done in a small part of a blog post. This has helped it persist in our collective astronomical awareness for many decades. Unfortunately, it appears to have bugger-all to do with reality.
If metallicity plays a strong role in determining the IMF, we would expect to see it in stellar populations of different metallicity. We measure the IMF for solar metallicity stars in the solar neighborhood. Globular clusters are composed of stars formed shortly after the Big Bang and have low metallicities. So following this line of argument, we anticipate that they would have a different IMF. There is no evidence that this is the case. Still, we only really need to tweak the high-mass end of the IMF, and those stars died a long time ago, so maybe this argument applies for them if not for the long-lived, low-mass stars that we observe today.
In addition to counting individual stars, we can get a constraint on the galaxy-wide average IMF from the scatter in the Tully-Fisher relation. The physical relation depends on mass, but we rely on light to trace that. So if the IMF varies wildly from galaxy to galaxy, it will induce scatter in Tully-Fisher. This is not observed; the amount of intrinsic scatter that we see is consistent with that expected for stochastic variations in the star formation history for a fixed IMF. That’s a pretty strong constraint, as it doesn’t take much variation in the IMF to cause a lot of scatter that we don’t see. This constraint applies to entire galaxies, so it tolerates variations in the IMF in individual star forming events, but whatever is setting the IMF apparently tends to the same result when averaged over the many star forming events it takes to build a galaxy.
Variation in the IMF has come up repeatedly over the years because it provides so much convenient flexibility. Early in my career, it was commonly invoked to explain the variation in spectral hardness with metallicity. If one looks at the spectra of HII regions (interstellar gas ionized by hot young stars), there is a trend for lower metallicity HII regions to be ionized by hotter stars. The argument above was invoked: clearly the IMF tended to have more high mass stars in low metallicity environments. However, the light emitted by stars also depends on metallicity; low metallicity stars are bluer than their high metallicity equivalents because there are fewer UV absorption lines from iron in their atmospheres. Taking care to treat the stars and interstellar gas self-consistently and integrating over a fixed IMF, I showed that the observed variation in spectral hardness was entirely explained by the variation in metallicity. There didn’t need to be more high mass stars in low metallicity regions; the stars were just hotter because that’s what happens in low metallicity stars. (I didn’t set out to do this; I was just trying to calibrate an abundance indicator that I would need for my thesis.)
Another example where excess high mass stars were invoked was to explain the apparently high optical depth to the surface of last scattering reported by WMAP. If those words don’t mean anything to you, don’t worry – all it means is that a couple of decades ago, we thought we needed lots more UV photons at high redshift (z ~ 17) than CDM naturally provided. The solution was, you guessed it, an IMF rich in high mass stars. Indeed, this result launched a thousand papers on supermassive Population III stars that didn’t pan out for reasons that were easily anticipated at the time. Nowadays, analyses of the Planck data suggest a much lower optical depth than initially inferred by WMAP, but JWST is observing too many UV photons at high redshift to remain consistent with Planck. This apparent tension for LCDM is a natural consequence of early structure formation in MOND; indeed, it is another thing that was specifically predicted (see section 3.1 of McGaugh 2004).
I relate all these stories of encounters with variations in the high mass end of the IMF because they’ve never once panned out. Maybe this time will be different.
Stochastic Star Formation
What else can we think up? There’s always another possibility. It’s a big universe, after all.
One suggestion I haven’t discussed yet is that high redshift galaxies appear overly bright from stochastic fluctuations in their early star formation. This again invokes the dubious relation between stellar mass and UV light, but in a more subtle way than simply stocking the IMF with a bunch more high mass stars. Instead, it notes that the instantaneous star formation rate is stochastic. The massive stars that produce all the UV light are short-lived, so the number present will fluctuate up and down. Over time, this averages out, but there hasn’t been much time yet in the early universe. So maybe the high redshift galaxies that seem to be over-luminous are just those that happen to be near a peak in the ups and downs of star formation. Galaxies will be brightest and most noticeable in this peak phase, so the real mass is less than it appears – albeit there must be a lot of galaxies in the off phase for every one that we see in the on phase.
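Here is a toy version of that argument (my construction; all the numbers are made up): the UV comes from a modest number N of short-lived O stars, N fluctuates Poissonically, and a UV-limited survey preferentially catches galaxies on an upward fluctuation:

```python
# Toy model: the number of UV-bright O stars present at any instant is
# Poisson-distributed. Selecting only galaxies above a UV threshold then
# biases the sample toward upward fluctuations, and the bias is largest
# for small galaxies (small mean N). All numbers here are made up.
import numpy as np

rng = np.random.default_rng(42)

def uv_selection_bias(mean_N, n_gal=200_000):
    N = rng.poisson(mean_N, n_gal)            # O stars present right now
    limit = mean_N + 2 * mean_N**0.5          # a 2-sigma UV survey cut
    bright = N[N >= limit]                    # the galaxies we notice
    return bright.mean() / mean_N             # apparent / true luminosity

for mean_N in (5, 50, 500):                   # small to large galaxies
    print(mean_N, round(uv_selection_bias(mean_N), 2))
# prints roughly 2.1, 1.3, 1.1: the smaller the galaxy, the more
# over-luminous the ones we detect appear.
```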

This makes a lot of sense to me. Indeed, it should happen at some level, especially in the chaotic early universe. It is also what I infer to be going on to explain why some measurements scatter above the monolithic line. That is the baseline star formation history for this population, with some scatter up and down at early times. Simply scattering from the orange LCDM line isn’t going to look like the purple monolithic line. The shape is wrong and the amplitude difference is too great to overcome in this fashion.
What else?
I’m sure we’ll come up with something, but I think I’ve covered everything I’ve heard so far. Indeed, most of these possibilities are obvious enough that I thought them up myself and wrote about them in McGaugh et al. (2024). I don’t see anything in the wide-ranging discussion at KITP that wasn’t already in my paper.
I note this because I want to point out that we are following a well-worn script. This is the part where I tick off all the possibilities for more complicated LCDM models and point out their shortcomings. I expect the same response:
That’s too long to read. Dr. Z says it works, so he must be right since we already know that LCDM is correct.
Triton Station, 8 February 2022
People will argue about which of these auxiliary hypotheses is preferable. MOND is not an auxiliary hypothesis, but an entirely different paradigm, so it won’t be part of the discussion. After some debate, one of the auxiliaries (SESF not IMF!) will be adopted as the “standard” picture. This will be repeated until it becomes familiar, and once it is familiar it will seem that it was always so, and then people will assert that there was never a problem, indeed, that we expected it all along. This self-gaslighting reminds me of Feynman’s warning:
The first principle is that you must not fool yourself and you are the easiest person to fool.
Richard Feynman
What is persistently lacking in the community is any willingness to acknowledge, let alone engage with, the deeper question of why we have to keep invoking ad hoc patches to somehow match what MOND correctly predicted a priori. The sociology of invoking arbitrary auxiliary hypotheses to make these sorts of excuses for LCDM has been so consistently on display for so long that I wrote this parody a year ago:
It always seems to come down to special pleading:

And the community loves LCDM, so we fall for it every time.

PS – to appreciate the paraphrased quotes here, you need to hear it as it would be spoken by the pictured actors. So if you do not instantly recognize this scene from the Blues Brothers, you need to correct this shortcoming in your cultural education to get the full effect of the reference.
The hard reality is that scientists, exactly like fanatic believers, will protect their worldviews (religion) from dissenting views (heretics) with almost the same tactics used by fanatic cultists.
And fighting fanatics becomes really tiresome after a short while. Engaging with such dogmatic resistance can feel futile, akin to Don Quixote tilting at windmills. At some point, the effort becomes exhausting, especially when the focus should be on following the evidence wherever it leads, rather than preserving the status quo.
For mainstream scientists General Relativity is a sacred cow with universal applicability and that’s the underlying problem.
MOND is terrifying for the people dreaming of a Theory of Everything or searching for “unification” because if GR is not universally valid these dreams become less “inspiring” or even meaningful.
Your persistence is amazing and an example to follow.
I didn’t see any discussion of time dilation as a possible explanation that has been brought up before with respect to observations that seemingly break LCDM. It seems more than plausible to me.
There are many time dilation thought experiments one can make to explain a wide range of observations that break LCDM. Time dilation may also be the reason why MOND “works”. It seems plausible to me that MOND doesn’t actually modify Newtonian Dynamics, but rather the perception of the dynamics.
In the case of the early universe observations, people assume that the local physical processes in the early universe were dominated by MOND, when it may be instead that the observation is what is dominated by MOND.
We can measure time dilation. It exists. Is it fully characterized? Apparently not. I suspect clocks in the early universe appear to run slower than here. The age of the universe is a fixed number of ticks of the clock. If the early universe is subject to time dilation to a distant observer such as ourselves, then the apparent age would be older than the local age. The light travel time is not affected by dilation. So at some scale, looking back X billion years (subtracting Y of our clock ticks) is the same as subtracting a lower number of early universe ticks. The early universe itself is older than we think it is simply because we haven’t subtracted the same number of early universe clock ticks due to time dilation.
But where does this dilation come from, we already accounted for everything, right? Absolutely not.
Incidentally, this needn’t break GR. We know for example that the Nariai Solution is an exact solution of the Einstein field equations for a Schwarzschild-deSitter spacetime. An interesting feature of this solution is that there appears a black hole horizon which may be the same size as, and nearly coincident with, a cosmic horizon. Surely there must be some very interesting time dilation effects when an observer looks at such a horizon!
You will not see a discussion of time dilation on this blog.
Alright. I would certainly be interested if you know of a better place to talk about it, or if you know of any past or active work in this area that you could direct me to. Also is the reason this won’t be discussed because it is too difficult to get to a meaningful level of understanding among a diverse group?
It’s not my thing, and I don’t expect it to be of relevance to the issues I’ve discussed. Perhaps I’m wrong about that, but I can’t chase down everything.
Totally understand. It is easy to speculate, but difficult to do all the work to chase down that speculation. It could be even worse if unrecognized complementarity is involved in sets of observations, because someone could do a lot of promising work, only to have someone “disprove” the assertion by bringing up something that has a subtle theoretical incompatibility which wasn’t realized. Keep up the effort though, you are definitely making a huge difference!
arXiv:2501.05924 (astro-ph)
[Submitted on 10 Jan 2025]
Comparing radial migration in dark matter and MOND regimes
R. Nagy, F. Janák, M. Šturc, M. Jurčík, E. Puha
…We investigated DM and MOND approaches. The outcome of the simulation shows that the radial migration is much more pronounced in the MOND regime compared to the DM one. Compared to the DM approach, in the MOND regime, we observe up to five times as many stars with a maximum change in the guiding radius of more than 1.5 kpc during the time interval from 2−6 Gyr.
I bring this to your attention.
Yes, that’s an intriguing result.
Could real observations of galaxies show five times as many stars with a maximum change in the guiding radius of more than 1.5 kpc during the time interval from 2−6 Gyr?
Are there any such observations?
To observe this, you’d have to count stars that have migrated. How does one know which stars have migrated?
Yes, I was wondering about this.
Could astronomers tell?
I always put off reading Sanders 1998 for some reason, but I finally looked and I’m now excited by one aspect of his conclusions, which is that there is a critical radius in MONDian cosmology outside of which MOND effects can’t happen, and that the MONDian domain starts out smaller than horizon scales, but it expands faster than the universe, and around z=3, the MONDian region has expanded to include the entire Hubble volume.
Part of what excites me about this, is that it seems ripe for holographic study i.e. people should try to make a model of emergent space-time in which this behavior occurs. Unfortunately, unlike the situation with respect to Anti de Sitter, there is no consensus on the correct approach to holography in De Sitter space. But there are quite a few ideas, and this seems a great opportunity to bring the theoretical frontier of quantum gravity research, into contact with the phenomenological frontier.
But before I jump to conclusions, does this idea of Sanders – MONDian domains expanding to the horizon around z=3 – still look viable, here in the 2020s?
As I recall, and it’s been a couple of decades since I worked on them directly, the MOND radius catching up with the cosmic horizon is what happens in the absence of a cosmological constant. In this limit, the universe becomes anisotropic on large scales at late times. There are hints of this, so yes, it is observationally possible. When there is a cosmological constant, things saturate in Bob’s analysis, and the scale of homogeneity sets in around 300 Mpc (the exact scale is sensitive to cosmic parameters). That also could be so; I’ve seen analyses in which the large scale structure is fractal up to around that scale, at which point the signal peters out. The data to do that sort of analysis should continue to improve.
This all assumes we can do what Bob showed was plausible: treat the whole universe as F[L]RW and small regions within it as MONDian. That seems to be a good approximation to start, but surely can’t be the final answer. Still, it is worth noting that such a universe is already very close to the de Sitter limit, and that is also within the realm of possibility.
Speaking about holography, maybe the approach shouldn’t be de Sitter. There is one possibility that I find more intuitive and physically meaningful, at least in a speculative way.
“However, in the identified spacetime [Schwarzschild-de Sitter], there does exist a globally stationary [quantum] state in the case of κ1 = κ2 (sometimes referred to as the Nariai solution) which can be obtained by “Euclidean methods”. https://journals.aps.org/prd/abstract/10.1103/PhysRevD.111.025013
The part I speculate about is that the Schwarzschild and the de Sitter horizons should be treated as complementary horizons.
This to me creates a dual interpretation of the universe as both static and dynamic. Susskind relates the condition as the “Inside Out Transition”, but he considers the processes that transform one horizon into the other. I don’t see why we wouldn’t consider them in a complementary relationship, since this principle is already fundamental to QM.
Thank you again for your explanations. My heart sank when I visited the Quanta link and realized that much of this was deliberation for the bobble-headed happy faces in their Geach-Kaplan reality
https://en.m.wikipedia.org/wiki/Nonfirstorderizability#Geach-Kaplan_sentence
Other links led me to Putnam’s “no miracle” argument. It is to your credit that you specifically combine this with a priori prediction for hypotheses.
Most people simply refuse to accept that deep biological time creates a problem for scientific claims unsupported by empirical evidence. It is called the EAAN,
https://en.m.wikipedia.org/wiki/Evolutionary_argument_against_naturalism
Your “well worn script” plays into this as well. Clearly, if “everything is physics,” then “string theory proves naturalism,” right? lol.
Oh! Maybe some physics isn’t REAL Physics. That is the problem with “in principle argumentation” from generic scientific principles. Yet, that “well worn script” proves those arguments too —until it can no longer do so.
The unfortunate thing about EAAN is that it gets dragged through the mud because of reactionary rebuttals to postmodernism and theism (Yes, anti-theists are not agnostic). Deep biological time is still among our “best science” and reactionary rebuttals are not empirical data.
The “no miracles” stance clearly shows why deduction must be constrained by statistical inference and more mundane observations. But it, too, must be applied in the context of good science, as in your explanation. Typically, its application is careless and is an example of overreach with respect to Feynman’s first principle.
The acid test for good science is giving the same emphasis to results that go against one’s view. The DM paradigm often ignores observations that don’t fit it, or tries to explain them away. MOND supporters need to make sure they don’t focus too much on its successes, and end up doing something similar. MOND does far better than DM overall, particularly because of the key element of prediction vs postdiction, but the reality is there are places where we need DM, and places where we need MOND. The ‘best view we have’ involves both, which is why hybrid theories have appeared recently. But even if we’re allowed to use both, that still doesn’t fix everything – it’s clear that something else is going on. For one example of many, there may be ‘giant structures’ that connect the universe up across very large scales – galaxy spin-filament alignments are hard to explain
https://www.vice.com/en/article/new-observations-reveal-how-giant-structures-in-space-connect-the-universe-and-form-galaxies . As I’ve said, in RT the cause and effect sequence is reversed: the galaxies come first, then emit what becomes the filaments, much of which are thought to be DM.
And evidence has been growing for an RAR in clusters, which last year crossed a line into a ‘distinct’ RAR. On some graphs, with the full gamut of accelerations, this can look like a few points slightly off the line, but it means a lot more than that. The RAR for clusters uses a different ‘constant’, a++ instead of a0, and yet in both RARs the transition starts at the same place, so they look connected. This clue should not be underemphasised by MOND supporters (particularly if they’re the main discoverer of the RAR). It makes MOND look like just one area of something wider. And vertical velocities in galaxies seem to land between Milgrom and Newton. So all in all, it’s impossible to see the present situation as one group arguing for the truth, the other group in denial. It’s often like that if you zoom in on a particular issue, but overall, as always, there’s more to find out.
Certainly I agree that “the acid test for good science is giving the same emphasis to results that go against one’s view” since that’s exactly how I got to where I am, having started out as a true believer in dark matter. But then you accuse me of under-emphasizing results that I’ve obtained myself (with collaborators). Pray tell, how do we know what the “right” amount of emphasis is?
You say the situation is not clear cut, yet seem to think certain observations that are not clear cut actually are. Clusters for example: indeed, the data for clusters are offset from the galaxy RAR. That much is clear cut. But you go further to assert that there is a definitive acceleration scale a++. That is not clear cut. We can fit it that way, and I’ve been involved in work that does so, but the inference of a separate scale a++ is entirely – 100% – a consequence of the choice of interpolation function that forces the fit to reach the Newtonian regime rapidly. There is nothing about the data themselves that require a new scale a++; that’s an additional layer of interpretation that is possible but not required. I’ve written about this here before; there is also a nice summary of the conundrum by Mark Huisjes at https://continentalhotspot.com/2024/08/16/26-does-the-bullet-cluster-disprove-mond/ and https://continentalhotspot.com/2024/08/20/27-what-is-the-mond-cluster-conundrum/
Similarly for the vertical motions in the Milky Way falling between Newton and MOND. I’m the one who pointed that out. I consider it to be a serious problem. But there appear to be lots of non-equilibrium phenomena in the Milky Way whose impact on this we do not yet understand, so how much emphasis should we choose to put on that? I would put a lot of emphasis on it if I were certain about what it meant, but I’m not. Accepting that there are things we don’t really understand is an acid test for being a good scientist; we often go down the wrong path when our overconfidence directs us along paths of overly-specific interpretations.
As I’ve said many times before, how we interpret these data depends on how we weigh the discordant lines of evidence. That’s inevitably a judgement call, but before we get that far we have to agree on what the data actually show.
On the other hand, there are some results that are clear cut. Are those over-emphasized?
I probably should have said ‘one test for….’ not, ‘the acid test’, because as you say, emphasis is not something that can be pinned down. I always felt you used good science to get to a view in which MOND can’t be swept under the carpet, simply because it does better than everything else. And I know you came to that from what was more or less the opposite view.
I’m sure you’re right about clusters – I know there’s a lot more scatter, but you’ve made other points about the data that I didn’t know about, and your understanding of the data is beyond mine, to put it mildly. At the time I just read three or four papers on clusters, including the one you contributed to which called it a distinct RAR, and they talked about a different acceleration scale, a++, underneath the RAR for clusters. That idea seemed to be on the map, though one of many ideas, and in a new area of study.
I stand further back than some people, and try to see the wood more than the trees, which has advantages and disadvantages – but I think we should have people looking from different viewpoints, who work together. To me, the idea from RT/PSG that the filaments of the cosmic web were emitted, and came after galaxies, not before, is a good example of this. From my point of view I can see very good reasons to believe that – it’s supported by the near proof, for one thing – but I don’t know that area well enough to know how to support it directly from the data. Hopefully over the next few years the relationship between galaxies and filaments, which is a mystery at present, will become more clear, and with some new missions coming up, the clues will come into focus.
You will not find a single theory that accommodates all scenarios (hierarchical levels), and that’s the underlying assumption that is intrinsically wrong.
A complex set of quantum objects transitions into a classical object once a certain complexity threshold is reached. The threshold theorem in quantum computing is a practical demonstration of this principle. Attempting to give quantum mechanics universal applicability leads to nonsensical conclusions, such as the many-worlds interpretation. Complexity is a boundary for quantum mechanics, quantum behavior.
The same issue arises with General Relativity (GR). Beyond a certain complexity threshold, GR fails to provide meaningful predictions or explanations without resorting to speculative constructs like dark matter and dark energy. GR’s failure at the level of galaxy-scale complexity without dark matter should be sufficient to question its applicability beyond simpler systems. MOND success at galaxy level complexity only reaffirms GR limitations.
It’s important to note that GR and Newtonian gravitational theory share the same domain of applicability: simple gravitational systems. GR is a refinement of Newtonian gravity but does not extend its domain of applicability; both operate within the same hierarchical level.
This seems to be hard to accept, but different hierarchical levels require different approaches (theories) because each level has irreducible emergent properties that are absent in the levels beneath it.
Complexity acts as a boundary for the range of applicability of any theory, and there’s no escape from it. This limitation is supported not only by results in formal mathematics but also by these ongoing struggles. Moreover, the scalability limits of practical applications further reflect the same fundamental principle: complexity imposes a boundary for applicability.
PS. I only meant MOND supporters should make sure they don’t do a little of what DM supporters do – obviously I think the open minded side of things is in acknowledging MOND’s results.
Sure. But there’s no way to discuss this without going too hard for some people or not enough for others. There’s too much of a distribution in both attitude and knowledge. All the while the scientific audience that should be most engaged is instead covering their ears going “LA LA LA can’t hear you!”
Heck, that’s the healthy version. I once gave a talk at Wayne State where I mostly talked about data and dark matter, only mentioning MOND at the end. A young particle physicist lost her shit: “It is way too early to even consider something so radical!” as if I hadn’t been working on it since she was in middle school. That level of ignorance remains endemic to the field and I’m tired of waiting for them to get over themselves and start to catch up.
You have pointed out many times that proponents of DM can move the DM around and effectively claim after the fact that observations are/were consistent with DM abundance. So I have a question, and I’m being kind of serious because I think it could really shock them if it were true, are they able to move all the dark matter and put it behind a Schwarzschild horizon and still agree with observation? This is different from saying DM doesn’t exist. It is saying you just didn’t know where to put it. And wouldn’t it be nice to go ahead and tell them where they can put it?
Black holes are one conceivable form of dark matter, but it can’t be just one; they have to be arranged just so in a distribution that we constrain observationally. So no, I don’t think it would work to move it around in the way you suggest.
I’m suggesting something a little different I think. For example, in the case of a model universe that has a Schwarzschild horizon which is complementary to a de Sitter horizon, all of the DM would appear as a distant background to every observation, so long as the observation was looking for its effect.
Sounds sorta like mirror matter, but in either case, I don’t know how we could tell.
That is a good point, and I had not heard of mirror matter before. It is now on my radar, though I don’t think it would matter too much if a positive result for what I had suggested could not be distinguished from hypothetical mirror matter. The point would have been made, and the shock should still be there.
This is because I am suggesting moving the dark matter outside of the observable universe. If observational data can still be made consistent by doing that, then that is going to be quite a shocking result indeed.
The motivation for suggesting this might be so is that perhaps Dark MATTER and Modified Newtonian DYNAMICS are complementary, and relatable to the suggested complementary nature of the cosmic horizon. The DM representing a depiction of a static universe, where the observer is located outside of a Schwarzschild cosmic horizon. MOND representing a depiction of a dynamic universe where the observer is located inside a de Sitter cosmic horizon.
It would be no easy task, but just the notion that the DM could be moved outside the observable universe, and yet have the same descriptive power, would be a considerable breakthrough.
Having no idea myself how difficult it would be to model all the dark matter as residing in a background beyond the observable universe, I used a Schwarzschild calculator just to do a rough insanity check. It actually checks out fairly well for back of the envelope.
I got a gravity of about 1.0×10^-10 m/s^2 corresponding to a Schwarzschild radius of 46 Mly with a mass of about 3×10^53 kg. I really wasn’t expecting an approximation of a0 to pop out of that, but I guess it would have to be close if the original suggestion made any sense. What do you think about that?
Sorry, I made a typo. I used 46 Gly as the radius.
Yes, the scale a0 in the kinematic data is also present in cosmology. There are various ways to phrase this; the first pointed out was a0 ~ c H0 but you point out another way to see this coincidence. Coincidence or clue?
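For anyone who wants to reproduce the arithmetic, a quick sketch (SI values assumed, using the commenter’s M and r):

```python
# Back-of-the-envelope check of the numbers above, in SI units.
G = 6.674e-11            # m^3 kg^-1 s^-2
c = 2.998e8              # m/s
ly = 9.461e15            # one light year in meters
H0 = 70e3 / 3.086e22     # 70 km/s/Mpc in 1/s

M = 3e53                 # kg (commenter's mass)
r = 46e9 * ly            # 46 Gly in meters

print(G * M / r**2)                   # ~1.1e-10 m/s^2, close to a0
print(2 * G * M / c**2 / (1e9 * ly))  # Schwarzschild radius ~47 Gly
print(c * H0)                         # ~6.8e-10 m/s^2; a0 ~ c*H0 to within 2*pi
```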
Thank you for the Mark Huisjes site links. Just the ticket for those of us needing gentle introductions.
Thank Mark! He’s a frequent contributor here.
Glad you liked it 😁! More articles will be coming soon about tides and dwarf galaxies so subscribe if you want to get a notification!
Do you have thoughts on the recent paper claiming that Dark Energy doesn’t exist? Would this have a measurable effect on structure growth and BAO in the early universe?
There are always some, so I’m not sure which you mean. The Dark Energy Survey was claiming a time-variable dark energy. Another claimed some alternative cosmology fit better, but that only analyzed supernova data and didn’t attempt to explain all the other cosmic observables that require a cosmological constant (within the framework of GR).
My gosh this is one of the most interesting posts ever. I was quite excited to look over Mark Huisjes two articles referenced by Stacy up thread that deal with MOND’s ability to deal with galaxy clusters. I had started reading one of those posts back in the summer, but outdoor activities became a distraction. As someone dabbling in homebrew theory ideas, and something of a MOND purist (hoping there is some way to overcome the roughly 2-to-1 disparity in MOND’s prediction of mass in clusters), I was keenly interested in what Mark had to say. I’m just starting to read the 2nd article, but was greatly encouraged by the 1st article that MOND’s issue with mass in clusters is not insurmountable.
The book referenced in the title to this blog entry by John Green is delightful and well worth the read.
It looks like your YT link at the bottom might be wrong. That one took me to a Pink Panther clip. You probably want something like this: https://youtu.be/ftt4f2H3GDs
btw, can you come up with a way to make a cosmology acronym for SCMODS?
First link (spoken by the pictured actors) is to that; the PP link was intentional.
When not used by law enforcement, SCMODS is more of a detector technology – Scientific Complementary Metal–Oxide–Doped-Semiconductor
I never cared to speculate much about wormholes, other than that they are mathematical objects and a theoretical curiosity. However, now I am beginning to see a use for them. If a0 is the surface gravity of a Nariai black hole, then how are we able to measure a0 in galaxy dynamics? Well, in the Nariai spacetime there are two interrelated black holes. One that is the size of the cosmic horizon, and the other one that sits at a pole of the spacetime but can be of negligible size. The two are connected mathematically by a wormhole. Contrary to what Susskind has presented, it seems more likely to me that the observer resides at this pole. In this case, a0 may be felt both by the observer and at the surface of the Nariai black hole at the horizon – and is that a realization of the wormhole?
Question about the Tolman test: why is there a square of (1+z) attributed to the object being “closer” than it actually is and thus appearing bigger?
I would argue that the object we see (at the time that the light we see emanated from it) was exactly the size we see now at that specific time it sent the photons we observe now? Or is this a Euclidean metric with velocities from Hubble’s law instead of expanding space?
Then reasoning from Euclidean metric with velocities from Hubble, if time dilation from the gravity of a massive black hole at 46Gly distance (JB’s idea) plays a role, this gives an additional factor of (1+z), following t_0 = t_f sqrt(1-(v_e/c)^2) = t_f / (1+z). Here t_f is the time we observe while t_0 is the time passed there, and v_e is the escape velocity there. So we observe a factor (1+z) more time than what passed there, dilating the surface brightness by an extra factor of (1+z).
Is this analysis any good?
In the RW metric, it is convenient to define a luminosity distance DL and an angular size distance DA in analogy to the distance D in the familiar Euclidean geometry. In Euclidean geometry D = DL = DA, and an object of size R subtends an angle T = R/D. In RW, it being a different metric, the relations to Euclid are DL = D(1+z) and DA = D/(1+z). Surface brightness then gets dimmed relative to our Euclidean proclivities by the ratio (DA/DL)^2 = 1/(1+z)^4.
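In code form, for anyone who wants to play with it (a sketch of exactly the relations just stated):

```python
# Tolman dimming: in RW, D_L = D(1+z) and D_A = D/(1+z), so surface
# brightness scales as (D_A/D_L)^2 = (1+z)^-4 relative to Euclid.
def tolman_dimming(z):
    DL_over_D = 1 + z          # luminosity distance / Euclidean distance
    DA_over_D = 1 / (1 + z)    # angular size distance / Euclidean distance
    return (DA_over_D / DL_over_D)**2

for z in (1, 3, 9):
    print(z, tolman_dimming(z))   # 1/16, 1/256, 1/10000
```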
Thanks for the clarification!
Essentially what I posted is a calculation that in a Euclidean metric, with gravitational time dilation due to gravity in all directions toward the edge of the observable universe, a Tolman test result of 1/(1+z)^3 is expected.
Here is a recap Stacy should you care to chase down any of this.
For lack of a better name, the Nariai-UT (Underlying Theory) spacetime should be interpreted as having a complementary cosmic horizon that comprises both the Schwarzschild (Black Hole) and de Sitter (Big Bang) horizons.
These stretched horizons are connected by wormholes to two complementary and indistinguishable observers sitting at the poles of the spacetime.
The Nariai Black Hole contains the Dark Matter in a natural way, since this matter is outside the observable universe, but exists as a gravitational background to every observation.
The surface gravity of this Nariai Black Hole is predicted to be the MOND acceleration a0. This acceleration enters measurements via connection to the observer through a wormhole to the stretched horizon containing the Nariai Black Hole.
This connection to a0 measurement demonstrates a physical realization of a wormhole. It connects Mach’s Principle with the Holographic Principle, and may be related to the source of inertia.
That sounds cool, and would be great if it does indeed connect Mach with holography, which at least sounds plausible and is of the right acceleration scale. But how does MOND behavior emerge? It isn’t just the acceleration scale, but Newton on the high side and asymptotic MOND on the low side.
The MOND behavior may not emerge at all in the systems that are being studied. We would need to take careful account of the metric, and what might be considered observational distortions. The clue to me is that we can’t yet determine if MOND should modify gravity, or inertia. So is the observer able to choose how MOND emerges?
It could be either of those, yes. But I don’t think there is a choice for the observer in the sense of a frame of reference.
Yes, that seems true. The choice may be to modify either gravity or inertia, then properly account for the effects of relativity in the metric that exhibits the cosmological complementarity, such as the proposed Nariai-UT. That is an important step as you point out, but I would focus as well on the origin of a0 itself. If this is a physical realization of a wormhole, that is a much bigger discovery IMO.
Did my answer make any sense that MOND may not emerge within the systems? Do you think that an explanation could be that it emerges in the way it does because of the metric?
I think your question is at the heart of the issue, and will eventually demonstrate whether this new theory fails.
If the DM can be set as the gravitational background in DM analysis with appropriate metric choice, then why can’t MOND acceleration be set as a dynamic property of the observer?
If you can’t disprove that, then the next logical step is try to connect the phenomena, and I hope that is where you can assert that you may have discovered a physical realization of a wormhole. How does that sound?
I don’t know. I think we need a period of chaos in which there is an active community of people trying out these ideas. None of us are Einstein, and he got a lot of help from Lorentz and Hilbert. We lack the critical mass to make progress.
In the paper that I referenced earlier in this thread, there is an additional degree of freedom given to the observer in the Nariai spacetime that is an interesting distinction. It would be a good idea to account for the mathematical descriptions relating the observer to what is observed in the inner regions of the spacetime, as well as this additional qubit assigned to the observer. That seems like a very fundamental distinction. Maybe it is a way of accounting for the entropy of complementarity? I don’t know, but it seems like a good reference point for the necessary departures from the standard paradigm.
Hello,
Forgive me for going off topic; my question relates to a previous post. Has an explanation compatible with the existence of dark matter halos been given of the observations published in “Radial acceleration relation of galaxies with joint kinematic and weak-lensing data” by T. Mistele et al., Journal of Cosmology and Astroparticle Physics, Vol. 2024 (2024), doi.org/10.1051/0004-6361/202040108?
Thank you for your communication efforts on this blog; they are very useful!
Jean
I am not aware of a viable explanation, or even a serious attempt at one.
The most natural thing to expect in LCDM is a contribution at large radii from the so-called two-halo term. Basically, the mass of everything else out there starts to contribute to the lensing signal. One can make a statistical estimate of this, and there are claims that it works out (see the discussion in https://tritonstation.com/2024/01/08/discussion-of-dark-matter-and-modified-gravity/) in an average sense. That’s why we concentrated on isolated galaxies: to get at the one-halo term. There will still be other stuff out there, but we can minimize that, and the correction becomes small. The signal we see is not consistent with LCDM, and I have yet to hear an explanation of how it might be, beyond hoping that the two-halo contribution somehow works out just so.
The other aspect to consider is whether the dark energy can be described by either the gravity of the Nariai Black Hole, or the associated observer’s acceleration.
One last hypothetical statement Stacy, because I feel like I am just babbling on here and you already know all of this speculation.
The way I interpret this theory is that the observer chooses somehow to be outside the black hole, in which case the metric acceleration comes through the wormhole and emits at the observer, or the observer chooses to be inside the expanding universe in which case the metric expansion also comes through the wormhole, flowing out of the horizon and into the observer.