As predicted, JWST has been seeing big galaxies at high redshift. There are now many papers on the subject, ranging in tone from “this is a huge problem for LCDM” to “this is not a problem for LCDM at all” – a dichotomy that persists. So – which is it?
It will take some time to sort out. There are several important aspects to the problem, one of which is agreeing on what LCDM actually predicts. It is fairly robust at predicting the number density of dark matter halos as a function of mass. To convert that into something observable requires understanding how baryons find their way into dark matter halos at early times, how those baryons condense into regions dense enough to form stars, what kinds of stars form there (thus determining observables like luminosity and spectral shape), and what happens in the immediate aftermath of early star formation (does feedback shut off star formation quickly or does it persist or is there some distribution over all possibilities). This is what simulators attempt to do. It is hard work, and they are a long way from agreeing with each other. Many of them appear to be a long way from agreeing with themselves, as their answers continue to evolve – sometimes because of genuine progress in the simulations, but sometimes in response to unanticipated* observations.
Observationally, we can hope to measure at least two distinct things: the masses of individual galaxies, and their number density – how many galaxies of a particular mass exist in a specified volume. I have mostly been worried about the first issue, as it appears that individual galaxies got too big too fast. In the hierarchical galaxy formation picture of LCDM, the massive galaxies of today were assembled from many smaller protogalaxies over an extended period of time, so big galaxies don’t emerge until comparatively late: it takes about seven billion years for a typical bright galaxy to assemble half its stellar mass. (The same hierarchical process is accelerated in MOND so galaxies can already be massive at z ≈ 10.) That there are examples of individual galaxies that are already massive in the early universe is a big issue.
How common should massive galaxies be? There are always early adopters: objects that grew faster than average for their mass. We’ll always see the brightest things first, so is what we’re seeing with JWST typical? Or is it just the bright tip of an iceberg that is perfectly reasonable in LCDM? This is what the luminosity function helps quantify: just how many galaxies of each mass are there? If we can quantify that, then we can quantify how many we should be able to see with a given survey of specified depth and sky coverage.
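For readers who like to see the machinery, the luminosity function (or its stellar mass counterpart) is conventionally parameterized with a Schechter function. Here is a minimal Python sketch of that bookkeeping; the parameter values and survey volume below are made-up illustrations, not the measurements or predictions discussed in this post:

```python
import numpy as np

def schechter_per_dex(mass, phi_star, log_mstar_char, alpha):
    """Schechter function: comoving number density per dex of stellar mass,
    phi(M) = ln(10) * phi_star * (M/M*)^(alpha+1) * exp(-M/M*)."""
    x = mass / 10**log_mstar_char
    return np.log(10) * phi_star * x**(alpha + 1) * np.exp(-x)

# Purely illustrative parameters (NOT the measured or predicted values discussed here).
masses = np.logspace(8, 11.5, 200)                      # stellar mass in Msun
phi = schechter_per_dex(masses, phi_star=1e-4,          # Mpc^-3 dex^-1
                        log_mstar_char=10.0, alpha=-1.6)

# Given a survey volume, the integral above a mass limit says how many such
# galaxies we should expect to find.
volume = 1e5                                            # Mpc^3, hypothetical survey
sel = masses > 1e10
n_expected = np.trapz(phi[sel], np.log10(masses[sel])) * volume
print(f"Expected number of galaxies above 1e10 Msun: {n_expected:.1f}")
```

That last number is the crux: given a luminosity function and a survey of known depth and area, the expected count of bright galaxies follows directly, and that is what gets compared to what JWST actually finds.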
Astronomers have been measuring the galaxy luminosity function for a long time. Doing so at high redshift has always been an ambition, so JWST is hardly the first telescope to contribute to the subject. It is the newest and best, opening a regime where we had hoped to see protogalactic fragments directly. Instead, the first thing we see is galaxies bigger than we expected (in LCDM). This has been building for some time, so let’s take a step back to provide some context.
Steinhardt et al. (2016) pointed out what they call “the impossibly early galaxy problem.” They quantified this by comparing the observed luminosity function in various redshift bins to that predicted by LCDM. We’ve discussed their Fig. 1 before, so let’s look now at their Fig. 4:

In a perfect model, the points (data) would match the lines (theory) of the same color (redshift). This is not the case – observed galaxies are persistently brighter than predicted. Making that prediction is subject to all the conversions from dark matter mass to stellar mass to observed luminosity we mentioned above, so they also show what they expect and what it would take to match the data. These are the different lines in the top panel. There is a lot of discussion of this in their paper, which boils down to this: these lines are different, and we cannot plausibly make them the same.
The word “plausibly” is doing a lot of work in that last sentence. Just because one set of authors finds something to be impossible (despite their best efforts) doesn’t mean anyone else accepts that. We usually don’t, even when we should**.
It occurs to me that not every reader may appreciate how redshift corresponds to cosmic time. So here is a graph for vanilla LCDM parameters:

Things don’t change much if we adopt slightly different cosmologies: this aspect of LCDM is well established. We used to think it would take at least a couple of billion years to form a big galaxy, so anything at z > 3 is surprising from that perspective. That’s not wrong, as there is an inverse relation between age and redshift, with increasing redshifts crammed into an ever smaller window of time. So while z = 5 and 10 sound very different, there is only about 700 Myr between them. That sounds like a long time to you and me, but the sun will only complete 3 orbits around the Galaxy in that time. This is why it is hard to imagine an object as large as the Milky Way starting from the near-homogeneity of the very early universe then having time to expand, decouple, recollapse, and form into something coherent so “quickly.” There is a much larger distance for material to travel than the current circumference of the solar circle, and not much time in which to do it. If we want to get it done by z = 10, there is less than 500 Myr available – about two orbits of the sun. We just can’t get there fast enough.
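If you want to reproduce numbers like these yourself, here is a minimal sketch using astropy’s built-in Planck 2018 cosmology as a stand-in for vanilla LCDM; the ~230 Myr solar orbital period is a round-number assumption:

```python
from astropy.cosmology import Planck18 as cosmo   # vanilla LCDM (Planck 2018 assumed)
import astropy.units as u

for z in (3, 5, 10, 16):
    print(f"z = {z:2d}  ->  age of the universe ~ {cosmo.age(z).to(u.Myr):.0f}")

# Time available between z = 10 and z = 5:
dt = (cosmo.age(5) - cosmo.age(10)).to(u.Myr)
print(f"From z = 10 to z = 5: {dt:.0f}")

# Number of solar orbits in that window (~230 Myr per orbit, a round-number assumption):
print(f"~{(dt / (230 * u.Myr)).value:.1f} solar orbits")
```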
We’ve quickly become jaded to the absurdly high redshifts revealed by JWST, but there’s not much difference in cosmic time between these seemingly ever higher redshifts. Very early epochs were already being probed before JWST; JWST just brings them into excruciating focus. To provide some historical perspective about what “high redshift” means, here is a quote from Schramm (1992). The full text is behind a paywall, so I’ll just quote a relevant paragraph:
Pushing the opposite direction from the “zone of mystery” epoch [the dark ages] between the background radiation and the existence of objects at high redshift is the discovery of objects at higher and higher redshift. The higher the redshift of objects found, the harder it is to have the slow growth of Figure 5 [SCDM] explain their existence. Some high redshift objects can be dismissed as statistical fluctuations if the bulk of objects still formed late. In the last year, the number of quasars with redshifts > 4 has gone to 30, with one having a redshift as large as 4.9… While such constraints are not yet a serious problem for linear growth models, eventually they might be.
David Schramm, 1992
Here we have a cosmologist already concerned 30 years ago that objects exist at z > 4. Crazy, that! Back then, the standard model was SCDM; one of the reasons to switch to LCDM was to address exactly this problem. That only buys us a couple of billion years, so now we’re smack up against the same problem all over again, just shifted to higher redshift. Some people are even invoking statistical fluctuations: same as it ever was.
Consequently, a critical question is how common these massive galaxies are. Sure, massive galaxies exist before we expected them. But are they just statistical fluctuations? This is a question we can address with the luminosity function.
Here is the situation just before JWST was launched. Yung et al. (2019) made a good faith effort to establish a prior: they made predictions for what JWST would see. This is how science is supposed to work. In the figure below, I compare that to what was known (Stefanon et al. 2021) from the Spitzer Space Telescope, in many ways the predecessor to JWST:

If you just look at the mass functions in the left panel, things look pretty good. This is one of the dangers of the logarithmic plots necessary to illustrate the large dynamic range of astronomical data: large differences may look small in log-log space. So I also plot the ratio of densities at right. There one can see a clear excess in the number density of high mass galaxies. There are nearly an order of magnitude more 10¹⁰ M☉ galaxies than expected at z ≈ 8!
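To illustrate how easily an order-of-magnitude offset hides in a log-log plot, here is a toy sketch with made-up mass functions (not the real curves in the figure), shown both directly and as a ratio:

```python
import numpy as np
import matplotlib.pyplot as plt

logM = np.linspace(8, 11, 100)

# Two made-up mass functions that differ by almost a dex at the high-mass end.
predicted = 10**(-2.0 - 0.8 * (logM - 8) - 10**(logM - 11))
observed = predicted * 10**(0.9 * np.clip(logM - 10, 0, None))

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 3.5))

# Left: both curves in log-log space, where the offset looks deceptively small.
ax1.plot(logM, np.log10(predicted), label="predicted")
ax1.plot(logM, np.log10(observed), label="observed")
ax1.set_xlabel("log M* [Msun]")
ax1.set_ylabel("log phi [Mpc^-3 dex^-1]")
ax1.legend()

# Right: the ratio, where the same near-order-of-magnitude excess is obvious.
ax2.plot(logM, observed / predicted)
ax2.set_xlabel("log M* [Msun]")
ax2.set_ylabel("observed / predicted")

plt.tight_layout()
plt.show()
```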
For technical reasons I don’t care to delve into, it is difficult to get the volume estimate right when constructing the luminosity function. So I can imagine there might be some systematic effects to scale the ratio up or down. That wouldn’t do anything to explain the bump at high masses, and it is rather harder to get the shape wrong, especially at the bright end. The faint end of the luminosity function is the hard part!
The Spitzer data already probed the early universe before JWST reported results. As those results have come in, it has become possible to construct luminosity functions at very high redshift. Here are some measurements from Harikane et al. (2023), Finkelstein et al. (2023), and Robertson et al. (2023), together with revised predictions from Yung et al. (2024).

Again, we see that there is an excess of bright galaxies at the highest redshifts.
As we look to progressively higher redshift, the light we observe shifts from familiar optical bands to the ultraviolet. This was a huge part of the motivation to build JWST: it is optimized for the infrared, so we can observe the redshifted optical light as our eyes would see it. Astronomers always push to the edge of what a telescope can do, so we start to run into this problem again at the highest redshifts. The mapping of ultraviolet light to stellar mass is one of the harder tasks in stellar population work, much less mapping that to a dark matter halo mass. So one promising conventional idea is “the up-scattering in UV luminosity of small, abundant halos due to stochastic, high efficiency star formation during the initial phases of galaxy formation (unregulated star formation)” discussed$ by Finkelstein et al. (2023). I like this because, yeah, we expect lots of little halos, star formation is messy and star formation during the first phases of galaxy formation should be especially messy, so it is easy to imagine little halos stochastically lighting up in the UV. But can this be enough?
It remains to be seen if the observations can be explained by this or any of the usual tweaks to star formation. It seems like a big gap to overcome. I mean, just look at the left panel of the final figure above. The observed UV luminosity function is barely evolving while the prediction of LCDM is dropping like a rock. Indeed, the mass functions get jagged, which may be an indication that there are so few dark matter halos in the simulation volume at the redshift in question that they do not suffice to define a smooth mass function. Harikane et al. estimate a luminosity density of ∼7 × 10⁻⁶ mag⁻¹ Mpc⁻³ at z ≈ 16. This point is omitted from the figure above because the corresponding prediction is NaN (not a number): there just isn’t anything big enough in the simulation to be so bright that early.
There is good reason to be skeptical of the data at z ≈ 16. There is also good reason to be skeptical of the simulations. These have yet to converge, and even the predictions of the same group continue to evolve. Yung et al. (2019) did the right thing to establish a prior before JWST’s launch, but they haven’t stuck by it. The density of rare, massive galaxies has gone up by a factor of 2 to 2.5 in Yung et al. (2024). They attribute this to the use of higher resolution simulations, which may very well be correct: in order to track the formation of the earliest structures, you have to resolve them. But it doesn’t exactly inspire confidence that we actually know what LCDM predicts, and it feels like the same sort of moving of the goalposts that I’ve witnessed over and over and over and over and over again.
It always seems to come down to special pleading:

And the community loves LCDM, so we fall for it every time.

*There is always a danger in turning knobs to fit the data, and there are plenty of knobs to turn. So what LCDM predicts is a very serious matter – a theory is only as good as its prior, and we should be skeptical if theorists keep adjusting what that is in response to observations they failed to predict. This is true even in the absence of the existential threat of MOND which implies that the entire field of cosmological simulations is betrayed by its most fundamental assumptions, reducing it to “garbage in, garbage out.”
**When I first found that MOND had predicted our observations of low surface brightness galaxies where dark matter had not, despite my best efforts to make it work out, Ortwin Gerhard asked me if he “had to believe it.” My instant reaction was “this is astronomy, we don’t have to believe anything.” More seriously, this question applies on many levels: do we believe the data? do we believe the interpretation? is this the only possible conclusion? At the time, I had already tried very hard to fix it, and had failed. Still, I was willing to imagine there might be some way out, and maybe someone could figure out something I had not. Since that time, lots of other people have tried and also failed. This has not kept some of them from claiming that they have succeeded, but they never seem to address the underlying problem, and most of these models are mere variations on things I tried and dismissed as obviously unworkable.
Now, as then, what we are obliged to believe is the data, to the limits of their accuracy. The data have improved substantially, and at this point it is clear that the radial acceleration relation exists+ and has remarkably small intrinsic scatter. What we can always argue about is the interpretation: sure, it looks exactly like MOND, and MOND was the only theory that predicted it in advance, and we haven’t been able to come up with a reasonable explanation in terms of dark matter, but perhaps one can be found in some dark matter model that does not yet exist.
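For anyone who wants to see it written out, the fitting function commonly used to describe the radial acceleration relation (McGaugh, Lelli & Schombert 2016) has a single parameter; here is a minimal sketch:

```python
import numpy as np

G_DAGGER = 1.2e-10   # m/s^2, the characteristic acceleration scale of the RAR

def g_observed(g_baryonic):
    """RAR fitting function of McGaugh, Lelli & Schombert (2016):
    g_obs = g_bar / (1 - exp(-sqrt(g_bar/g_dagger))).
    Tends to g_bar at high accelerations and sqrt(g_bar * g_dagger) at low ones."""
    return g_baryonic / (1.0 - np.exp(-np.sqrt(g_baryonic / G_DAGGER)))

g_bar = np.logspace(-12, -8, 5)   # baryonic (Newtonian) accelerations in m/s^2
for gb, go in zip(g_bar, g_observed(g_bar)):
    print(f"g_bar = {gb:.1e}  ->  g_obs = {go:.1e} m/s^2")
```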
+Of course, there will always be some people behind the times and in a state of denial, as this subject seems to defeat rationalism in the hearts and minds of particle physicists in the same way Darwin still enrages some of the more religiously inclined.
$I directly quote Finkelstein’s coauthor Mauro Giavalisco from an email exchange.
Thanks for the nice contribution and blog post.
Numerical resolution will be a necessary-but-insufficient condition to find out what the heck LCDM actually predicts for high-z galaxy formation. The recipes currently in use either converge to a smoothed-out answer that we know is wrong in detail (e.g. “effective EOS” ISM models), or use recipes that cannot conceivably converge to the actual formation and feedback of individual stars (e.g. FIRE, SMUGGLE, VINTERGATAN).
Thanks for the comment. I had refrained from going into these details, which are good to air. I’ve seen low resolution sims that make things too early (they break up when better resolved with the same code) and vice-versa, so indeed it will be necessary for many aspects of these simulations to converge.
He was born one day and just after a week he already got a Ph. D, a wife and two children. How is that possible? Don’t you know about statistical fluctuations?
the uncertainty in the Ph.D’s energy must be huge
(ΔE)(Δt) > h
Yes, great discussion. Any reason to question whether the apparent or observed age of the universe varies with redshift? At least that could make the too big too soon problem go away.
The LCDM age-redshift relation seems to be in good agreement with observations where tested. The applicable tests get hazy at z > 2 and we know squat about the dark ages immediately preceding these high z galaxies. So it is conceivable that the standard cosmological model will break when enough data come in, e.g., https://tritonstation.com/2018/08/09/the-next-cosmic-frontier-21cm-absorption-at-high-redshift/ Most people have chosen to disbelieve the EDGES result, for example, but eventually such experiments will let us trace out H(z) at z > 10. [I mention EDGES because if it turns out to be right after all, things are already broken in that LCDM cannot explain their data. The most obvious change is to the expansion history H(z), but this is so hardwired into the minds of cosmologists it hasn’t occurred to most of them to even conceive of this as a possibility.]
There is something here that to me seems wrong: “As we look to progressively higher redshift, the light we observe shifts from familiar optical bands to the ultraviolet. This was a huge part of the motivation to build JWST: it is optimized for the infrared, so we can observed the redshifted optical light as our eyes would see it.”
Isn’t it a shift to infrared and not UV? Or aren’t the UV bands shifted to optical bands?
I don’t know exactly what you wanted to say in that paragraph.
I think I understand what Stacy is getting at, but I don’t think he phrased it well. Yes the UV bands are shifted to the visible (and the visible to the IR). The difficulty is that when we observe distant galaxies in the visible (as with Hubble) we are looking at radiation that was in the UV when it left the galaxy. We don’t have much data on nearby galaxies in the UV because Hubble only went down to 115 nm, at z=3 that would be shifted into the visible at 460 nm; what we see from Hubble at 115 nm at z=3 we cannot relate to measurements of nearby galaxies because it would correspond to an emission wavelength of 28.75 nm. So if we want to ask the question (for example) how does stellar luminosity in the far UV change with metallicity – maybe young low-metallicity stars are more luminous in the UV – it is hard to calibrate it. With JWST, even if we go to z=10, we are probing emission wavelengths where we have good data for nearby galaxies so we can tell if these distant galaxies have an excessive UV luminosity just by comparing their measured UV/visible light (using Hubble) with their infrared (using JWST). One still has to worry about absorption over the path from the galaxy to the telescope, but this will make the UV luminosity less rather than more.
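(For anyone who wants to check the arithmetic in the comment above, it is all just λ_observed = λ_emitted × (1 + z); a tiny sketch, with the 550 nm and JWST-coverage remark being a rough illustrative assumption:)

```python
def observed_nm(rest_nm, z):
    """Wavelength at which light emitted at rest_nm is observed from redshift z."""
    return rest_nm * (1 + z)

def rest_nm(observed, z):
    """Rest-frame (emitted) wavelength corresponding to an observed wavelength."""
    return observed / (1 + z)

print(observed_nm(115, 3))    # Hubble's 115 nm limit emitted at z=3 -> observed at 460 nm
print(rest_nm(115, 3))        # light observed at 115 nm from z=3 was emitted at 28.75 nm
print(observed_nm(550, 10))   # rest-frame optical (~550 nm) at z=10 -> ~6050 nm (~6 microns)
```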
Thanks, I had the same question in mind as Apass.
Yes, JWST makes z=4 optical emission observable in the infrared. If you keep pushing to higher z, eventually the optical bands get pushed out of JWST’s range of wavelength sensitivity, and you once again wind up looking at restframe ultraviolet.
Sorry for the confusion; I guess this is one of those things that is so obvious to me I forget it needs much explaining.
could you or any one comment on
arXiv:2401.11534 (astro-ph)
[Submitted on 21 Jan 2024]
Comparing dark matter and MOND hyphotheses from the distribution function of A, F, early-G stars in the solar neighbourhood
M. A. Syaifudin (1), M. I. Arifyanto (2 and 3), H. R. T. Wulandari (2 and 3), F. A. M. Mulki
Dark matter is hypothetical matter believed to address the missing mass problem in galaxies. However, alternative theories, such as Modified Newtonian Dynamics (MOND), have been notably successful in explaining the missing mass problem in various astrophysical systems. The vertical distribution function of stars in the solar neighbourhood serves as a proxy to constrain galactic dynamics in accordance to its contents. We employ both the vertical positional and velocity distribution of stars in cylindrical coordinates with a radius of 150 pc and a half-height of 200 pc from the galactic plane. Our tracers consist of main-sequence A, F, and early-G stars from the GAIA, RAVE, APOGEE, GALAH, and LAMOST catalogues. We attempt to solve the missing mass in the solar neighbourhood, interpreting it as either dark matter or MOND. Subsequently, we compare both hypotheses, Newtonian gravity with dark matter and MOND, using the Bayes factor (BF) to determine which one is more favoured by the data. We found that the inferred dark matter in the solar neighbourhood is in range of ∼(0.01-0.07) M⊙ pc⁻³. We also determine that the MOND hypothesis's acceleration parameter a0 is (1.26±0.13)×10⁻¹⁰ m s⁻² for simple interpolating function. The average Bayes factor for all tracers between the two hypotheses is log BF ∼ 0.1, meaning no strong evidence in favour of either the dark matter or MOND hypotheses.
Comments: 21 pages, 8 figures, 6 tables, submitted to MNRAS, under review. Subjects: Astrophysics of Galaxies (astro-ph.GA). Cite as: arXiv:2401.11534 [astro-ph.GA]
arXiv:2401.10202 (astro-ph) [Submitted on 18 Jan 2024]
SPARC galaxies prefer Dark Matter over MOND
Mariia Khelashvili, Anton Rudakovskyi, Sabine Hossenfelder
We currently have two different hypotheses to solve the missing mass problem: dark matter (DM) and modified Newtonian dynamics (MOND). In this work, we use Bayesian inference applied to the Spitzer Photometry and Accurate Rotation Curves (SPARC) galaxies' rotation curves to see which hypothesis fares better. For this, we represent DM by two widely used cusped and cored profiles, Navarro-Frenk-White (NFW) and Burkert. We parameterize MOND by a widely used radial-acceleration relation (RAR). Our results show a preference for the cored DM profile with high Bayes factors in a substantial fraction of galaxies. Interestingly enough, MOND is typically preferred by those galaxies which lack precise rotation curve data. Our study also confirms that the choice of prior has a significant impact on the credible interval of the characteristic MOND acceleration. Overall, our analysis comes out in favor of dark matter.
Comments: 10 pages, 7 figures. Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO); Astrophysics of Galaxies (astro-ph.GA). Cite as: arXiv:2401.10202 [astro-ph.CO]
I see it like this:
RAR (radial-acceleration-relation) is an observation.
The velocity of the outer stars of a galaxy is ideally constant.
Ideally, it is independent of the distance from the center of the galaxy.
Nobody contradicts this observation.
e.g.
https://en.wikipedia.org/wiki/Modified_Newtonian_dynamics#/media/File:M33_rotation_curve_HI.gif
MOND provides a mathematical description of this observation.
MOND also correctly describes details of this observation, e.g. Renzo’s rule.
But we have no explanation of where this rule comes from.
Most scientists try to derive MOND using dark matter and Newtonian gravity.
Many have tried.
But all have failed.
That’s why I won’t even try this approach.
I like the following formula best:
a(deep-MOND region) = sqrt(a(Newton)*a0)
Here the acceleration in the deep MOND region arises as the geometric mean
of the Newtonian acceleration and the MOND parameter a0.
Milgrom had already noted: a0~cH/6, i.e.,
a0 could be due to the expansion of the universe.
Everything can be compared with our solar system.
Here are the observations of the planets (e.g. Tycho Brahe).
These observations are very well described by Kepler’s laws.
Newton’s law of gravitation generalizes this even further.
One hint is provided by the general theory of relativity with its:
Masses curve space and the masses in turn move in this curved space.
A substantial model for our space might help to derive both
Newtonian gravitation and curvature, as well as MOND and a0.
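The geometric-mean formula quoted in this comment has a consequence that is easy to check numerically: with a = v²/r and a(Newton) = GM/r², the radius cancels and one gets v⁴ = G M a0, i.e. a flat rotation curve. A minimal sketch; the Milky Way baryonic mass below is an assumed round number, and the last line checks the “a0 ~ cH/6” coincidence (i.e. a0 of order cH0/2π):

```python
import numpy as np

G = 6.674e-11               # m^3 kg^-1 s^-2
a0 = 1.2e-10                # m/s^2, Milgrom's acceleration constant
c = 2.998e8                 # m/s
H0 = 70e3 / 3.086e22        # ~70 km/s/Mpc converted to 1/s
M_SUN = 1.989e30            # kg

# Deep-MOND limit: a = sqrt(a_Newton * a0), with a = v^2/r and a_Newton = G*M/r^2.
# The radius cancels, so v^4 = G*M*a0: a flat rotation curve.
M_baryon = 6e10 * M_SUN     # assumed round number for the Milky Way's baryonic mass
v_flat = (G * M_baryon * a0) ** 0.25
print(f"Flat rotation speed for M = 6e10 Msun: {v_flat / 1e3:.0f} km/s")

# Milgrom's numerical coincidence: a0 is of order c*H0/(2*pi), the 'a0 ~ cH/6' above.
print(f"c*H0/(2*pi) = {c * H0 / (2 * np.pi):.2e} m/s^2  vs  a0 = {a0:.1e} m/s^2")
```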
The first paper I have yet to read, but that they find a0 in agreement with other methods is more favorable for MOND than the last time I assessed vertical motions in the Milky Way.
The second paper is just wrong. I wrote about it before: https://tritonstation.com/2023/11/16/full-speed-in-reverse/. Among other problems, they fail to establish a proper prior for LCDM: they’re not testing what LCDM predicts, they’re just fitting French curves and finding that the most flexible French curves give the best fits.
sabine hossenfelder
Did astrophysicists mistake a statistical error for a new law of nature?
Have you ever heard the story about a fox, a scorpion and a river?
Replace the fox with a scientist, the river with a problem and the scorpion with a statistical analysis.
“Astronomers always push to the edge of what a telescope can do …”
Why? So the data is always on the edge of usefulness (if we are generous in our assessment and let’s be honest, we usually are). So we constantly stare at a few smeared pixels produced by a device which operates on another statistical basis and is subject to a whole ‘universe’ of influences beyond our control and awareness, so we have to throw in a few more levels of statistics … Look at the bright side. Such data allows virtually any theory and conclusion, feeds a torrent of papers, cements grants and medals and justifies conferences and debate ad nauseam because god damn it, we are enlightened!
Imagine figuring out friction law by observing a rock sliding down the slope of Olympus Mons. But but … science has always worked this way! Like there were material scientists far in the past who said to their colleagues: “Listen folks, you keep on working on that extraction of iron from ore while we fantasize about a stronger, lighter sword that does not rust. Our theoretical framework will come in handy a millennium down the line when other magical stuff we know nothing about is discovered. Of course your descendants will actually have to discover such stuff and figure out a way to produce it in useful quantities but that’s just labour! Our scribbles will come in handy to them when they simply take iron, mix that shit in and see what happens, trust us!”
Yes, I’m over the top. By the looks of it, not by much.
Thanks for the post, which brings the data into focus. Incidentally, Sabine Hossenfelder, who criticised MOND recently, has just posted a video saying she doesn’t believe DM exists, and that MOND predicts galaxies should start to form fast and early – ‘by now it’s clear that data actually agree better with modified gravity’. She probably saw what you said about Sanders’ work on it, so people are coming around.
Once one has become sensitised to the way paradigms (sensu Thomas Kuhn) control thinking at the highest level of explanation – sensitised perhaps by breaking free from a paradigm that once seemed unquestionable and now seems a house of cards – one might suppose there would be wisdom in being particularly cautious when referring to paradigms outside one’s field. Instead failure to get on board the new MOND way of thinking is likened to not agreeing with the ultimate example of a paradigm which no rational person should question: Darwinism. Those few who do question it do so only because they are ‘religiously inclined’. But of course! Within the atheistic/materialist paradigm which no professional cosmologist would dream of challenging one can hardly think otherwise.
I am reminded of how just a few months ago, Stacy, you wrote that “the Apollo astronauts brought back rocks that helped determine the age of the solar system (4.568 billion years) … and the period of ‘late heavy bombardment’ when most big lunar craters were formed, a mere 3.9 billion years ago.” In fact neither of these statements was true. The ruling paradigm concerning the Moon’s origin has long been that it formed in a Giant Impact between proto-Earth and another planet considerably later than 4.568 Ga. Some lunar chronologists now argue that it could have been as late as ~4.4 Ga (Borg & Carlson 2023). As for the Late Heavy Bombardment, which also once had the status of a paradigm, few lunar scientists subscribe to it nowadays. Nearly all rocks that date to 3.9 Ga either originated from one and the same massive event – the impact that produced the Imbrium crater – or had their chronometers reset by the impact.
As for Darwinism, perhaps you have never heard of the Cambrian Explosion; are not aware that the first 5/6ths of the fossil record (from 4.0 Ga to 0.6 Ga) consists of little more than bacteria and algae; and do not appreciate that the idea of a common ‘tree of life’ is sustained only because scientists consider it (like the Big Bang) to be the only game in town, propping it up by (among other devices) characterising all similarities that cut across the preferred phylogenies as ‘convergences’.
Modern cosmology is ruled by many high-level paradigms, of which Dark Matter is just one: e.g. that redshift records cosmic expansion, that everything began with a Big Bang, that when we see two or more galaxies close together they must be merging rather than splitting, and that 68% of the universe is dark energy. … Could it be that Dark Matter is not the only paradigm that needs to be critically examined? Science at its best is about testing everything, about keeping an open mind, about holding nothing sacred, not even atheism.
Yikes, Steven! Really? You advocate keeping an open mind, but your comment misrepresents what it means to do so.
I am not an evolutionary biologist by any stretch, but even with my limited knowledge I can see gross dismissal of the strength of evidence for evolution. While a discussion of evolution is really off-topic for this blog, the motivation for it isn’t. Keeping an open mind doesn’t mean keeping an empty mind.
You don’t have to trust the scientists who study evolution to know that it occurs: the emergence of antibiotic-resistant bacteria is sufficient empirical evidence to show evolution occurs. This is classic evolution at work: a species of bacteria enters a host (a person’s body) and multiplies exponentially; the person then takes an antibiotic, which introduces a hostile environment that kills off the bacteria because the bacteria have no natural protections against it (which is why the antibiotic works). Due to random mutations, most of which are deadly to a bacterium, rarely a “beneficial” (to the bacterium) mutation allows a bacterium to survive the hostile environment and multiply, and its “offspring” multiply exponentially while the unprotected bacteria die, making the new variant the dominant one in the body. This is classic natural selection, occurring within one person’s body. But the drug-resistant bacteria can hop to other people, who will now find the same antibiotic less effective.
It is irrelevant that the environmental stress on the bacteria was introduced by people rather than occurring “naturally” — the fact that the process occurs is what’s relevant. It occurs rapidly enough to happen within a few years because bacteria multiply so quickly. It isn’t hard to see that given millions and tens of millions of years (can you even imagine how much can change during that time?) the same process can lead to a lot of changes in a species because there are a lot of different kinds of stresses in a lot of different places in the world. Since not all members of a species are subject to the same environmental stresses, a given mutation may help some but not others, and members of the species diverge from each other.
(You also fail to mention that much of the evidence for evolution comes from examining DNA from different species. This is much stronger than relying on fossils.)
If you want to challenge an established “paradigm” or theory, or argue that others should seriously question it, you need to first understand the reasons its proponents adopted it. Then you need to carefully show the flaws in those reasons and why a particular alternative has fewer problems.
You undoubtedly notice that Stacy does just that. He thoroughly understands the arguments for dark matter, and has said numerous times that he used to think that dark matter was the right explanation. And then he found specific, identifiable flaws and investigated them. And his arguments in this blog have been specific and intellectually honest — he acknowledges what is unclear, what works and what doesn’t, both for dark matter and modified gravity. I don’t think your comment and “argument” for open mindedness comes close to meeting that standard.
I only mention Darwinism as an example of something people can remain in denial about indefinitely. There are other examples if you prefer, but the point stands: whatever we decide, however well established it becomes, regardless of topic, there can always be some who persist in thinking otherwise.
I got banned from physicsforums for pointing out that chemical evolution can be simply tried in laboratories: take a flask of bacteria, kill them all and you should have all ingredients for abiogenesis. Add in the lab whatever non-living stuff you think will help, but life will not (re)emerge. I’ve got some twenty other arguments up my sleeve, but there the discussion got modded out.
“Life will not (re)emerge.”How do you know that? As far as we know, it took billions of years for life to emerge on Earth. But you can’t wait that long for an experiment like this.
The more time you add, the more the 2nd law of thermodynamics restricts the entropy of the system. True, energy from the sun is added all the time, but in a homogeneous way not allowing for highly entropic structures like we find in life. It’s like you let erosion do its work on a mountain for millions of years and an exact full statue of Caesar is the result.
Thermodynamics, in the context of your example, is statistical distribution of probabilities of possible outcomes. Erosion carving statue of Caesar out of a mountain is possible outcome, just extremely unlikely.
yes, i also have the same thought experiment, but a unicorn emerges.
checkmate?
Good grief. You have ignored/disregarded entropy, even though you are aware of it (as evidenced by a later comment). Just because the number of atoms of each element hasn’t changed doesn’t mean their configuration has the same entropy before death and afterward — it will almost certainly be higher afterward. Why do you expect time-reverse symmetry to hold when entropy is increasing?
You also neglect that Earth is not a closed system — energy constantly comes from the sun, and energy radiates back into space. Energy from the sun drives weather and climate processes, and creates local conditions which your flask “experiment” doesn’t mimic. And, as stefanfreundt1964 pointed out, you ignore the not-very-minor issue of time scales.
Why should anyone at physicsforums (or here) take such not-well-thought-out arguments seriously?
from the famous Anderson article ‘more is different’,
‘more’ also applies to ’more time’.
First of all, I’m not interested in convincing anyone of an act of creation. Nature speaks for itself, but for extraordinary claims (such an act would technically be a miracle) extraordinary evidence would be required. I shared this first argument because evolution was the topic and because I think banning people because they go against something we hold sacred is social engineering.
I do not expect time symmetry to hold, believers in abiogenesis do – I can’t see a more promising setting for abiogenesis to occur. And weather can be simulated in a lab.
And there’s other arguments, here are important ones:
1. Population models. Like plotting galaxy movements back in time leads to the Big Bang hypothesis, plotting population back in time starting from data at around 10000-1000 BC as an exponential function gives 2 people around 73000 BC rather than 2.8 million BC.
2. Irreducible complexity, such as shown by Michael Behe. Darwin said himself that if irreducible complexity was found, his theory is proven false. Irreducible complexity is for instance three mechanisms that rely on each other’s workings before an organism with these mechanisms can survive, in the most “simple” organisms like bacteria or microbes.
3. Peter Borger’s work and thoughts about the ENCODE project. Many arguments arise from this, for instance, ENCODE has shown that almost all of our DNA is functional in some evolutionary relevant way (not much junk DNA). Another is that genes can often stand in for one another (which means natural selection doesn’t filter harmful mutations out). Another argument is that some essential and pretty long DNA sequences are duplicated 4, 6 or even 8 times exactly the same way, while one copy would be sufficient for survival. That such patterns come up by chance is discouraged strongly by statistics.
4. Soft tissue in dinosaur bones. No matter how mummified a sample might have been, I cannot believe soft tissue would survive tens of millions of years.
Any of you who can improve my opinion of the theory of evolution? Then email me at 19maarten91@gmail.com, I don’t like to hijack this thread for this subject.
Courageous of you to leave your email address in a clickable form – expect lots of spam (but not the spam ham).
I’d direct you to potholer54 - Peter Hadfield’s youtube channel for a very good review of your arguments – just search for videos dealing with creationism.
I would not comment more here because this is not the proper venue, but Peter’s channel may be useful also for others.
I would point out that Darwinism says nothing about the creation of life from abiological matter; it is entirely about the origin of speciation from existing life as the full title suggests “On the Origin of Species by Means of Natural Selection, or the Preservation of Favoured Races in the Struggle for Life“. While we wouldn’t use the discredited term “races” these days, it was an assumption common to Victorian scientists like Darwin – we can point to his half-cousin Francis Galton, founder of Eugenics, as a classic example of this.
All we can say about the origin of life on Earth is, if it arose here (i.e. excluding panspermia, which only pushes the problem back) then a laboratory the size of the Earth (or at least the same surface area as the Earth) and a timescale of a billion years should suffice to prove whether life can come from abiological matter (the first signs of life on Earth are the bacterial mats we call stromatolites, the oldest fossils of which have been dated to 3.5 billion years ago).
As Archimedes said “Give me a lever long enough and a fulcrum on which to place it, and I shall move the world.” Absolutely true, but it’s a statement of a principle and says nothing about the scale required.
In the ‘What JWST will see’ blog, where Stacy predicted ‘big galaxies at high redshift!’, at one point he mentions straw man arguments, in which what someone is saying gets altered to a weaker version, to be knocked down easily.
We’re not discussing this here, but briefly, creationism is often used as a straw man, a weak version that’s easy to knock down. The real question is just: was the universe intended? And in fact all of religion can be a straw man in relation to that – it just distracts from the question, by constantly taking you to weak versions of one of the two possible answers.
At the end of your debate with Simon White he says “I checked my own (DM) models and it would be easy to see things up to redshift 25…” 1:59:30.
So he says. There are certainly tiny proto-proto-DM halos at that redshift; the trick is making enough baryons turn into stars that are bright enough to be the objects JWST sees. See https://tritonstation.com/2022/08/05/a-few-early-results-from-jwst/ and https://tritonstation.com/2023/03/10/cant-be-explained-by-science/
Apparently I was correct in observing that controversial topics can and will be debated far into the future, no matter how well established a paradigm becomes. Note that I do not equate “established” with “correct.”
That’s all I was saying. I was not equating MOND with Darwinism, nor do I see how the Cambrian explosion or the soft tissues of dinosaurs (both very interesting) is in anyway related to anything I said. Please don’t put words in my mouth.
I saw it as an analogy about reality denial, didn’t expect it to go far off topic. (Some would say taking things too literally goes with the territory)
I have a question about the EFE. Although it has been detected, it seems to appear in some situations but not others. Is there any hint of a pattern so far, such as with the relative strengths of the fields, or the direction, perhaps as in radial/vertical velocities?
In what case is the EFE not seen when it should be?
Well, that’s part of what I’m asking. I’ve seen one or two possible mentions of it not showing up, one was from an early paper of Hernandez, with results that have since been improved on, but in ‘Wide binary weirdness’ Stacy said:
One of the first papers to address this is Hernandez et al (2022). They found a boost in speed that looks like MOND but is not MOND. Rather, it is consistent with the larger speed that is predicted by MOND in the absence of the EFE. This implies that the radial acceleration relation depicted above is absolute, and somehow more fundamental than MOND. This would require a new theory that is very similar to MOND but lacks the EFE, which seems necessary in other situations. Weird.
Perhaps unrelated to the EFE, but of interest in a similar way, in ‘Is the Milky way’s rotation curve declining’, he mentions Newtonian vertical velocities in our galaxy, in which the radial force is as in MOND, but the vertical force looks Newtonian.
I’m just wondering about any exceptions, as clues may be found there – am working on a lateral interpretation. The exceptions were particularly interesting when it looked like Banik might be right, during the backwards and forwards last year. Now it looks more likely that he’s wrong, but obviously if he was right, MOND would look more like an effect, that arises in specific situations, and less like a universal gravity theory.
Whatever it is, the exceptions hold good clues – just as in the flyby anomaly, one flyby that showed no anomaly was the only one on a flightpath symmetrical to the equator or spin axis. That clue led one of the NASA team to crack into the puzzle enough to get to an empirical formula.
@ Jonathan K
Hello,
I may have missed something, but what convinced you that Indranil Banik et alt. were wrong? Thank you for your attention.
I think that is still up in the air. Hernandez and Chae are convinced he is wrong, and vice-versa. There are a series of relevant papers I should comment on here, but just keeping up is exhausting.
One also needs to be careful about what theory is excluded by what data. If Banik+ are correct, then that is bad for modified gravity theories like AQUAL and QUMOND. Those are not the only possible theories, and such a result might point more towards a modification of inertia (Milgrom’s original hypothesis) rather than of gravity.
Stacy understands the details far better than I do, and what he says affected my view of it. Banik’s work initially looked good to me, but he showed one bit of it in particular to be questionable, and said the smaller, cleaner samples of Hernandez are more reliable.
Last year the apparent likelihood went up and down – Banik cast doubt on Chae’s findings, then Chae came back with another paper that looked better. But it’s too early to know, and statistical results are capable of getting it wrong – the jury is not just out, some of them are way out. The fact that it’s two teams against one in itself suggests Banik may be wrong.
But all three teams seem to think MOND works in galaxies, and Banik et al suggest adapting MOND, going into detail about how that might be done (well, DM has been adjusted more than MOND in the past, to put it mildly). They discuss work looking at a distance-dependent threshold below which it doesn’t arise, proportional to M^(1/4), or below 0.1 pc, and suggest an upper limit to the equivalent pdm density of ≲ 20 M⊙/pc³. But it now looks less likely that adjustments like that will be needed.
Thank you both for your answers.
“Statistical results are capable of getting it wrong … The fact that it’s two teams against one in itself suggests Banik may be wrong.”
You don’t consider “two teams against one” to be a statistical argument then?
Trying to imagine how a 1920s Albert Einstein would respond if he were presented with all the latest astronomical observations and analyses: flat rotation curves, RAR, high-redshift galaxies, CMB, gravitational lensing, LIGO/VIRGO gravitational wave detection, discovery of the Higgs boson/field, verification of black holes, etc. I’m sure he would be both puzzled but also delighted that so many of the phenomena predicted from GR have come true.
However, if he were also told of the current state of cosmology: the standard model of cosmology, the adoption of complex and shifting modelling of DM/DE to explain a growing list of observed discrepancies (with zero detection of any proposed particles after over 30 years of trying), I have a feeling he would shake his head and say something like: “hmm… back to the drawing board”. But I think he might find Milgrom’s theory, the emergence of the a0 constant, and subsequent work an interesting place to start looking.
Maybe he would seek an extension of his equivalence principle to describe the cosmological data of this millennium.
Yes, it is good to wonder what the great physicists of previous generations would make of this. While I am reluctant to put words in the mouth of any particular figure who can no longer speak for him/herself, I feel comfortable saying many of them, if presented with the RAR, would immediately see that there was something fundamental to be explained. A corollary is that they would reject as absurd the suggestion that this somehow emerges from unseen mass. The primary reason we don’t do that today is because the notion of dark matter is already embedded in our communal consciousness.
I came across an interesting paper published about 20 years ago: Einstein’s Struggle for a Machian Gravitation Theory by Carl Hoefer. The abstract from the paper is copied below:
The story of Einstein’s struggle to create a general theory of relativity, and his early discontentment with the final form of the theory (1915), is well known in broad outline. Thanks to the work of John Norton and others, much of the fine detail of the story is also now known. One aspect of Einstein’s work in this period has, however, been relatively neglected: Einstein’s commitment to Mach’s ideas on inertia, and the influence this commitment had on Einstein’s work on general relativity from 1907 to 1918. In this paper published writings and archival material are examined, to try to reconstruct the details of Einstein’s thinking about inertia and gravitation, and the role that Mach’s ideas played in Einstein’s crucial work on the general theory. By the end, a clear picture of Einstein’s conceptions of Mach’s ideas on inertia, and their philosophical motivations, will emerge. Several surprising conclusions also emerge: Einstein’s desire for a Machian gravitation theory was the central force driving his work from 1912 to 1915, keeping him going despite numerous frustrating setbacks; Einstein’s continued commitment to Mach’s ideas in 1916–1917 kept him at work trying various strategies of modification of the field equations, in order to exclude anti-Machian solutions (including the addition of the cosmological constant in 1917); and as late as early 1918, Einstein was ready to call the whole General Theory a failure if no way of squaring it with Mach’s ideas on inertia could be found. But by 1920 Einstein advocated a view that granted spacetime (under the name ‘ether’) independent existence with physical qualities of its own, a complete break with his earlier Machian views.
The author has gone to considerable lengths to research from archival records many details of Einstein’s evolving views both during the period of his development of GRT and post, when he was trying to develop a view of cosmology based on GR. It’s an interesting read (albeit a bit long at nearly 50 pages) and gives considerable insight into the influences that shaped his views over that period.
Yeah. I was aware that Einstein wanted to build Mach’s Principle into GR, but ultimately failed. It does seem like the issue is with inertia.
Yes, I think so too.
I find Dr. McGaugh’s ** footnote interesting in the context of evolutionary biology.
One word that has been used to describe scientific theories in relation to the attitude of scientists is “fallibilism.” A more meaningful characterization might be “abductive reasoning with non-monotonic defeasibility.” (And, yes, simply replacing ‘falsifiability’ with ‘defeasibility’ would not stop the papers insisting that proposals are “scientific” based upon a philosophical criterion.)
In many cases, however, I believe this characterization to be empty in the sense of “Christian on Sunday,” or (for mathematicians) “formalist on Sunday.”
My reason for this is precisely aimed at belief in scientific theories.
I cannot see how one asserts “truth” for physical theories when human beings are presumed to be evolved biological organisms. The philosophical biases of the late 19th and early 20th centuries persist. The “unity of science” *must* be a reduction to physics because “idealism” is unacceptable. Science as a “social construct” is unacceptable despite the possible “correctness” of evolution from a prelinguistic primate.
Do not confuse my deliberation in this regard as “anti-science.” Like Dr. McGaugh, I had an early interest in science — but, it had been biology. And, the first definition of “intelligence” that I had ever read described it as a phenotype which enabled human beings to survive the Pleistocene Epoch.
This is a far cry from a socially-motivated test which is based upon pitting “teacher’s pets” against innumerate and dyslexic individuals. When I had entered college, I had been appalled by so many of my classmates bragging about being “geniuses” based on test scores.
(So, Dr. McGaugh, it is not simply physics departments.)
Perhaps as merely a matter of aesthetics, I happen to accept science as a distinguished category of knowledge. But, it is simply problematic to explain knowledge in terms of the historical “true, justified belief.” When studied without “truth,” proofs can never realize a semantics other than “assertion.”
Words “mean” whatever I say they mean (“How many fingers…?”)
My questions in this regard ought not bother any “fallibilist.” But, I know they are highly objectionable in most forums where physics is considered “truth.”
What I have learned from mathematics is that it is terribly difficult to differentiate truth from belief. And, as foundational physics approaches limits to its ability to take measurements, the sociology attached to belief has become pronounced.
Thank you very much for this blog, Dr. McGaugh.
Indeed – no scientific theory can ever be proven; at best it can be shown to work over some finite range of conditions. That principle of doubt is often forgotten in practice, with established theories being practically equated with Truth.
On the subject of falsifiability
Let’s imagine we will have done it. Grand Unified Theory Of Absolutely Everything. How universe came to be, how it evolved, how does every bloody thing in it function, everything. Fanfare, fireworks, champagne and caviar.
Except such a theory is not falsifiable since its falsification would fall outside of the universe.
If we assume a theory is perfectly true while analyzing its falsifiability, we’re calculating the probability a basketball is outside the basket given the fact that it is in the basket.