I’ve been wanting to expand on the previous post ever since I wrote it, which was over a month ago now. It has been a busy end to the semester. Plus, there’s a lot to say – nothing that hasn’t been said before, somewhere, somehow, yet still a lot to cobble together into a coherent story – if that’s even possible. This will be a long post, and there will be more afterwards to narrate the story of our big paper in the ApJ. My sole ambition here is to express the predictions of galaxy formation theory in LCDM and MOND in the broadest strokes.
A theory is only as good as its prior. We can always fudge things after the fact, so what matters most is what we predict in advance. What do we expect for the timescale of galaxy formation? To tell you what I’m going to tell you, it takes a long time to build a massive galaxy in LCDM, but it happens much faster in MOND.
Basic Considerations
What does it take to make a galaxy? A typical giant elliptical galaxy has a stellar mass of 9 × 10¹⁰ M☉. That’s a bit more than our own Milky Way, which has a stellar mass of 5 or 6 × 10¹⁰ M☉ (depending who you ask) with another 10¹⁰ M☉ or so in gas. So, in classic astronomy/cosmology style, let’s round off and say a big galaxy is about 10¹¹ M☉. That’s a hundred billion stars, give or take.

How much of the universe does it take to make one big galaxy? The critical density of the universe is the over/under point for whether an expanding universe expands forever, or has enough self-gravity to halt the expansion and ultimately recollapse. Numerically, this quantity is ρcrit = 3H0²/(8πG), which for H0 = 73 km/s/Mpc works out to 10⁻²⁹ g/cm³ or 1.5 × 10⁻⁷ M☉/pc³. This is a very small number, but it provides the benchmark against which we measure densities in cosmology. The density of any substance X is ΩX = ρX/ρcrit. The stars and gas in galaxies are made of baryons, and we know the baryon density pretty well from Big Bang Nucleosynthesis: Ωb = 0.04. That means the average density of normal matter is very low, only about 4 × 10⁻³¹ g/cm³. That’s less than one hydrogen atom per cubic meter – most of space is an excellent vacuum!
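For the numerically inclined, here is a quick sanity check of those numbers – a minimal sketch in Python using plain SI constants; nothing here comes from the paper, just the standard values quoted above:

```python
import numpy as np

# Plain SI constants
G    = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
Msun = 1.989e30       # solar mass, kg
pc   = 3.086e16       # parsec, m
m_H  = 1.67e-27       # hydrogen atom, kg

H0 = 73e3 / (1e6 * pc)                    # 73 km/s/Mpc expressed in 1/s

rho_crit = 3 * H0**2 / (8 * np.pi * G)    # critical density, kg/m^3
print(rho_crit * 1e-3)                    # ~1e-29 g/cm^3
print(rho_crit * pc**3 / Msun)            # ~1.5e-7 Msun/pc^3

rho_b = 0.04 * rho_crit                   # mean baryon density for Omega_b = 0.04
print(rho_b * 1e-3)                       # ~4e-31 g/cm^3
print(rho_b / m_H)                        # ~0.25 hydrogen atoms per cubic meter
```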
This being the case, we need to scoop up a large volume to make a big galaxy. Going through the math, to gather up enough mass to make a 10¹¹ M☉ galaxy, we need a sphere with a radius of 1.6 Mpc. That’s in today’s universe; in the past the universe was denser by (1+z)³, so at z = 10 that’s “only” 140 kpc. Still, modern galaxies are much smaller than that; the effective edge of the disk of the Milky Way is at a radius of about 20 kpc, and most of the baryonic mass is concentrated well inside that: the typical half-light radius of a 10¹¹ M☉ galaxy is around 6 kpc. That’s a long way to collapse.
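Going through that math explicitly – again just a sketch, reusing the mean baryon density from the snippet above, so the exact radius wiggles a bit with the adopted H0 and Ωb:

```python
import numpy as np

Msun = 1.989e30                    # kg
pc   = 3.086e16                    # m
rho_b = 4.0e-28                    # kg/m^3: mean baryon density today (Omega_b * rho_crit)

M_gal = 1e11 * Msun                # the baryonic mass of one big galaxy

# radius of the sphere that contains M_gal at the mean baryon density
R = (3 * M_gal / (4 * np.pi * rho_b))**(1.0 / 3.0)
print(R / (1e6 * pc))              # ~1.6 Mpc today

# the same comoving sphere had a physical radius smaller by a factor (1+z)
z = 10
print(R / (1 + z) / (1e3 * pc))    # ~140 kpc at z = 10
```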
Monolithic Galaxy Formation
Given this much information, an early concept was monolithic galaxy formation. We have a big ball of gas in the early universe that collapses to form a galaxy. Why and how this got started was fuzzy. But we knew how much mass we needed and the volume it had to come from, so we can consider what happens as the gas collapses to create a galaxy.
Here we hit a big astrophysical reality check. Just how does the gas collapse? It has to dissipate energy to do so, and cool to form stars. Once stars form, they may feed energy back into the surrounding gas, reheating it and potentially preventing the formation of more stars. These processes are nontrivial to compute ab initio, and attempting to do so obsesses much of the community. We don’t agree on how these things work, so they are the knobs theorists can turn to change an answer they don’t like.
Even if we don’t understand star formation in detail, we do observe that stars have formed, and can estimate how many. Moreover, we do understand pretty well how stars evolve once formed. Hence a common approach is to build stellar population models with some prescribed star formation history and see what works. Spiral galaxies like the Milky Way formed a lot of stars in the past, and continue to do so today. To make 5 × 10¹⁰ M☉ of stars in 13 Gyr requires an average star formation rate of 4 M☉/yr. The current star formation rate of the Milky Way is measured to be about 2 ± 0.7 M☉/yr, so the star formation rate has been nearly constant (averaging over stochastic variations) over time, perhaps with a gradual decline. Giant elliptical galaxies, in contrast, are “red and dead”: they have no current star formation and appear to have made most of their stars long ago. Rather than a roughly constant rate of star formation, they peaked early and declined rapidly. The cessation of star formation is also called quenching.
A common way to formulate the star formation rate in galaxies as a whole is the exponential star formation rate, SFR(t) = SFR0 exp(−t/τ). A spiral galaxy has a low baseline star formation rate SFR0 and a long burn time τ ~ 10 Gyr, while an elliptical galaxy has a high initial star formation rate and a short e-folding time like τ ~ 1 Gyr. Many variations on this theme are possible, and are of great interest astronomically, but this basic distinction suffices for our discussion here. From the perspective of the observed mass and stellar populations of local galaxies, the standard picture for a giant elliptical was a large, monolithic island universe that formed the vast majority of its stars early on then quenched with a short e-folding timescale.
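To make the spiral/elliptical distinction concrete, here is a hedged little sketch of the exponential model. The SFR0 values are chosen purely for illustration so that the integrals land near the stellar masses quoted above; they are not taken from any particular fit.

```python
import numpy as np

def mass_formed(SFR0, tau, t):
    """Stellar mass formed by time t for SFR(t) = SFR0 * exp(-t/tau).
    SFR0 in Msun/yr, tau and t in Gyr; ignores mass returned by dying stars."""
    return SFR0 * 1e9 * tau * (1.0 - np.exp(-t / tau))

t = 13.0  # Gyr of star formation, roughly

# Spiral: low, slowly declining star formation (illustrative SFR0 = 7 Msun/yr)
print(mass_formed(7.0, 10.0, t))    # ~5e10 Msun formed by today
print(7.0 * np.exp(-t / 10.0))      # ~1.9 Msun/yr today, near the measured Milky Way rate

# Elliptical: intense early burst that quenches quickly (illustrative SFR0 = 90 Msun/yr)
print(mass_formed(90.0, 1.0, t))    # ~9e10 Msun formed by today
print(mass_formed(90.0, 1.0, 0.7))  # ~4.5e10 Msun: half of those stars form in < 1 Gyr
```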
Galaxies as Island Universes
The density parameter Ω provides another useful way to think about galaxy formation. As cosmologists, we obsess about the global value of Ω because it determines the expansion history and ultimate fate of the universe. Here it has a more modest application. We can think of the region in the early universe that will ultimately become a galaxy as its own little closed universe. With a density parameter Ω > 1, it is destined to recollapse.
A fun and funny fact of the Friedmann equation is that the matter density parameter Ωm → 1 at early times, so the early universe when galaxies form is matter dominated. It is also very uniform (more on that below). So any subset that is a bit more dense than average will have Ω > 1 just because the average is very close to Ω = 1. We can then treat this region as its own little universe (a “top-hat overdensity”) and use the Friedmann equation to solve for its evolution, as in this sketch:

That’s great, right? We have a simple, analytic solution derived from first principles that explains how a galaxy forms. We can plug in the numbers to find how long it takes to form our basic, big 10¹¹ M☉ galaxy and… immediately encounter a problem. We need to know how overdense our protogalaxy starts out. Is its effective initial Ωm = 2? 10? What value, at what time? The higher it is, the faster the evolution from initially expanding along with the rest of the universe to decoupling from the Hubble flow to collapsing. We know the math but we still need to know the initial condition.
Annoying Initial Conditions
The initial condition for galaxy formation is observed in the cosmic microwave background (CMB) at z = 1090. Where today’s universe is remarkably lumpy, the early universe is incredibly uniform. It is so smooth that it is homogeneous and isotropic to one part in a hundred thousand. This is annoyingly smooth, in fact. It would help to have some lumps – primordial seeds with Ω > 1 – from which structure can grow. The observed seeds are too tiny; the typical initial amplitude is 10⁻⁵ so Ωm = 1.00001. That takes forever to decouple and recollapse; it hasn’t yet had time to happen.
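To put a number on “forever”: for a closed, matter-only top-hat, the full expand-and-recollapse cycle lasts t = πΩ/[H(Ω−1)^(3/2)] – the cycloid solution of the Friedmann equation – with Ω and H evaluated at the same early epoch. A minimal sketch, with the values of Ω chosen only for illustration:

```python
import numpy as np

def recollapse_time(Omega_i):
    """Lifetime of a closed, matter-only top-hat (cycloid solution), from a -> 0
    through turnaround to recollapse, in units of the initial Hubble time 1/H_i."""
    return np.pi * Omega_i / (Omega_i - 1.0)**1.5

for Om in [2.0, 1.1, 1.01, 1.00001]:
    print(f"Omega_i = {Om}: recollapses after ~{recollapse_time(Om):.3g} Hubble times")

# Omega_i = 2       -> ~6 initial Hubble times: nice and fast
# Omega_i = 1.00001 -> ~1e8 initial Hubble times: effectively never
```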

We would like to know how the big galaxies of today – enormous agglomerations of stars and gas and dust separated by inconceivably vast distances – came to be. How can this happen starting from such homogeneous initial conditions, where all the mass is equally distributed? Gravity is an attractive force that makes the rich get richer, so it will grow the slight initial differences in density, but it is also weak and slow to act. A basic result in gravitational perturbation theory is that overdensities grow at the same rate the universe expands, which is inversely related to redshift. So if we see tiny fluctuations in density with amplitude 10⁻⁵ at z = 1000, they should have only grown by a factor of 1000 and still be small today (10⁻² at z = 0). But we see structures of much higher contrast than that. You can’t get here from there.
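In numbers – a rough sketch of matter-dominated linear growth, taking δ ∝ a = 1/(1+z) and ignoring the late-time effect of Λ:

```python
z_cmb   = 1000     # roughly the redshift of the CMB
delta_i = 1e-5     # fractional overdensity observed there

growth = (1 + z_cmb) / (1 + 0)   # linear growth factor from z = 1000 to z = 0
print(delta_i * growth)          # ~1e-2: still in the linear regime, no galaxies
print(1.0 / delta_i)             # ~1e5: the growth actually needed to reach order unity
```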
The rich large scale structure we see today is impossible starting from the smooth observed initial conditions. Yet here we are, so we have to do something to goose the process. This is one of the original motivations for invoking cold dark matter (CDM). If there is a substance that does not interact with photons, it can start to clump up early without leaving too large a mark on the relic radiation field. In effect, the initial fluctuations in mass are larger, just in the invisible substance. (That’s not to say the CDM doesn’t leave a mark on the CMB; it does, but it is subtle and entirely another story.) So the idea is that dark matter forms gravitational structures first, and the baryons fall in later to make galaxies.

With the right amount of CDM – and it has to be just the right amount of a dynamically cold form of non-baryonic dark matter (stuff we still don’t know actually exists) – we can explain how the growth factor is 10⁵ since recombination instead of a mere 10³. The dark matter got a head start over the stuff we can see; it looks like 10⁵ because the normal matter lagged behind, being entangled with the radiation field in a way the dark matter was not.
This has been the imperative need in structure formation theory for so long that it has become undisputed lore; an element of the belief system so deeply embedded that it is practically impossible to question. I risk getting ahead of the story, but it is important to point out that, like the interpretation of so much of the relevant astrophysical data, this belief assumes that gravity is normal. This assumption dictates the growth rate of structure, which in turn dictates the need to invoke CDM to allow structure to form in the available time. If we drop this assumption, then we have to work out what happens in each and every alternative that we might consider. That definitely gets ahead of the story, so first let’s understand what we should expect in LCDM.
Hierarchical Galaxy Formation in LCDM
LCDM predicts some things remarkably well but others not so much. The dark matter is well-behaved, responding only to gravity. Baryons, on the other hand, are messy – one has to worry about hydrodynamics in the gas, star formation, feedback, dust, and probably even magnetic fields. In a nutshell, LCDM simulations are very good at predicting the assembly of dark mass, but converting that into observational predictions relies on our incomplete knowledge of messy astrophysics. We know what the mass should be doing, but we don’t know so well how that translates to what we see. Mass good, light bad.
Starting with the assembly of mass, the first thing we learn is that the story of monolithic galaxy formation outlined above has to be wrong. Early density fluctuations start out tiny, even in dark matter. God didn’t plunk down island universes of galaxy mass then say “let there be galaxies!” The annoying initial conditions mean that little dark matter halos form first. These subsequently merge hierarchically to make ever bigger halos. Rather than top-down monolithic galaxy formation, we have the bottom-up hierarchical formation of dark matter halos.
The hierarchical agglomeration of dark matter halos into ever larger objects is often depicted as a merger tree. Here are four examples from the high resolution Illustris TNG50 simulation (Pillepich et al. 2019; Nelson et al. 2019).

The hierarchical assembly of mass is generic in CDM. Indeed, it is one of its most robust predictions. Dark matter halos start small, and grow larger by a succession of many mergers. This gradual agglomeration is slow: note how tiny the dark matter halos at z = 10 are.
Strictly speaking, it isn’t even meaningful to talk about a single galaxy over the span of a Hubble time. It is hard to avoid this mental trap: surely the Milky Way has always been the Milky Way? so one imagines its evolution over time. This is monolithic thinking. Hierarchically, “the galaxy” refers at best to the largest progenitor, the object that traces the left edge of the merger trees above. But the other protogalactic chunks that eventually merge together are as much part of the final galaxy as the progenitor that happens to be largest.
This complicated picture is complicated further by what we can see being stars, not mass. The luminosity we observe forms through a combination of in situ growth (star formation in the largest progenitor) and ex situ growth through merging. There is no reason for some preferred set of protogalaxies to form stars faster than the others (though of course there is some scatter about the mean), so presumably the light traces the mass of stars formed traces the underlying dark mass. Presumably.
That we should see lots of little protogalaxies at high redshift is nicely illustrated by this lookback cone from Yung et al (2022). Here the color and size of each point corresponds to the stellar mass. Massive objects are common at low redshift but become progressively rare at high redshift, petering out at z > 4 and basically absent at z = 10. This realization of the observable stellar mass tracks the assembly of dark mass seen in merger trees.

This is what we expect to see in LCDM: lots of small protogalaxies at high redshift; the building blocks of later galaxies that had not yet merged. The observation of galaxies much brighter than this at high redshift by JWST poses a fundamental challenge to the paradigm: mass appears not to be subdivided as expected. So it is entirely justifiable that people have been freaking out that what we see are bright galaxies that are apparently already massive. That shouldn’t happen; it wasn’t predicted to happen; how can this be happening?
That’s all background that is assumed knowledge for our ApJ paper, so we’re only now getting to its Figure 1. This combines one of the merger trees above with its stellar mass evolution. The left panel shows the assembly of dark mass; the right panel shows the growth of stellar mass in the largest progenitor. This is what we expect to see in observations.

Fig. 1 from McGaugh et al (2024): A merger tree for a model galaxy from the TNG50-1 simulation (Pillepich et al. 2019; Nelson et al. 2019, left panel) selected to have M∗ ≈ 9 × 10¹⁰ M⊙ at z = 0; i.e., the stellar mass of a local L∗ giant elliptical galaxy (Driver et al. 2022). Mass assembles hierarchically, starting from small halos at high redshift (bottom edge) with the largest progenitor traced along the left edge of the merger tree. The growth of stellar mass of the largest progenitor is shown in the right panel. This example (jagged line) is close to the median (dashed line) of comparable mass objects (Rodriguez-Gomez et al. 2016), and within the range of the scatter (the shaded band shows the 16th – 84th percentiles). A monolithic model that forms at zf = 10 and evolves with an exponentially declining star formation rate with τ = 1 Gyr (purple line) is shown for comparison. The latter model forms most of its stars earlier than occurs in the simulation.
For comparison, we also show the stellar mass growth of a monolithic model for a giant elliptical galaxy. This is the classic picture we had for such galaxies before we realized that galaxy formation had to be hierarchical. This particular monolithic model forms at zf = 10 and follows an exponential star formation rate with τ = 1 Gyr. It is one of the models published by Franck & McGaugh (2017). It is, in fact, the first model I asked Jay to construct when he started the project. Not because we expected it to best describe the data, as it turns out to do, but because the simple exponential model is a touchstone of stellar population modeling. It was a starter model: do this basic thing first to make sure you’re doing it right. We chose τ = 1 Gyr because that was the typical number bandied about for elliptical galaxies, and zf = 10 because that seemed ridiculously early for a massive galaxy to form. At the time we built the model, it was ludicrously early to imagine a massive galaxy would form, from an LCDM perspective. A formation redshift zf = 10 was, less than a decade ago, practically indistinguishable from the beginning of time, so we expected it to provide a limit that the data would not possibly approach.
In a remarkably short period, JWST has transformed z = 10 from inconceivable to run of the mill. I’m not going to go into the data yet – this all-theory post is already a lot – but to offer one spoiler: the data are consistent with this monolithic model. If we want to “fix” LCDM, we have to make the red line into the purple line for enough objects to explain the data. That proves to be challenging. But that’s moving the goalposts; the prediction was that we should see little protogalaxies at high redshift, not massive, monolith-style objects. Just look at the merger trees at z = 10!
Accelerated Structure Formation in MOND
In order to address these issues in MOND, we have to go back to the beginning. What is the evolution of a spherical region (a top-hat overdensity) that might collapse to form a galaxy? How does a spherical region under the influence of MOND evolve within an expanding universe?
The solution to this problem was first found by Felten (1984), who was trying to play the Newtonian cosmology trick in MOND. In conventional dynamics, one can solve the equation of motion for a point on the surface of a uniform sphere that is initially expanding and recover the essence of the Friedmann equation. It was reasonable to check if cosmology might be that simple in MOND. It was not. The appearance of a0 as a physical scale makes the solution scale-dependent: there is no general solution that one can imagine applies to the universe as a whole.
Felten reasonably saw this as a failure. There were, however, some appealing aspects of his solution. For one, there was no such thing as a critical density. All MOND universes would eventually recollapse irrespective of their density (in the absence of the repulsion provided by a cosmological constant). It could take a very long time, which depended on the density, but the ultimate fate was always the same. There was no special value of Ω, and hence no flatness problem. The latter obsessed people at the time, so I’m somewhat surprised that no one seems to have made this connection. Too soon*, I guess.
There it sat for many years, an obscure solution for an obscure theory to which no one gave credence. When I became interested in the problem a decade later, I started methodically checking all the classic results. I was surprised to find how many things we needed dark matter to explain were just as well (or better) explained by MOND. My exact quote was “surprised the bejeepers out of us.” So, what about galaxy formation?
I started with the top-hat overdensity, and had the epiphany that Felten had already obtained the solution. He had been trying to solve all of cosmology, which didn’t work. But he had solved the evolution of a spherical region that starts out expanding with the rest of the universe but subsequently collapses under the influence of MOND. The overdensity didn’t need to be large, it just needed to be in the low acceleration regime. Something like the red cycloidal line in the second plot above could happen in a finite time. But how long?
The solution depends on scale and needs to be solved numerically. I am not the greatest programmer, and I had a lot else on my plate at the time. I was in no rush, as I figured I was the only one working on it. This is usually a good assumption with MOND, but not in this case. Bob Sanders had had the same epiphany around the same time, which I discovered when I received his manuscript to referee. So all credit is due to Bob: he said these things first.
First, he noted that galaxy formation in MOND is still hierarchical. Small things form first. Crudely speaking, structure formation is very similar to the conventional case, but now the goose comes from the change in the force law rather than extra dark mass. MOND is nonlinear, so the whole process gets accelerated. To compare with the linear growth of CDM:

The net effect is the same. A cosmic web of large scale structure emerges. They look qualitatively similar, but everything happens faster in MOND. This is why observations have persistently revealed structures that are more massive and were in place earlier than expected in contemporaneous LCDM models.

In MOND, small objects like globular clusters form first, but galaxies of a range of masses all collapse on a relatively short cosmic timescale. How short? Let’s consider our typical 10¹¹ M☉ galaxy. Solving Felten’s equation for the evolution of a sphere numerically, peak expansion is reached after 300 Myr and collapse happens in a similar time. The whole galaxy is in place speedy quick, and the initial conditions don’t really matter: a uniform, initially expanding sphere in the low acceleration regime will behave this way. From our distant vantage point thirteen billion years later, the whole process looks almost monolithic (the purple line above) even though it is a chaotic hierarchical mess for the first few hundred million years (z > 14). In particular, it is easy to form half of the stellar mass early on: the mass is already assembled.
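To get a feel for that timescale without solving Felten’s full equation (which follows the expansion phase and the background cosmology; none of that is attempted here), one can at least check the deep-MOND free-fall time: a sphere collapsing from rest under the deep-MOND acceleration g = √(GMa0)/r takes t = r0√(π/(2√(GMa0))) to collapse. Here is a minimal sketch; the starting radii are chosen only for illustration, and the deep-MOND limit is assumed throughout (which holds here, since 10¹¹ M☉ at tens of kpc is well below a0):

```python
import numpy as np

G     = 4.301e-3       # pc (km/s)^2 / Msun
a0    = 3.7            # MOND scale, ~1.2e-10 m/s^2, in (km/s)^2 / pc
pc_km = 3.086e13       # km per pc
Myr_s = 3.156e13       # seconds per Myr

M = 1e11               # Msun of baryons -- in MOND that is all there is
K = np.sqrt(G * M * a0)     # (km/s)^2; note K**0.25 ~ 200 km/s, the flat rotation speed

def freefall_Myr(r0_kpc):
    """Deep-MOND collapse time from rest at r0, where the acceleration is sqrt(G*M*a0)/r."""
    t_s = (r0_kpc * 1e3) * np.sqrt(np.pi / (2.0 * K)) * pc_km
    return t_s / Myr_s

for r0 in [50, 140]:   # kpc; 140 kpc is the comoving 1.6 Mpc sphere at z = 10
    print(f"from rest at {r0} kpc: collapse in ~{freefall_Myr(r0):.0f} Myr")
# -> roughly 300 and 900 Myr: a few hundred Myr, not many Gyr
```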

This is what JWST sees: galaxies that are already massive when the universe is just half a billion years old. I’m sure I should say more but I’m exhausted now and you may be too, so I’m gonna stop here by noting that in 1998, when Bob Sanders predicted that “Objects of galaxy mass are the first virialized objects to form (by z=10),” the contemporaneous prediction of LCDM was that “present-day disc [galaxies] were assembled recently (at z<=1)” and “there is nothing above redshift 7.” One of these predictions has been realized. It is rare in science that such a clear a priori prediction comes true, let alone one that seemed so unreasonable at the time, and which took a quarter century to corroborate.
*I am not quite this old: I was still an undergraduate in 1984. I hadn’t even decided to be an astronomer at that point; I certainly hadn’t started following the literature. The first time I heard of MOND was in a graduate course taught by Doug Richstone in 1988. He only mentioned it in passing while talking about dark matter, writing the equation on the board and saying maybe it could be this. I recall staring at it for a long few seconds, then shaking my head and muttering “no way.” I then completely forgot about it, not thinking about it again until it came up in our data for low surface brightness galaxies. I expect most other professionals have the same initial reaction, which is fair. The test of character comes when it crops up in their data, as it is doing now for the high redshift galaxy community.
This was a really fantastic read.
It’s really not surprising that General Relativity failed to account for MOND behavior. GR was developed using the available data on very simple systems, with a precision unable to detect nonlinear effects – effects that manifest or emerge in very complex gravitational systems. Likewise, you can’t detect or predict the liquid properties of water by studying a few atoms of hydrogen and oxygen, or detect intelligence by studying a few neurons.
Some properties and behaviors are only apparent/observable after a certain complexity threshold is reached – not because some emergent property spontaneously appeared, but because the nonlinear effects present at the component level were beyond the precision of the available empirical evidence to detect unequivocally, as is still the case for MOND in wide binaries.
GR fails in very complex gravitational systems without introducing dark matter as the work of Stacy McGaugh has shown multiple times, the “results” of GR at universe scale complexity are even more doubtful.
Complexity is indeed a ubiquitous barrier.
Great gift for Christmas! Thanks.
Jean
Prof. McGaugh,
Hello Stacy,
What is meant by the criticism “MOND doesn’t have a cosmology”?
Clearly, Felten has had a pop at making one, in the style of Friedmann. And has made predictions that are being borne out in observation. A quick google search shows plenty of other studies. What’s missing?
This question has so many answers. We don’t know the MOND equivalent of the Friedmann equation; Felten’s solution may work for objects embedded in an expanding universe but it can’t encompass the entire universe as one must specify a scale. Even if it works for a scale so big that it encompasses the cosmic horizon, that’s still not satisfactory. That’d be like explaining a hint of curvature in the Earth’s horizon without knowing the whole planet is round.
Beyond that, MOND itself does not even begin to tell us the geometry of the universe. For that, you need a relativistic theory. As you note, there are indeed a number of suggestions for that, but it is far from obvious that any of them are entirely satisfactory. I like to think this would be a quickly solved problem if we expended on it even a fraction of the effort we’ve lavished on hunting for dark matter, but we don’t.
Losing dark matter is like losing a dear friend; most of the community is still in the denial and anger stages of grief. Since one can never see its invisible corpse, I fear we’ll be stuck here indefinitely.
The obvious, big elephant in the room is the assumption that General Relativity (GR) can be applied universally. In hindsight, this assumption is remarkably naive. Theories are almost always developed to describe simple systems, relying on the empirical evidence available for these systems. This evidence typically dismisses small anomalies in the data as negligible. However, it is precisely within these small anomalies that the seeds for the properties and behavior of higher hierarchical structures are found. At higher hierarchical levels, the energy densities of interactions are orders of magnitude lower than those in simpler systems. This significant difference in energy densities leads to decoupling: the properties and behavior of higher hierarchical levels appear decoupled or independent from the laws governing their discrete components.
No theory can escape this reality. For example, you cannot use the Standard Model to model weather patterns, nor can GR accurately model galactic rotation without introducing the ad hoc concept of dark matter—let alone address even higher hierarchical structures.
Assuming that a single theory, be it GR or MOND, can be used to describe multiple hierarchical levels in cosmology is a wrong assumption. Naive reductionism is intrinsically flawed.
Certainly this is not always or even mostly true — you need a theory of the atom to explain Brownian motion. That’s a scale factor of approximately a trillion.
Stacy, one question about MOND breaking GR geometry. Does this happen also in modified inertia? I was thinking that if inertia is modified instead of gravity, that doesn’t at all change the geometry of spacetime. Is that right?
I don’t know that MOND breaks GR geometry; I just don’t know what the geometry might be in an underlying theory that contains both.
That said, my intuition is the same as yours: while modified inertia seems anathema to GR, it is not obvious that the geometry changes in that case.
Good point, I meant it alters GR geometry slightly where it has not been measured yet.
Still trying to digest the last 3 or 4 posts.
This is where I go for NEW physics. The only action in town other than Sabine.
My day is “search for NOT EVEN WRONG”, Woit’s web site.
Then click on Sabine Hossenfelder to see what is doing!
Then click on your name for your latest post if any.
Not much going on in the Standard Model. So MOND is the only action in town.
Anyways Merry Christmas to all.
Happy Hunting.
S. Vik
There’s also Pavel Kroupa’s Dark Matter Crisis blog at
https://darkmattercrisis.wordpress.com/
Speaking of Not Even Wrong, I find interesting also the blog of Robert Wilson (Hidden Assumptions), although it’s rather mathy (group talk) at moments. He comments here from time to time and also on Peter Woit’s Not Even Wrong (when Peter is not dissatisfied with the math pointers that Robert gives…). It’s interesting to see the perspective of a group theorist (pure math) on the standard model / unification.
Great explainer of the new paper in Astrophysical Journal.
There have been different reactions to it by the science popularisers online. Becky Smethurst recently published a video (https://youtu.be/HJ8F0pfNTgM?si=HaJIra_KIaV32doP) about the paper, trying to muster any and all defences for dark matter. There were some comments that I found really revealing. At 8:40, she says “I think my colleagues and I would have found it very weird if that hadn’t been the case [that the new data are in conflict with LCDM], because you’ve got to remember that our best model of the universe has been developed to match the observations we have from the Hubble Space Telescope and ground-based telescopes that do these big galaxy surveys that can’t see as distant, as far back as JWST. So JWST was always going to shake things up, we just didn’t know how…”, and later, at 20:35, she adds: “what this demonstrates is how our best model of the universe, lambda CDM, has been calibrated on the observations we had before at lower redshifts, closer to us, with the likes of Hubble Space Telescope…”
Basically, she is admitting that LCDM works post-hoc to explain results by adjusting its parameters instead of making testable predictions, and what is more, she doesn’t seem to have any problem with that whatsoever.
What does MOND have to say about early supermassive black hole formation in high redshift galaxies?
MOND helps with this insofar as it makes big seeds fast – basically, the presence of big black holes scales with the galaxies in which they form. I think it is important to distinguish between providing this big initial mass, which MOND does help with, and forming the black hole itself. The latter step has to happen in the high acceleration regime pretty much by definition, so gets no help from MOND. But a large part of the issue is assembling enough mass in the first place, so making early black holes goes hand in hand with making big early galaxies.
There’s an underlying irony here:
Questioning the reality of dark matter inherently challenges General Relativity’s applicability across unlimited complexity scales. In the MOND regime, at very low accelerations or weak gravitational fields, GR seems to be inaccurate. If GR fails at low accelerations, it is only natural to question its accuracy at very high accelerations or in the presence of extremely strong gravitational fields. This, in turn, raises doubts about the existence of black holes—long considered a sacred cow in cosmology.
If challenging the existence of dark matter has faced significant resistance and censorship, questioning the reality of black holes has been outright blacklisted.
For example, see Wolfgang Kundt’s paper: “Astrophysics Without Black Holes, and Without Extragalactic Gamma-Ray Bursts.”
I think that’s exactly right. I didn’t see your point that LCDM works post-hoc is implied in what she says, but it’s spot on. She mentions wide binaries, and only Banik’s paper, which has since looked more questionable. She also said ‘MOND isn’t being swept under the carpet’, and I thought, well, just most of its success. Her take on the situation was biased, and she’s talking mainly to people who don’t know who to believe, which makes her feel able to talk that way. Sabine, by contrast, said ‘Webb Falsified Dark Matter Prediction – And No One Cares’, which is a lot nearer the truth. But regardless of who says what, there’s far more awareness of MOND this year, and directly from recent papers – people know it’s looking better than an underdog these days.
In many ways, this response follows the usual script (https://tritonstation.com/2022/02/08/a-script-for-every-observational-test/), which I think is largely a result of cognitive dissonance: evidence contradictory to deeply held beliefs just doesn’t compute. But there is also a lot of bias, with the wide binary results being a good example: Banik’s result is widely cited because it gets the “right” answer; other results are less well known.
I don’t see a way out of this mess without either detecting dark matter or falsifying the very notion, but notions are not falsifiable. Once convinced of the existence of invisible mass, how do we persuade ourselves we might have been wrong?
Perhaps wide binary data analysis will get good enough to pin it down. Another thing Becky Smethurst does is say GR has ‘passed every test’, without telling her viewers that LCDM includes GR, and that if one is weakened, so is the other.
This is a classic mindset and failure point. GR does indeed pass many tests, but MOND does not conflict with any of those. Nevertheless, many scientists interpret the unrelated tests of GR as making MOND impossible. The only place where they overlap is in the low acceleration regime, where the need to invoke dark matter *is* the failure.
At what point does the timescale of the Big Bang framework come into question?
Presumably it’s not science, if it can’t be falsified, but it seems the current cosmology can only be patched. Inflation, Dark Matter, Dark Energy are all rather significant gaps between theory and observation, yet there is literally no consideration within the field that might signify the general theory is flawed.
I keep pointing out that if intergalactic space were to expand, the speed of the light crossing it should logically increase, in order to remain constant, but that is ignored, because it can’t be refuted.
Given two metrics are being derived from the speed and spectrum of the same light and it is an expanding space and not tired light theory, speed is still the denominator.
The description is the universe expanded from about 22 million lightyears at the end of the Inflation stage, to some 48 billion lightyears now, but it took light 13.8 billion years to cross this expanding space. So what is light speed measuring, if not intergalactic space? As Einstein said, space is what you measure with a ruler and the ruler being used is still light speed. That makes it the denominator.
If cosmic redshift is an optical effect, compounding on itself, that would do away with the need for Dark Energy.
If we were to consider the possibility the centripetal effect of gravity is what produces the properties associated with mass and matter, such that mass is an effect of gravity, not the other way around, the issue of Dark Matter would dissipate.
As for Inflation, that is simply a way to explain away a background uniformity that would be inherent to an infinite universe, with the background radiation simply being ever further sources, harmonized by their infinity.
It would also explain those “tiny red dots” the Webb is finding, as ever further galaxies.
One way light does redshift over distance, apparently, is as multi spectrum packets, as the higher frequencies dissipate faster. Yet that would mean we are sampling a wave front and the quantification of light is an artifact of its detection and measurement. A loading, or threshold theory.
Google;
On the evolution of localized wave packets governed by a dissipative wave equation
C.I. Christov *
Department of Mathematics, University of Louisiana at Lafayette, P.O. Box 1010, Lafayette, LA 70504-1010, USA
A Challenge to Quantized Absorption by Experiment and Theory
By Eric S. Reiter, Unquantum Laboratory, eric@unquantum.net, Pacifica, CA. July, 2012.
That’s a lot more than can fit in one blog post, let alone a reply. I have chosen to restrict my considerations to objects embedded in an expanding universe, but indeed, one may wonder about the background cosmology: both the time-redshift relation and the geometry. For the t(z) relation I have adopted LCDM because it is empirically grounded in a number of observables, even if the underlying theory proves to be inadequate. That in itself is worth a lengthy post, and we don’t have good constraints at z > 2, but I think that LCDM is a convenient lower limit: plausible MOND-inspired cosmologies could have galaxies form even earlier/at higher redshift. The geometry is an even nuttier problem. The so-called little red dots are consistent with a size evolution of (1+z), which is a lot: they cannot plausibly transform from what we see at high z to anything we see at z=0, but neither can they simply go away. That makes me worry about very fundamental issues with the metric: if we were approaching this as a test, we would assume the sizes of galaxies do not evolve (much, and certainly not in this crazy fashion), and conclude that cosmology flunks the Tolman test. See Li 2023.
Still, alternatives to FLRW remain largely conjectural; see
https://tritonstation.com/2023/02/20/imagine-if-you-can/
I will gladly leave the actual science up to those with the background. Though I do see a basic psychological dynamic, where the signals we extract from the noise are what resonates and synchronizes with our prior knowledge. Yet this process goes, not only back to childhood memories and lessons learned, but as over-all sociological and cultural constructs. With religions as childhood memories and lessons of cultures, such that once theory becomes doctrine, it is carved in stone.
Consider that to the Ancients, gods were metaphors and democracy and republicanism originated in pantheistic cultures. I could delve much deeper into this aspect of the dynamic, but to keep it in the science realm, ask yourself how much quantum theory is instinctively derived from atomism, continuing through to string theory. That there has to be some irreducible “thing.”
Yet the reality seems more the interplay between such nodes and the networks giving rise to them, of which they become focal points.
With the essence of the node as synchronization and that of the network being harmonization. Think “entangled particles.” All the resonances and reverberations in the middle.
I find in many ways, that we, as linear goal seeking organisms in this cyclical, circular, reciprocal feedback generated reality, haven’t really come to terms with the implications of the earth being round, not flat.
That even in academia, the young have to cater to their elders to get ahead, so the old ideas can only be patched, not refuted, until the situation reaches the point of total breakdown. The feedback loops don’t have circuit breakers, until it all melts down.
For the first link I think you meant to link to
https://iopscience.iop.org/article/10.3847/2041-8213/acdb49
Thanks. That seems to be a similar paper. The problem is the one I tried to link to is a pdf I’ve downloaded enough times that my computer digs it up too fast for me to cut and paste the link. Here is the introduction;
“1. Introduction
The propagation of waves in linear dissipative systems is well studied but most of the investigations are concerned with the propagation of a single-frequency wave. On the other hand, in any of the practical situations, one is faced actually with a wave packet, albeit with a very narrow spread around the central frequency. This means that one should take a special care to separate the effects of dispersion and dissipation on the propagation of the wave packet from the similar effects on a single frequency signal.
The effect of dissipation of the propagation of wave packets seems important because their constitution can change during the evolution and these changes can be used to evaluate the dissipation.
Especially elegant is the theory of propagation of packets with Gaussian apodization function.”
The essential point;
“Eq. (12) shows that an initial distribution of the energy as function of k will change in time in the sense that the amplitudes of the shorter waves will diminish faster in time than the amplitudes of the longer waves. This will lead to redistribution of the amplitudes and to a change of the apodization function of a wave packet that is subject to evolution according to Jeffrey’s equation. Therefore a general shift of the central wave number towards longer waves (smaller wave numbers k) is to be expected. In the case of light, this is called ‘‘redshift’’. The quantitative values for the redshift for different apodization functions may differ. The most interesting case appears to be the Gaussian distribution of the packet and we focus in this short note on the said case.”
At first I thought your idea of light speed increasing crazy, but due to Stacy’s serious reaction I thought “let’s calculate how much it changes”. It appears that with H0 around 71.9 this would increase light speed by 1 meter per second every 45 years. Not easily noticed at least, but LIGO might.
The point isn’t so much that it could increase, but that if it doesn’t, then it invalidates the central premise of spacetime, which is based on light speed being constant in all frames.
They assume speed and spectrum as two different metrics. That space is expanding, based on the redshifted spectrum, but then denominate this expansion in terms of light speed.
It just doesn’t “add up.”
Redshift was originally assumed to be basic doppler effect, but when they realized it increases proportional to distance in all directions, it would either mean we are at the exact center of this expansion/universe, or that redshift is an optical effect. The only known cause at the time, by which it would be optical, would be some form of medium slowing it down, aka, “tired light,” yet there were no other distorting effects of this medium, so that tired light was dismissed and it was settled on that it was an effect of bending/expanding spacetime.
Which overlooks that central premise of spacetime.
There is no ‘center of expansion’ — everything is expanding away from everything else. Imagine a deflated balloon with dots on the surface. Inflate the balloon, and the dots get farther apart. An observer on any dot will see all the other dots moving farther away, and the farther they are away, the faster they are moving away. And yet none of the dots are ‘central’.
The constancy of the speed of light has been demonstrated by the Kennedy-Thorndike experiment, among others.
If there was a center of expansion, then basic doppler effect would explain it. That’s why this whole expanding space premise was invoked.
Maybe go back and read what I wrote.
Since this expansion is calibrated against the speed of light, what is light speed measuring, if not space?
To further clarify that thought, given the confusion;
Presuming you understand the relationship between denominator and numerator, which is apparently a deep mystery to the field of cosmology, under all the fuzzy logic, the speed of light is still being used as the denominator, while the spectrum, redshifted, is the numerator in the expanding universe model.
I’ll add another note on sociology. Every individual of every generation of every field has to make his or her own mistakes. In astronomy, we got to be relatively good at learning from them, perhaps by virtue of making so many. It is a field in which few things are so certain that they cannot be completely overturned, and we witness it happen repeatedly over the course of our careers. In contrast, particle physicists are trained that everything has been known to six decimals since Weinberg introduced the Standard Model in 1973. They’re not used to being wrong to the point of being ill-equipped to recognize it when it happens.
The specific field of dark matter was entirely astronomical to start, and largely ignored (Oort and Zwicky independently identified very different reasons for dark matter in the 1930s; no one seems to have taken it seriously until the 1970s). Even then, as the astronomical evidence became overwhelming, many physicists scoffed at the notion, dismissing it as likely a fallacy of some fuzzy-minded astronomers. Since then, it has become accepted, then interesting, then fashionable. During that time, the majority of workers in the field have transitioned from having astronomy training to physics training. As a consequence, the same mistakes have been repeated from a more primitive starting point. The conversations I have with many physicists today are typically (though not always) considerably less sophisticated than the conversations I had with astronomers in the 1990s.
I used to worry that the field would overtake me if it ever caught up. Now I wish it would just catch up to where we were thirty years ago.
This is an interview from 23 years ago, coming at physics from the applied side;
https://worrydream.com/refs/Mead_2001_-_Interview_(American_Spectator).html
“Once upon a time, Caltech’s Richard Feynman, Nobel Laureate leader of the last great generation of physicists, threw down the gauntlet to anyone rash enough to doubt the fundamental weirdness, the quark-boson-muon-strewn amusement park landscape of late 20th-century quantum physics. “Things on a very small scale behave like nothing you have direct experience about. They do not behave like waves. They do not behave like particles …or like anything you have ever seen. Get used to it.”
Carver Mead never has.
As Gordon and Betty Moore Professor of Engineering and Applied Science at Caltech, Mead was Feynman’s student, colleague and collaborator, as well as Silicon Valley’s physicist in residence and leading intellectual. He picks up Feynman’s challenge in a new book, Collective Electrodynamics (MIT Press), declaring that a physics that does not make sense, that defies human intuition, is obscurantist: It balks thought and intellectual progress. It blocks the light of the age.”
“a physics that does not make sense, that defies human intuition, is obscurantist”
This has been the case for any physics that challenges orthodoxy—such as MOND, according to proponents of dark matter.
The establishment loves well behaved subjects, even imaginary ones like dark matter.
That goes to the psychology, which needs that totem at the center of the village, grain of sand at the center of the pearl, eye of the storm, locus of structure. Otherwise it breaks down and scatters to the winds, prevailing forces. Tower of Babel. Structure is inherently centripetal. The signal coalesces in, as the noise is radiated out. Like galaxies. Gravity and light, as excess energy, thus information.
As mobile organisms, this sentient interface our body has with its situation functions as a sequence of perceptions, in order to navigate, so our sense of time is the present going past to future, yet the evident reality is that activity and the resulting change turns future to past. Tomorrow becomes yesterday, because the earth turns.
There is no dimension of time, because the past is consumed by the present, to inform and drive it. Causality and conservation of energy. Cause becomes effect.
Energy is conserved, because it manifests this presence, creating time, temperature, pressure, color and sound, as frequencies and amplitudes, rates and degrees.
The energy goes past to future, because the patterns generated come and go, future to past. Energy drives the wave, the fluctuations rise and fall. No tiny strings necessary.
Consciousness also goes past to future, while the perceptions, emotions and thoughts giving it form and structure go future to past.
Suggesting consciousness manifests as energy. Though it is the digestive system processing the energy and feeding the flame, while the nervous system sorts the patterns, signals from the noise. Coalescing our sense of self, as cycles of expansion and consolidation.
Thus the intellectual obsession with the patterns, rather than the processes. The seeming “things” we can triangulate.
As a side topic, a fascinating new article on the nature of the cosmological constant: https://doi.org/10.1093/mnrasl/slae112
Yes. I need to give that one a thorough read.
Thank you for a very nice read.
What is gravity? Something separate from matter? Presumably it must be since matter exists in so many different forms (different types of quark, different types of lepton, different types of gluon, different types of boson). In which case gravity, whatever it is, was in the Cosmic Egg alongside all these other fundamental particles and just happened to be compatible with them and with dark matter, just as all these particles just happened to be compatible with each other. Dark matter had the ‘property’ of being able to interact gravitationally both with itself and ordinary matter only in the sense that it shared with ordinary matter the undefined property of being susceptible to something external to it. Is that right?
I’m struggling to understand how belief in the eternal existence of eighteen or so fundamental particles that all happened to be compatible with each other is inherently more creditworthy than a hypothesis that is upfront about the miraculous, e.g. ex nihilo creation. And we are talking about the chance eternal existence of quadrillions of identical particles, each in one of these indivisible eighteen categories. It’s difficult to conceive of the eternal co-existence of two identical particles, let alone quadrillions.
“Surely the Milky Way has always been the Milky Way? so one imagines its evolution over time. This is monolithic thinking.”
MW star formation was rapid (Xiang & Rix 2022), peaking around 11 Gyr ago (equivalent to z = 2.6), then rapidly dropped off. This was approximately when star formation peaked in other galaxies (Cochrane et al. 2023) – it’s not a story of exponential SF. Some of the stars in the outer disc are younger, less than 9 Gyr, especially in the arms, but the majority date to 11–14 Gyr, possibly even 13–14 Gyr (Nepal et al. 2024).
So the MW, the galaxy we know best, does not support a picture of gradual assembly. Star formation (or the merging of small units converging on this cosmic centre of gravity ex situ/ex machina) is front-loaded. Star formation rates were much higher than 2 M☉/yr in the past.
Order(s)-of-magnitude higher SFRs are at a time when cosmologists are already having to postulate, incredibly, up to 100% rates of conversion of gas to stars. This is surely nonsense when everything in the early universe is conspiring to thwart star formation. Even as late as z = 1.4, half of all stars formed within 1 kpc of the nucleus (Nelson et al. 2016). As Stacy’s article notes, because of stellar feedback, it was more difficult, not easier, to cool molecular clouds. Not to mention radiation from around the SMBHs, which relative to stellar mass were ‘obese’ in the early universe.
This is exactly the level of detail that I didn’t want to get into, as it isn’t necessary for anything I said in this post. Giant ellipticals are galaxies that formed most of their stars early, but there are many types of galaxies (especially Irregulars) that took their sweet time getting around to making stars (DDO 154 is still 95% gas, and has a rather youthful stellar population).
As for the Milky Way, its past star formation rate cannot have been vastly higher than the current 2 per year, or we’d have a lot more stars than we do (the past average being 4). The detailed history of the star formation rate can and surely does oscillate up and down, and there could certainly have been an early burst. HOWEVER, one must read the papers you cite with caution. They are mostly talking about stars already selected to be old. For example, Nepal+ do indeed say “The majority of these stars are predominantly old (> 10 Gyr), with over 50% being older than 13 Gyr” BUT they are only talking about “metal-poor stars in thin-disc orbits.” Those are not the majority of stars. Indeed, they are so rare that they start by asserting their existence. So all they’re saying is that the metal poor stars orbiting in the thin disk are also old. That is interesting for how the Milky Way formed, but it is a very limited statement that does not apply to more recent star formation.
The universe-averaged star formation rate did indeed peak a long time ago; I expect most of those stars are in giant ellipticals. But there are also Irregular galaxies that keep forming stars, and look likely to continue forming stars indefinitely far into the future, e.g., https://arxiv.org/abs/1710.11236. So whatever picture we come up with has to accommodate a large diversity of star formation histories. I suspect this has more to do with astrophysics than mass assembly.
One of the many wobbly pillars of LCDM is the assumption that at the centre of all major galaxies there is a SMBH, and that, by a process of accretion, these have grown over time. However, the evidence shows that they shrink! The average black hole mass of 38 quasars in the range z = 7.5 to z = 5.8 was 4.6 billion suns (Farina et al. 2022), greater than most black holes in the near universe. (M87, the largest galaxy in the Virgo Cluster, is comparable, but as Steinhardt et al. 2016 note, BCGs are also ‘impossibly early’.) A survey encompassing 750,000 quasars over the period z = 5 to z = 0.1 showed average black hole mass decreasing from round 2 billion suns to 200 million (Wu & Shen 2022). The majority of SMBHs in the near universe are smaller than 200 million (Reines & Volonteri 2015). If BHs are not growing, the idea that they are surrounded by ‘accretion discs’ needs to be revisited. If they are shrinking, they cannot be black holes.
Forming SMBHs early is indeed a thorny problem. I’m less concerned with the lower mass at lower redshift; I suspect this is a selection effect. We can see the smaller ones nearby, but only the big boomers at high redshift.
Certainly it is hard to imagine big black holes shrinking in mass over time.
God didn’t plonk down island universes of galaxy mass then say “let there be galaxies!”
But that’s almost how it’s beginning to look if one takes on board the evidence for a static universe (e.g. LaViolette 2021, Li et al. 2023 [linked above]). If the universe is not expanding, then the mass and size of distant objects has been hugely underestimated. The BH/stellar mass ratio increases back in time to something approaching 50:50, and if one extrapolates back further, one reaches a point where all we have is black holes. But if they are not black holes, they must be spinning giant stars, which become galaxies as they expel mass and the expelled mass atomises into stars. Mergers are then in reality splitting events.
The two most distinct features of galaxies are that light/energy radiates out, while structure coalesces in.
What are black holes?
What if they are simply vortices? Like the eye of a hurricane, or ocean eddies;
https://scitechdaily.com/ocean-eddies-mathematically-equivalent-black-holes/
It would seem anything actually falling in, is shot out the poles as quasars. Which are like giant lasers and lasers are synchronized light waves.
Gravity, whatever it is, is a centripetal dynamic. So is synchronization.
What if physics is just barking up the wrong tree?
https://www.quantamagazine.org/physicists-discover-exotic-patterns-of-synchronization-20190404/
“Felten had solved the evolution of a spherical region that starts out expanding with the rest of the universe but subsequently collapses under the influence of MOND. The overdensity didn’t need to be large, it just needed to be in the low acceleration regime.”
Does one have a low acceleration regime between z = 100 and z = 6 when everything is flying apart at astronomical speeds? The initial hurdle is getting atoms to clump. And what happens when locally the hypothesised clumping increases the acceleration? In spiral galaxies acceleration declines radially outward, but here one is positing the opposite – infalling matter which soon surpasses the low acceleration threshold. And (I have asked this before), how does one deal with galaxies getting less compact over time? They are massive from the start, and having established a superdense gravity well, they apparently lose their grip. They suggest outfalling, not infalling!
But as above, there is no evidence that BHs grow over time even within the expanding universe model. There’s no evidence, either, that stellar mass grows after z = 7. It seems to me the whole idea of objects growing in mass over time is open to question.
Several distinct points here.
Acceleration regimes: crudely speaking, the universe enters the low acceleration regime (assuming a0 remains constant) at a predictable rate. The baryons cannot start to clump until they are released by the photons. After decoupling, they find themselves suddenly in the low acceleration regime. The rug has been pulled out from under them, so structure formation proceeds at an accelerated pace. As baryons condense, they may well enter the high acceleration regime, but by then galaxy formation is already well under way. MOND is the trigger for speedy clumping; the rest is astrophysics.
Size evolution: this is indeed a thorny problem, as I’ve answered before. I agree that, once established, these deep gravity wells are not going to lose their grip. These objects are practically impossible to destroy, yet have no local analogs. Where did they go? This seems so problematic to me that it makes me worry (as I commented somewhere above) that we have the metric wrong. The inferred size evolution goes as (1+z); it shows the same trend that Li identified in radio sources. So maybe we have the angular diameter distances wrong and there is no size evolution. This would be a failure of cosmology so profound that even I struggle to contemplate it despite having worried for many years that it might be broken.
I feel like we’ve learned that the answer to everything is “42” and are just starting to find out that the question is “what do you get when you multiply six by nine?”
“This would be a failure of cosmology so profound that even I struggle to contemplate it despite having worried for many years that it might be broken.”
It would be like how Newtonian mechanics failed in the end of the 19th century. Or how the geocentric model of the solar system failed in the 1600s.
It isn’t often that new observations comes in that shows that the predominant models used by scientists are just completely wrong, but the phenomenon has happened from time to time throughout history.
Indeed. MOND already approaches that level, and suggests there could be a lot more. This would be a lot more.
Few places challenge mainstream scientific thinking as seriously and thoughtfully as this one, which is why so many eagerly await its updates. While following the herd is easy and safe, dissenting ideas are both stimulating and thought-provoking. There’s immense satisfaction in demonstrating that the prevailing experts of the day may have been mistaken for a long time. History has shown time and again that the so-called ‘consensus’ is often fundamentally flawed.
Thanks.
Thanks. I did not set out to challenge mainstream scientific thinking; I’ve just followed where the data have led.
> maybe we have the angular diameter distances wrong and there is no size evolution
MOND can be successful in a theory of non-expanding space.
The “debunking” of tired light is as poorly thought out as the “debunking” of MOND. https://cosmology.info/redshift/rebut/errorswright.html
It would seem an optical effect would be the most obvious cause of redshift, given the many fudge factors to hold the expanding universe model together, but there is another possibility, than tired light.
Apparently multi spectrum “packets” will redshift over distance, as the higher frequencies dissipate faster.
Yet that would mean a wave front is being sampled, rather than individual, single spectrum photons traveling billions of lightyears. Which goes against one of the truly high holies of modern physics. That measurement is the ultimate arbiter and what we measure are quanta of light. Therefore these quanta must be foundational, rather than an artifact of light interacting with our physically complex measuring devices. A loading or threshold theory.
One of the various patches to current theory is Dark Energy. Which is required to explain the “bend” in the rate of redshift.
The rate of redshift increases proportional to distance, so it was assumed that it gradually, consistently slowed over the age of the universe, but what was discovered, by Perlmutter, et al, in 1999, was the rate actually dropped off precipitously, then flattened out. So in ballistics terminology, it was like the universe had been shot out of a cannon, then once it slowed enough, a rocket motor kicked in to sustain it. The Big Bang to explain the initial blast, with Dark Energy to explain the closer, more gradual effect.
Yet if we look at it from our point of view out, yes, it starts off gradually, but that upward curvature is it basically going parabolic. Which would be much more easily explained by an optical effect compounding on itself.
Thank you for this wonderful post.
How sure are the measurements of clusters over time? Do we truly know with much certainty that clusters are ‘stable’, i.e. that they don’t slowly increase in size but not in mass?
Just thinking, in my worldview such a phenomenon would be quite logical, but in MOND it might be nice as well (for understanding clusters’ oddly doubled a0-behaviour).
We always assume a spherical cow that’s in equilibrium, but there’s no such thing as a cluster that’s observed to be either. So yes, this is a legitimate concern, as are temperature variations in the X-ray gas and other details that matter to the mass determinations: conventional measures (dynamics vs. X-rays vs. lensing) are not always consistent. That said, I’ve never seen an effect of an amplitude that would make the problem for MOND go away. If I thought I did, I would most assuredly report it.
This indeed remains puzzling. I’d love to hear your take on what I’ve written up:
https://continentalhotspot.com/2024/08/16/26-does-the-bullet-cluster-disprove-mond/
https://continentalhotspot.com/2024/08/20/27-what-is-the-mond-cluster-conundrum/
These are fantastic.
“… long time to build a massive galaxy in LCDM … but it happens much faster in MOND.” Convince some of the string theorists that they need to explain MOND in terms of MOND inertia and/or a MONDian 5th force and/or a failure of Einstein’s equivalence principle.
“Resistance is not futile” 😀
Was somewhat worried this beacon in a garbage dump of mediocrity and party-lines had gone out. Many thoughts but maybe later. For now; “Merry Christmas” on a cosmastro site? When a date, if you absolutely have to mark dates, is just 2-3 days prior? Properly defined and with far wider sociological and psychological meaning.
Anyway, just marking the bush because this forest will be of great historical significance and I, of course, want to be at least infinitesimally immortal too.
Thanks again Stacy for keeping the lights on.