It is common to come across statements like “There is overwhelming astrophysical and cosmological evidence that most of the matter in our Universe is dark matter.” This is a gross oversimplification. The astronomical data that indicate the existence of acceleration discrepancies also test the ideas we come up with to explain them. I never considered MOND until I was persuaded by the data that there were serious problems with its interpretation in terms of dark matter.

The community seems to react to problems with the dark matter interpretation in one of several ways. Physicists often seem to simply ignore them, presuming that any problems are mere astronomical details that aren’t relevant to fundamental physics. Among more serious scientists, there is a tendency to bicker over solutions, settle on something (satisfactory or not), then forget that there was ever a problem.

Benoit Famaey and I wrote a long review for Living Reviews in Relativity about a decade ago. In it, we listed some of the problems that afflicted LCDM. It is instructive to review what those were, and examine what progress has been made. The following is based on section 4 of the review. I will skip over the discussion of coincidences, which remain an issue, to focus on specific astronomical problems.

Unobserved predictions

A problem for LCDM, and indeed for any theory, is when it makes predictions that are not confirmed. Here is a list of challenges stemming from observational reality deviating from the expectations of LCDM that we identified in our review, together with an assessment of whether they remain a concern.
The bulk flow challenge
Peculiar velocities of galaxy clusters are predicted to be on the order of 200 km/s in the ΛCDM model: as massive, recently formed objects, they should be nearly at rest with respect to the frame of the cosmic microwave background. Instead, they are observed to have bulk flows of order 1000 km/s.

This appears to remain a problem, and is related to the high collision speeds of objects like the bullet cluster, which basically shouldn’t exist.

The high-z clusters challenge
Structure formation is reputed to be one of the greatest strengths of LCDM, but the observers’ experience has consistently been to find more structure in place earlier than expected. This goes back at least to the 1987 CfA redshift survey stick man figure, which may seem normal now but surprised the bejeepers out of us at the time. It also includes clusters of galaxies, which appear at higher redshift than they should. At the time, we pointed out XMMU J2235.3-2557 with a mass of ∼ 4 × 10^14 M⊙ at z = 1.4 as being very surprising.

More recently we have El Gordo, so this remains a problem.

The Local Void challenge
Peebles has been pointing out for a long time that voids are more empty than they should be, and do not contain the population of galaxies expected in LCDM. They’re too normal, too big, and gee it would help if structure formed faster. In our review, we pointed out that the “Local Void” hosts only 3 galaxies, far fewer than the ∼ 20 expected for a typical similar void in ΛCDM.

I am not seeing much in the literature in the way of updates, so I guess this one has been forgotten and remains a problem.

The missing satellites challenge
LCDM predicts that there are many subhalos in every galactic halo, and one would naturally expect each of these to host a dwarf satellite galaxy. This is manifestly not the case: while galaxies like the Milky Way do have dwarf satellites, they number in the dozens when there should be thousands of subhalos.

The trick with this test is mapping the predicted number of halos to the corresponding galaxies that inhabit them. If there is a nonlinear relation between mass and light, then there can be fewer (or more) dwarf galaxies than halos. People seem to have decided that this problem has been solved.

It is not clear to me how the solutions map to the (contemporaneous with our review) Too Big To Fail problem in which the most massive predicted subhaloes are incompatible with hosting any of the known Milky Way satellites. It isn’t a simple nonlinearity in mass-to-light; some biggish subhalos simply don’t host galaxies, apparently, while many smaller ones do. That doesn’t make sense in terms of the many mass-dependent mechanisms that are invoked to suppress dwarf galaxy formation. Nevertheless, we are assured that it all works out.

The satellites phase-space correlation challenge
This is also known as the planes of satellites problem. At the time of our review, it had recently been recognized that the satellite galaxies of the Milky Way are observed to correlate in phase-space, lying in a seemingly rotation-supported disk. This is pretty much the opposite of what one expects in LCDM, in which subhalos are on randomly oriented, radial orbits.

The problem has gotten worse, with more planes now being known around Andromeda and Centaurus A and other galaxies. There has been a steady stream of papers asserting that this is not a problem, but the “solution” seems to be to declare planes to be “common” if their incidence in simulations is a few percent. That is, they seem to agree with the observers who point out that this is a problem, and simply declare it not to be a problem.

The cusp-core challenge
The cusp-core problem is that cold dark matter halos are predicted to have cuspy central regions in which the density of dark matter rises continuously towards their centers, while fitting a dark matter mass distribution to observed galaxies prefers cored halos with a roughly constant density within some finite radius. This has a long history. Observers traditionally used the pseudoisothermal halo profile (with a constant density core) to fit rotation curve data. This was the standard model for a decade before CDM simulations predicted the presence of a central cusp. The pseudoisothermal halo continues to provide a better description of the data. The initial reaction of the theoretical community was to blame the data for not conforming to their predictions: they came up with a series of lame excuses (beam smearing, slit misplacement) for why the data were wrong. Serial improvements in the quality of data showed that these ideas were wrong, and effort switched from reality denial to model modification.
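The cusp vs. core distinction is easy to see in the two profiles just mentioned. A quick sketch, using the standard pseudoisothermal and NFW functional forms with illustrative unit-scale parameters, comparing their inner logarithmic density slopes:

```python
import numpy as np

def rho_iso(r, rho0=1.0, rc=1.0):
    # Pseudoisothermal halo: density flattens to rho0 inside the core radius rc
    return rho0 / (1.0 + (r / rc) ** 2)

def rho_nfw(r, rhos=1.0, rs=1.0):
    # NFW halo: density keeps rising as ~1/r toward the center (a cusp)
    x = r / rs
    return rhos / (x * (1.0 + x) ** 2)

def log_slope(rho, r, eps=1e-4):
    # Numerical logarithmic slope d ln(rho) / d ln(r)
    return (np.log(rho(r * (1 + eps))) - np.log(rho(r * (1 - eps)))) / (
        np.log(1 + eps) - np.log(1 - eps)
    )

r = 1e-3  # deep inside the halo, in units of the scale radius
print(log_slope(rho_iso, r))  # ~ 0: a core
print(log_slope(rho_nfw, r))  # ~ -1: a cusp
```

The observational statement of the problem is just that fits to rotation curves prefer the first behavior while CDM simulations predict the second.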

People generally seem to think this problem is solved through the use of baryon feedback to erase the cusps from galaxy halos. I do not find these explanations satisfactory, as they require a just-so fine-tuning to get things right. More generally, this is just one aspect of the challenge presented by galaxy kinematic data: this is what happens if you insist on fitting dark matter halos to data that looks like what MOND predicts. Lots of people seem to think that explaining the cusp-core problem solves everything, but this is just one piece of a more general problem, which is not restricted to the central regions. Ultimately, the question remains why MOND works at all in a universe run by dark matter.

I mention all this because it is the prototypical example of why one should take the claims of theorists to have solved a problem with a huge grain of salt. Here, the problem has been redefined into something more limited, then the limited problem has been solved in a seemingly-plausible yet unconvincing way, victory is declared, and the original, more difficult problem (MOND works when it should not) is forgotten or considered to be solved by extension.

The angular momentum challenge
During galaxy formation, the baryons sink to the centers of their dark matter halos. A persistent idea is that they spin up as they do so (like a figure skater pulling her arms in), ultimately establishing a rotationally supported equilibrium in which the galaxy disk is around ten or twenty times smaller than the dark matter halo that birthed it, depending on the initial spin of the halo. This is a seductively simple picture that still has many adherents despite never having really worked. In live simulations, in which baryonic and dark matter particles interact, there is a net transfer of angular momentum from the baryonic disk to the dark halo. This results in simulated disks being much too small.

This problem is solved by invoking just-so feedback again. Whether the feedback one needs to solve this problem is consistent with the feedback one needs to solve the cusp-core problem is unclear, in large part because different groups have different implementations of feedback that all do different things. At most one of them can be right; given familiarity with the approximations involved, a more likely number is zero.

The pure disk challenge
Structure forms hierarchically in CDM: small galaxies merge into larger ones. This process is hostile to the existence of dynamically cold, rotating disks, preferring instead to construct dynamically hot, spheroidal galaxies. All the merging destroys disks. Yet spiral galaxies are ubiquitous, and many late type galaxies have no central bulge component at all. At some point it was recognized that the existence of quiescent disks didn’t make a whole lot of sense in LCDM. To form such things, one needs to let gas dissipate and settle into a plane without getting torqued and bombarded by lots of lumps falling onto it from random directions. Indeed, it proved difficult to form large, bulgeless, thin disk galaxies in simulations.

The solution seems to be just-so feedback again, though I don’t see how that can preclude the dynamical chaos caused by merging dark matter halos regardless of what the baryons do.

The stability challenge
One of the early indications of the need for spiral galaxies to be embedded in dark matter halos was the stability of disks. Thin, dynamically cold spiral disks are everywhere around us, yet Newton can’t hold them together by himself: simulated spirals self destruct on a short timescale (a few orbits). A dark matter halo precludes this from happening by counterbalancing the self-gravity of the disk. This is a somewhat fine-tuned situation: too little halo, and a disk goes unstable; too much and disk self-gravity is suppressed – and spiral arms and bars along with it.

I recognized this as a potential test early on. Dark matter halos tend to over-stabilize low surface density disks against the formation of bars and spirals. You need a lot of dark matter to explain the rotation curve, but not so much as to preclude spiral structure. These requirements can be contradictory, and the tension I anticipated long ago has been realized in subsequent analyses.

The low surface brightness spiral F568-1 (left) and its rotation curve (right). The heavy line indicates the stellar disk mass required to sustain the observed spiral arms; the light line shows what is reasonable for a normal stellar population for which the galaxy is consistent with the BTFR and RAR. We can’t have it both ways; this is the predicted contradiction to invoking dark matter to explain both disk stability and kinematics.

I’m not aware of this problem being addressed in the context of cold dark matter models, much less solved. The problem is very much present in modern hydrodynamical simulations, as illustrated by this figure from the enormous review by Banik & Zhao:

The pattern speeds of bars as observed and simulated. Real bars are fast (R = 1) while simulated bars are slow (R > 2) due to the excessive dynamical friction from cuspy dark matter halos. (Fig. 21 from Banik & Zhao 2022).

The missing baryons challenge
The cosmic fraction of baryons – the ratio of normal matter to dark matter – is well known (16 ± 1%). One might reasonably expect individual CDM halos to be in possession of this universal baryon fraction: the sum of the stars and gas in a galaxy should be 16% of the total, mostly dark mass. However, most objects fall well short of this mark, the only exception being the most massive clusters of galaxies. So where are all the baryons?

The answer seems to be that we don’t have to answer that. Initially, the problem was overcooling: low mass galaxies should turn more of their baryons into stars than is observed. Feedback was invoked to prevent that, and it seems to be widely accepted that feedback from those stars that do form heats much of the surrounding gas so it remains mixed in with the halo in some conveniently unobservable form, or that the feedback is so vigorous that it expels the excess baryons entirely. That the observed baryon fraction declines with declining mass is attributed to the lesser potential wells of smaller galaxies not being able to hang on to their baryons as well – they are more readily expelled. That sounds reasonable at a hand-waving level, but getting it right quantitatively presents a fine-tuning problem: the observed baryon fraction correlates strongly with mass with practically no scatter. One would expect feedback to be rather stochastic and to result in a lot of scatter, and any such scatter would propagate straight into the Tully-Fisher relation, which has practically no scatter. This fine-tuning problem is addressed by ignoring it.
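The propagation of scatter is easy to make quantitative with a toy calculation. Assuming the naive LCDM scalings V ∝ M_vir^(1/3) and M_b = f_b·M_vir, any scatter in log f_b maps dex-for-dex into the Tully-Fisher residuals; the 0.2 dex scatter used below is just an illustrative guess for how stochastic feedback might behave, not a measured value:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy LCDM scalings: V ~ M_vir^(1/3) and M_b = f_b * M_vir, so at fixed
# rotation speed V, scatter in log f_b maps dex-for-dex into scatter in
# log M_b, i.e., into the baryonic Tully-Fisher relation.
n = 100_000
sigma_fb = 0.2  # dex; an illustrative guess for stochastic feedback
log_Mvir = rng.uniform(10, 13, n)          # halo masses (log solar masses)
log_fb = np.log10(0.16) + rng.normal(0.0, sigma_fb, n)
log_Mb = log_Mvir + log_fb                 # baryonic mass actually retained
log_V3 = log_Mvir                          # 3*log V, up to a constant

resid = log_Mb - log_V3  # Tully-Fisher residual at fixed V (= log f_b here)
print(np.std(resid))     # ~ 0.2 dex, versus the tiny observed BTFR scatter
```

The point of the sketch is that there is no mechanism in it to suppress the feedback scatter; whatever scatter feedback imposes on the baryon fraction shows up, undiminished, in the Tully-Fisher relation.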

The more things change

So those are the things that concerned us a decade ago. Looking back on them, there has been some progress on some items and less on others. Being generous, I would say there has at least been progress on the missing satellite problem, cusp-core, angular momentum, and pure disks. There has been no perceptible progress on the other problems, some of which (high-z clusters, disk stability) have gotten worse.

This is all written in the context of dark matter, with only passing reference to MOND. How does MOND fare on these same issues? MOND is good at making things move fast; it naturally predicts the scale of the bulk flows. It also predicted early structure formation, and is good at sweeping the voids clean. It has nothing to say about missing satellites: there are no subhalos that might be populated with dwarfs in MOND, so the question doesn’t arise. It might provide an explanation for the planes of satellites, but I am underwhelmed by this idea (or any others that I’ve heard for this particular problem).

MOND is the underlying cause of the cusp-core problem, which arises entirely from trying to fit dark matter halos to galaxies that obey MOND. MOND suffers no angular momentum problem; what you see is what you get. It is noteworthy that angular momentum is not an additional free parameter, as there is no dark component with an unspecified quantity of it; it is specified entirely by the observed distribution of baryons and their motions. Similarly, making pure disks is not a problem for MOND. One can have hierarchical structure formation, but it is not required to the degree that it wipes out nascent disks in the way it did in LCDM simulations before steps were taken to make them stop doing that.

Disk stability in MOND stems from the longer range of the force law rather than from piling on dark matter; it is comparable for high surface brightness galaxies in both theories, but readily distinguishable for low surface brightness galaxies. This test clearly prefers MOND. Finally, the missing baryon problem doesn’t really pertain in MOND. Objects just have the baryons they have; only in rich clusters of galaxies is there a residual missing baryon problem (albeit a serious one!).
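Several of these MOND successes reduce to one relation, the baryonic Tully-Fisher relation V_f⁴ = G·M_b·a0, which is simple enough to evaluate directly. The baryonic mass below is just an illustrative Milky-Way-ish number, not a measurement:

```python
# Baryonic Tully-Fisher relation as predicted by MOND: V_f^4 = G * M_b * a0
G = 6.674e-11    # m^3 kg^-1 s^-2
a0 = 1.2e-10     # m s^-2, Milgrom's acceleration scale
Msun = 1.989e30  # kg

def v_flat(mb_solar):
    """Flat rotation speed in km/s for a baryonic mass in solar masses."""
    return (G * mb_solar * Msun * a0) ** 0.25 / 1e3

# An illustrative Milky-Way-ish baryonic mass:
print(v_flat(6e10))  # ~ 180 km/s
```

Note that V ∝ M_b^(1/4), so a factor of 16 in baryonic mass only doubles the flat rotation speed, and there is no scatter to hide: the relation has no free parameter per galaxy.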

At a conservative count, that is four distinct items that have nothing to do with rotation curves where MOND performs better than LCDM. But go ahead, tell me again how MOND only explains rotation curves and nothing else.


This was basically just section 4.2 of the review. Section 4.3 was about unexpected observations – observations that were surprising in the context of LCDM. I think this post has been long enough, so I won’t go there except to say that these unexpected things were either predicted a priori by MOND, or follow so naturally from it that they could have been if the question had been posed. So it’s not just that MOND explains some things better than dark matter, it’s that it correctly predicted in advance things that were not predicted by dark matter, and that are often not well-explained by it.

The situation remains incommensurate.

34 thoughts on “Checking in on Troubles with Dark Matter”

  1. Even informally among my colleagues, these kinds of vague and general questions are sometimes scoffed at a bit, but I can’t help but probe: do you get the sense that these problems can all be addressed by MOND (or by some appropriate covariant theory of modified gravity in general), or are there larger mysteries lurking beyond the edges of some of these questions? The more people start considering possibilities of Hubble bubbles and local underdensities and such, the more I get curious about the fearful (or exciting) possibility that there’s something horribly, critically wrong with our entire cosmology—not that I have the slightest idea what it could be.


    1. I have wished to post this for a long time here: Win, lose or draw for MOND, there IS something horribly wrong with our current understanding of cosmology and/or particle physics. If MOND is right, there have to be vast unknown realms of basic gravitational dynamics that presently lack theoretical description altogether. You can’t just patch a0 onto the existing physics and, bingo, mission completed (I guess Milgrom, if correct, could be compared to Mendeleev and the periodic table – seeing a pattern that is striking but not having the theoretical tools – and their experimental foundations – to explain it).
      If dark matter exists, on the other hand, it almost certainly would have to consist of particles that are not described by the enormously successful standard model of particle physics. Same here: the particle, if found, can’t just be patched onto a standard model that refuses to predict it. Everyone can see that something is missing, but maybe some people don’t want to think it’s the tip of an iceberg.


      1. Yes.
        That Milgrom’s formula works at all is telling us something profound. The most obvious interpretation is about gravitational dynamics in the way you describe. Maybe it is telling us something about the nature of dark matter, but very few people seem to “get” this. Khoury and Blanchet are exceptions, but most particle physicists seem to think it suffices to have a new particle that provides more mass. It does not – that is a 1980s level of understanding of the problem. Not only do we need something outside the Standard Model of particle physics, we need something that inevitably leads to MONDian behavior. That is an obvious, minimum requirement for any model to be successful, just as fitting the CMB is. But hundreds if not thousands of theorists are building models in merry obliviousness to this absolute astrophysical requirement.


    2. Yes. That is something that impressed me about it early on – many of the things that pose problems for a dark matter interpretation are naturally explained by MOND. I worked really hard to sustain the dark matter interpretation. I didn’t take MOND seriously until I’d exhausted uncounted options. Once I let myself give MOND a chance, I found myself working a lot less hard.


  2. I’m not qualified enough to say if this is indeed the solution, but there is a paper by Lopez-Corredoira et al. (https://arxiv.org/abs/2210.13961) where they consider more realistic boundary conditions on clusters together with a modified virial theorem, which seems to solve the missing baryon problem in clusters for MOND.


      1. Assume that SUSY occurs in nature. Assume that gravitons & gravitinos have empirically significant MOND-charges. Assume that all other fundamental particles have empirically insignificant MOND-charges. In the standard form of Einstein’s field equations, replace the –1/2 by –1/2 + MONDian-gravitino-data-function. Study the data-function empirically. Do you disagree? Do you understand why this approach might appeal to string theorists?


  3. Every description of gravity I’ve ever seen points out that the force is considered to originate from the center of mass of each body. But given the vast spatial volume covered by a galaxy (or anything else in space), could this simplification be wrong? What if the mass of the galaxy (for example) were so spread out that you could not assume that gravity is simply originating from the black hole at the center of the collection of stars? Are not stars further out on the rim of a galaxy exerting gravity on all the “inner” stars and planets? I know I’ve seen an article somewhere that justified the “center of mass” assumption for a planet. But that was assuming the other body was external to the “planet”.


    1. It all begins with Newton, who showed that for a spherical planet (either of uniform density, or whose density depends only on the distance from the centre), its attraction of an external object is the same as if all its mass were concentrated at its centre. However, the Earth is not exactly spherical, and that led a whole series of mathematicians through to Laplace to create more complex formulae to deal with the non-sphericity. If you are interested, a 2-volume work on the subject is now available on Project Gutenberg: Isaac Todhunter’s “A history of the mathematical theories of attraction and the figure of the earth” (1873). Now when it comes to galaxies, particularly disk galaxies like the Milky Way, it is more complex, and Stacy would be a much better person than me to explain it.
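Newton’s shell theorem is also easy to verify numerically; here is a quick Monte Carlo sketch in toy units (G = 1, unit shell mass and radius), which is of course only a check of the symmetric case, not of the galactic one:

```python
import numpy as np

rng = np.random.default_rng(42)

# Monte Carlo check of Newton's shell theorem: the gravity of a uniform
# spherical shell at an exterior point equals that of a point mass at the
# centre. Units: G = 1, total shell mass = 1, shell radius = 1.
N = 200_000
pts = rng.normal(size=(N, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)  # uniform points on the shell

x = np.array([3.0, 0.0, 0.0])  # exterior test point at r = 3
d = pts - x
g = (d / np.linalg.norm(d, axis=1, keepdims=True) ** 3).sum(axis=0) / N

print(np.linalg.norm(g), 1.0 / 3.0**2)  # both ~ 0.111
```

For a flattened disk the same Monte Carlo would give a field that depends on direction as well as distance, which is exactly why the point-mass shortcut fails for galaxies.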


      1. Galaxies are indeed very spread out – so spread out that their mean surface density is roughly that of a piece of paper. This is quite beyond our solar system experience, but in the spirit of Newtonian physics we know how to deal with it, just as we do for departures from sphericity of the earth. The difference is, it doesn’t suffice for galaxies – we need something extra, be it dark matter or MOND or [idea yet to be developed].
        In this context, it is interesting to contemplate the origin of inertial mass. We largely take this for granted, but Newton and Mach and Einstein all struggled with this, starting with Newton’s thought experiment about the surface of equilibrium of a spinning bucket of water: equilibrium relative to what? Mach argued (in an oversimplified nutshell) that the only answer that made sense was relative to everything else in the universe. Einstein is reputed to have wanted to include Mach’s Principle in GR but couldn’t figure out how and gave up. Maybe galaxies are the rotating buckets of the universe and the discrepancy that we witness is the effect of their motion relative to everything else.
        Stick that in your bong and smoke it.


        1. > Stick that in your bong and smoke it.

          Heh, I think you already are. 🙂

          Inertial (invariant) mass arises simply as a Casimir of the Poincare group of inertial transformations, which in turn is the full dynamical symmetry group of inertial (i.e., unaccelerated) motion. The types of elementary fields correspond to unitary irreducible representations of the Poincare group. So the puzzle about inertial mass boils down to “why is the principle of relativity valid”. I.e., why do distinct inertial observers perceive the same laws of physics?

          IOW, the PoR implies inertial invariant mass. [To get this correct more generally, one must construct tensor products of the irreps and then graduate to so-called “interacting representations of the Poincare group” — Cf. Weinberg QFT vol 1.]

          The fact of inertial mass corresponding to a Casimir operator is (for physics) a *local* thing (since Lorentz symmetry is local) — there’s no need for a “rest of the universe” interaction.

          As for gravitational mass, what really matters in gravity is energy-stress-momentum, not invariant inertial mass. (This is proved by the fact that photons have zero invariant mass, but nevertheless have energy-momentum, hence are a source of gravitation.)

          The “equilibrium” in your spinning bucket of water is just an equilibrium between conserved angular momentum, centripetal acceleration of particles in the fluid, gravitational force, and the bucket walls, all complicated by the nonlinearity of the Navier-Stokes equation.

          Kind regards. 🙂


            1. Mach’s principle is mentioned in many of the science popularization books that I’ve read. At first this principle seemed very appealing and intriguing. But then one would think that for it to be true, the influence of distant, and even nearby, matter would need to be instantaneous. So that seemingly rules out the plausibility of this principle. However, there’s the phenomenon of quantum non-locality, where measurements of the properties of one particle are instantly correlated with the properties of another particle with which it was entangled, regardless of distance. So, maybe, somehow, Mach’s principle can be salvaged, even though the majority of modern physicists dismiss it, as summarized in the conclusion of this paper by Joseph Moonan Walbran.
            https://digitalcommons.morris.umn.edu/cgi/viewcontent.cgi?article=1115&context=horizons


            1. David,

              > […] However, there’s the phenomena of quantum non-locality, where measurement of the properties of one particle are instantly correlated with the properties of another particle with which it was entangled, regardless of distance. So, maybe, somehow, Mach’s principle can be salvaged, […] <

              Correlation is not causation. Cf. https://en.wikipedia.org/wiki/Correlation_does_not_imply_causation

              IOW, distant objects cannot "cause" local inertia, regardless of whether we perceive a correlation between them.


              1. I think you are overstating the case. The question of whether or not distant objects can “cause” local inertia is heavily dependent on one’s theory of quantum gravity. Any equation that relates gravity to inertia also relates inertia to gravity, and such an equation can be interpreted either in the conventional way (inertia causes gravity) or in the opposite way (gravity causes inertia). In the absence of a convincing quantum theory of gravity, the choice of interpretation is largely philosophical, rather than scientific.


              2. Mike,

                I agree with what the Wikipedia article says from a general philosophical point of view that correlation does not imply causation, like with the supposed link between low vitamin D levels and multiple sclerosis that they cite. But, unfortunately, the Wiki article does not specifically address quantum non-locality, although they have a link to their Quantum Mechanics page. In 1964 the physicist John Bell put forth his “Bell Inequality” theorem. Basically it said that if two entangled quantum particles show a correlation of particular properties (like polarization in photons) above a certain percentage in multiple, repeated experiments, then it proves that non-locality is real as predicted by the developers of Quantum Mechanics.

                In 1982 Alain Aspect pioneered the first test of Bell’s Inequality, which proved that, indeed, nature, at the quantum level, has instantaneous (though random) correlations for separated quantum entities. At later times John Clauser and Anton Zeilinger also carried out quantum entanglement experiments, also confirming the results of Alain Aspect. Zeilinger’s experiment was conducted in the Canary Islands over a distance of 143 kilometers. The three of them received the 2022 Nobel Prize for their pioneering work.

                As far as distant objects affecting the inertia of local objects, that was an off-the-cuff speculation, based on the fact that quantized angular momentum is instantly correlated in the entanglement experiments. But linear momentum can have continuous values as far as anyone knows, and no experiment has demonstrated, as far as I know, that it is made up of tiny quantized increments of, let’s say, either mass or velocity that might make momentum subject to quantum non-locality. In fact, I recently read that the position variable of a particle is continuous, though I don’t remember exactly where I read that. But I remember reading about ideas that spacetime itself might be broken up into little cells. There are so many ideas out there that it’s hard to keep track, and I’m coming from a general knowledge of physics and memory of things that I read.


    2. Herb,

      > Every description of gravity I’ve ever seen points out that the force is considered to originate from the center of mass of each body. But given the vast spatial volume covered by a galaxy (or anything else in space) could this simplification be wrong?<

      Newtonian gravity is sourced by the mass *density* at every point. The shell theorem just shows that if the density is spherically symmetric, you can think of it all being concentrated at a central point.

      But this is very far from being true more generally, such as in real galaxies. Take a look at Jo Bovy's excellent online (even interactive!) textbook on gravity at https://galaxiesbook.org . The gravitational field even for over-simplified galactic models (such as the "razor-thin disk") is a long way from the spherically symmetric case.

      Also, note that serious galactic gravity research often involves solving the Newtonian Poisson equation (or a MONDian alternative in our case), using a mass density model built from independent observations. (How do you think all the rotation curves underpinned by data in Stacy's SPARC database were obtained? 🙂)
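Once the Newtonian acceleration of the baryons is in hand, one can map it to the observed acceleration with the radial acceleration relation fit published by McGaugh, Lelli & Schombert (2016). A minimal sketch of that one-parameter mapping:

```python
import numpy as np

a0 = 1.2e-10  # m s^-2, the fitted acceleration scale

def g_obs(g_newton):
    """Radial acceleration relation (McGaugh, Lelli & Schombert 2016):
    observed acceleration as a function of the Newtonian acceleration
    computed from the baryons alone."""
    return g_newton / (1.0 - np.exp(-np.sqrt(g_newton / a0)))

print(g_obs(1e-8))   # high acceleration: essentially Newtonian
print(g_obs(1e-12))  # low acceleration: ~ sqrt(g_N * a0), the deep-MOND limit
```

The two limits are the whole story: accelerations well above a0 are Newtonian, while those well below it follow the deep-MOND scaling.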

      HTH.


  4. Reading the section (4.1) titled “Coincidences” in the paper jointly authored by Benoit Famaey and Stacy McGaugh, “Modified Newtonian Dynamics (MOND): Observational Phenomenology and Relativistic Extensions”, was very inspirational to me, since I have a home-brewed model that might account for such coincidences. With math being my weak suit, it took me a while just to confirm, to my own satisfaction, the coincidence in magnitude between a0 and the cosmological constant Lambda, as well as the Hubble constant. In this wildly speculative, and admittedly amateur model, the observed effects of dark energy and dark matter have a common origin, without the need for dark matter as conventionally applied. This is assumed to underlie the numerical connections between a0 and those cosmological parameters. But the model is predicated on the existence of gravitons. If these turn out not to exist then the model is invalidated.
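The numerical coincidence in question is a one-liner to check, taking H0 = 70 km/s/Mpc as a representative value:

```python
import math

# Checking the numerical coincidence a0 ~ c * H0 / (2*pi)
a0 = 1.2e-10          # m s^-2, Milgrom's constant
c = 2.998e8           # m/s
H0 = 70e3 / 3.086e22  # 70 km/s/Mpc converted to 1/s

print(c * H0 / (2 * math.pi))  # ~ 1.1e-10, within ~10% of a0
```

Whether the 2π is meaningful or numerology is, of course, exactly the open question.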


      1. Thanks Maarten! The model was originally only intended to address quantum realm phenomena. It requires just one change in the Standard Model. Only much later did I appreciate that it might provide an explanatory framework for the dark sector in astronomy. But it’s largely non-mathematical, so not a rigorous model like all the truly professional papers. So it’s definitely in the amateur league, as scientific papers go. Of the eleven sections the only two that need completion are:

        10) Cosmological Consequences – Mimicking Dark Matter/Dark Energy
        11) Speculative Technological Implications and UAPs

        Needless to say the concept underlying the model is far out on a speculative limb, and might end up being useful only as a sci-fi plot device. But I’ll post it at viXra and provide a link when those two sections are done.


        1. Well, it doesn’t have to be a problem when little math is involved. Sometimes a simple but well-formulated thought can turn out to be more powerful than pages of complicated calculations trying to get the perfect answer.

          Today I was working on a programming problem, banging my head against getting a piece of code to perfection. After 3 hours of hard work I decided to test the much simpler thought that the issue arose from another piece of code. That turned out to be the right approach. I think this also happens in science sometimes.

    1. On a rapid reading, I get the impression that the paper confirms that it is hard to make thin disks in LCDM. So I don’t know exactly what you mean by an answer.

      1. Hi, I found it funny that A/B made a post on galaxy evolution just after it was mentioned here. And I forgot to add that I consider myself a reporter rather than an arguer here – my fault. BTW, the article I presented has a follow-up in which they deal with large galaxies with discs: https://arxiv.org/abs/2201.08855

  5. Pretty sure this conundrum won’t be resolved through scientific means, at least not in the short to medium term. Just read this: https://aeon.co/essays/why-did-darwins-20th-century-followers-get-evolution-so-wrong where a similar situation is afoot. Tons and tons of solid, empirical data hitting a mighty dam of traditionalism. I mean yes, eventually any obstacle will erode, given enough time.
    Why is this happening? One would have thought that in a field of human activity such as science, progress would be the champion. But we are all humans (more precisely, living beings) first and professionals second. All living beings are traditionalists simply because it is energetically more favorable. Thinking anew involves growing new pathways, not to mention the increased risk of implementing those novelties, whereas following established trails is just memory recall and, to an extent, somewhat automatic action.
    Sure, some advances were the result of curiosity, a strong nonconformist streak, or pure rebellious behavior, but most were down to necessity: environmental pressure. And when it comes to something as abstract as new scientific ideas, that pressure is flat out absent. There are lives and livelihoods and statuses and egos standing firm, shields up, spears pointed.
    Banzai!!

      1. Wow. Great that someone is finally acknowledging this big and clear problem in evolution theory! I’d say ditch it completely (except for micro-evolution), but of course the view prevails that the assumption of God’s existence is not scientific. While all of science, in my view, points to an almighty entity that orders all things, I think leaving God out of science on purpose has had its time, and the reasons for it (which all boil down to trying to build a godless model of the universe so we can think we truly can ignore Him) are outdated – but who am I? I think we can only talk about the same reality that science is about if we assume such a reality exists, and every effort to put chance in God’s stead turns out to be a nonrealist view.

      1. Maarten, read anything by Stephen Jay Gould to counter Dawkins’ over-certainty about evolution. There is good evidence for multi-level selection in nature and it has even been used to explain the origins of religion in David Sloan Wilson’s “Darwin’s Cathedral: Evolution, Religion, and the Nature of Society”. Robert Asher’s “Evolution and Belief” demonstrates that atheism is not the necessary consequence of belief in evolutionary theory.

        This, however, is getting well away from the topic of this blog and I don’t want to encourage further deviation.

          1. I know evolution believers who are Christian; that’s not the problem. But if the new biology is clearly proving evolution false, then its theory is not about reality. Thus it arrives in the same state as the string theory landscape and the many-worlds interpretation. It’s no issue at all for me if scientists say they don’t believe in God, but if they can’t claim their science is about reality, that is a big issue. That applies again to the topic of this blog: dark matter.

            1. I mean that they ought to try to do the kind of science that applies to our shared, ubiquitous reality. If they don’t care – postulating a great many alternative realities (that don’t affect our reality), or otherwise simply not adapting their theory to clear new phenomena established by the real data – that’s the problem, IMO. And there’s the correspondence between that article on evolution theory and this blog on preferring MOND over dark matter: both fail to adapt to new data.

          2. I think you have completely misunderstood the Aeon article. The new biology doesn’t prove evolution false; it proves a particular version of the theory of evolution false (or at least incomplete – hence my multi-level selection point). Evolution can still occur by single-base changes; it’s just not the only way that evolution occurs. To use an astronomical analogy it’s like Kepler disproving Aristotle’s theory that all heavenly bodies move in perfect circles by showing that Mars’ orbit was an ellipse. Before the data (from Tycho) was good enough for Kepler’s proof, which was confirmed by the improved accuracy of his planetary tables, it wasn’t possible to prove Aristotle wrong – even Copernicus used circular orbits in his heliocentric theory (and Tycho used circular orbits in his geo-heliocentric theory).

            It is only since the beginning of the 21st century and the Human Genome Project that we have realised there are far fewer human genes that build proteins than we expected. Before then humans were thought to have about 100,000 protein-building genes; now we know it is around 20,000. With the benefit of hindsight we can look back and find the examples cited in the Aeon article; at the time it was a great deal harder to fit these results into a coherent structure. A coherent structure is essential in science, otherwise – as Rutherford said – it is just stamp-collecting.

            1. I understand your viewpoint. It was just a side remark based on many popular-science articles and bits of knowledge, not only the Aeon article. I feel it’s better to drop the subject now, if you agree.
