This post is adapted from a web page I wrote in 2008, before starting this blog. It covers ground that is now, I suppose, historic: what was known about WIMPs from their beginnings in the 1980s, and the experimental searches for them. In part, I was just trying to keep track of experimental limits, with updates added as noted since the first writing. What motivates me to repost it now is some troll on Twitter trying to gaslight people into believing there were no predictions for WIMPs prior to the discovery of the Higgs boson. Contrary to this assertion, the field had already gone through many generations of predictions, with the theorists moving the goal posts every time a prediction was excluded. I have colleagues involved in WIMP searches who have left that field in disgust at having the goal posts moved on them: what good are the experimental searches if, every time they reach the promised land, they’re simply told the promised land is over the next horizon? You experimentalists just keep your noses to the grindstone, and don’t bother the Big Brains with any inconvenient questions!

We were already very far down this path in 2008 – so far down it that I called it the express elevator to hell, since the predicted interaction cross-section kept decreasing to evade experimental limits. Since that time, theorists have added sideways moves in mass to their evasion tactics, with some advocating for “light” dark matter (lighter than the 2 GeV Lee-Weinberg lower limit on the WIMP mass) while others advocate for undetectably high mass WIMPzillas (because there’s a lot of unexplored if unexpected parameter space at high mass to roam around in before hitting the unitarity bound. Theorists love to go free range.)

These evasion tactics had become ridiculous well before the Higgs was discovered in 2012. Many people don’t seem to have memories that long, so let’s review. Text in normal font was written in 2008; later additions are italicized.

Seeking WIMPs in all the wrong places

This article has been updated many times since it was first written in 2008, at which time we were already many years down the path it describes.

The Need for Dark Matter
Extragalactic systems like spiral galaxies and clusters of galaxies exhibit mass discrepancies. The application of Newton’s Law of Gravity to the observed stars and gas fails to explain the rapid observed motions. This leads to the inference that some form of invisible mass – dark matter – dominates the dynamics of the universe.

WIMPs
If asked what the dark matter is, most scientists working in the field will respond honestly that we have no idea. There are many possible candidates. Some, like MACHOs (Massive Compact Halo Objects, perhaps brown dwarfs) have essentially been ruled out. However, in our heart of hearts there is a huge odds-on favorite: the WIMP.

WIMP stands for Weakly Interacting Massive Particle. This is an entire class of hypothetical new fundamental particles that emerge from supersymmetry. Supersymmetry (SUSY) is a theoretical notion by which the known elementary particles have supersymmetric partner particles. This notion is not part of the highly successful Standard Model of particle physics, but could extend it, provided that the Higgs boson exists. In the so-called Minimal Supersymmetric Standard Model (MSSM), which was hypothesized to explain the hierarchy problem (i.e., why the elementary particles have the masses that they do), the lightest stable supersymmetric particle is the neutralino. This is the WIMP that presumably makes up the dark matter.

2020 update: the Higgs does indeed exist. Unfortunately, it is too normal. That is, it fits perfectly well with the Standard Model without any need for SUSY. Indeed, it is so normal that MSSM is pretty much excluded. One can persist with more complicated theories (as always) but to date SUSY has flunked every experimental test, including the “golden test” of the decay of the Bs meson. Never heard of the golden test? The theorists were all about it until SUSY flunked it; now they never seem to mention it.

Cosmology, meet particle physics
There is a confluence of history in the development of previously distinct fields. The need for cosmological dark matter became clear in the 1980s, the same time that MSSM was hypothesized to solve the hierarchy problem in particle physics. Moreover, it was quickly realized that the cosmological dark matter could not be normal (“baryonic”) matter. New fundamental particles therefore seemed a natural fit.

The cosmic craving for CDM
There are two cosmological reasons why we need non-baryonic cold dark matter (CDM):

  1. The measured density of gravitating mass appears to considerably exceed that in normal matter as constrained by Big Bang Nucleosynthesis (BBN): Ωm = 6 Ωb (so Ωnon-baryonic = 5 Ωbaryonic).
  2. Gravity is too weak to grow the presently observed structures (e.g., galaxies, clusters, filaments) from the smooth initial condition observed in the cosmic microwave background (CMB) unless something speeds up the process. Extra mass will do this, but it must not interact with the photons of the CMB the way ordinary matter does.

By themselves, either of these arguments is strong. Together, they were compelling enough to launch the CDM paradigm. (Like most scientists of my generation, I certainly bought into it.)

From the astronomical perspective, all that is required is that the dark matter be non-baryonic and dynamically cold. Non-baryonic so that it does not participate in Big Bang Nucleosynthesis or interact with photons (a good way to remain invisible!), and dynamically cold (i.e., slow moving, not relativistic) so that it can clump and form gravitationally bound structures. Many things might satisfy these astronomical requirements. For example, supermassive black holes fit the bill, though they would somehow have to form in the first second of creation in order not to impact BBN.

The WIMP Miracle
From a particle physics perspective, the early universe was a high energy place where energy and mass could switch from one form to the other freely, as enshrined in Einstein’s E = mc². Pairs of particles and their antiparticles could come and go. However, as the universe expands, it cools. As it cools, it loses the energy necessary to create particle pairs. When this happens for a particular particle depends on the mass of the particle – the more mass, the more energy is required, and the earlier that particle-antiparticle pair “freezes out.” After freeze-out, the remaining particle-antiparticle pairs can mutually annihilate, leaving only energy. To avoid this fate, there must either be some asymmetry (apparently there was about one extra proton for every billion proton-antiproton pairs – an asymmetry on which our existence depends even if we don’t yet understand it) or the “cross section” – the probability for interacting – must be so low that particles and their antiparticles go their separate ways without meeting often enough to annihilate completely. This process leaves some relic density that depends on the properties of the particles.

If one asks what relic density is necessary to make up the cosmic dark matter, the cross section that comes out is about that of the weak nuclear force. A particle that interacts through the weak force but not the electromagnetic force will have about the right relic density. Moreover, it won’t interfere with BBN or the CMB. The WIMPs hypothesized by supersymmetry fit the bill for cosmologists’ CDM. This coincidence of scales – the relic density and the weak force interaction scale – is sometimes referred to as the “WIMP miracle” and was part of the motivation to adopt the WIMP as the leading candidate for cosmological dark matter.
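The “miracle” arithmetic is easy to reproduce. A minimal sketch follows, using the standard textbook freeze-out scaling; the 3×10⁻²⁷ cm³ s⁻¹ coefficient, the thermal cross section value, and the Hubble parameter are round illustrative numbers assumed here, not taken from the post:

```python
# Rough statement of the "WIMP miracle": the standard freeze-out
# estimate gives a relic abundance inversely proportional to the
# annihilation cross section,
#   Omega_chi * h^2 ~ (3e-27 cm^3/s) / <sigma v>.
# Plugging in a weak-scale thermal cross section lands near the
# observed dark matter density. All numbers are textbook round values.

sigma_v = 3e-26              # cm^3/s, canonical thermal (weak-scale) cross section
omega_h2 = 3e-27 / sigma_v   # relic density parameter Omega * h^2

h = 0.7                      # dimensionless Hubble parameter
omega = omega_h2 / h**2      # Omega_chi itself

print(f"Omega h^2 ~ {omega_h2:.2f}, Omega ~ {omega:.2f}")
```

That a cross section motivated purely by particle physics lands within a factor of a few of the observed Ω_dm ≈ 0.26 is the coincidence the text describes.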

WIMP detection experiments
WIMPs as CDM is a well posed scientific hypothesis subject to experimental verification. From astronomical measurements, we know how much we need in the solar neighborhood – about 0.3 GeV c⁻² cm⁻³. (That means there are a few hundred WIMPs passing through your body at any given moment, depending on the exact mass of the particle.) From particle physics, we know the weak interaction cross section, so we can calculate the probability of a WIMP interacting with normal matter. In this respect, WIMPs are very much like neutrinos – they can pass right through solid matter because they do not experience the electromagnetic interactions that make ordinary matter solid. But once in a very rare while, one may come close enough to an atomic nucleus to interact with it via the weak force. This is the signature that can be sought experimentally.
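The “few hundred WIMPs in your body” figure follows from simple division. A back-of-envelope sketch: only the 0.3 GeV cm⁻³ local density comes from the text; the 100 GeV mass and ~70 liter body volume are assumed round numbers for illustration.

```python
# Back-of-envelope check of the "few hundred WIMPs in your body" claim.
# Only the local density 0.3 GeV/cm^3 comes from the text; the WIMP
# mass and body volume are assumed round numbers.

rho = 0.3        # GeV per cm^3, local dark matter density
m_wimp = 100.0   # GeV, assumed WIMP mass
volume = 7.0e4   # cm^3, roughly a 70-liter human body

n = rho / m_wimp     # WIMP number density, per cm^3
count = n * volume   # WIMPs inside the body at any instant

print(f"~{count:.0f} WIMPs in your body right now")
```

For a lighter WIMP the count scales up inversely with the mass, which is why the text hedges with “depending on the exact mass of the particle.”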

There is a Nobel Prize waiting for whoever discovers the dark matter, so there are now many experiments seeking to do so. Generically, these use very pure samples of some element (like Germanium or Argon or Xenon) to act as targets for the WIMPs making up the dark matter component of our Milky Way Galaxy. The sensitivity required is phenomenal, and many mundane background events (cosmic rays, natural radioactivity, clumsy colleagues dropping beer cans) that might mimic WIMPs must be screened out. For this reason, there is a strong desire to perform these experiments in deep mine shafts where the apparatus can be shielded from the cosmic rays that bombard our planet and other practical nuisances.

The technology development involved in the hunt for WIMPs is amazing. The experimentalists have accomplished phenomenal things in the hunt for dark matter. That they have so far failed to detect it should give pause to any thinking person acquainted with the history, techniques, and successes of particle physics. This failure is both a surprise and disappointment to those who understand modern cosmology. It should not come as a surprise to anyone familiar with the dynamical evidence for – and against – dark matter.

Searches for WIMPs are proceeding apace. The sensitivity of these experiments is increasing at an accelerating rate. They already provide important constraints – see the figure:


Searching for WIMPs

This 2008 graph shows the status of searches for Weakly Interacting Massive Particles (WIMPs). The abscissa is the mass of the putative WIMP particle. For reference, the proton has a mass of about one in these units. The ordinate is a measure of the probability for WIMPs to interact with normal matter. Not much! The shaded regions represent theoretical expectations for WIMPs. The light red region is the original (Ellis et al.) forecast. The blue and green regions are more recent predictions (Trotta et al. 2008). The lines are representative experimental limits. The region above each line is excluded – if WIMPs had existed in that range of mass and interaction probability, they would have been detected already. The top line (from CDMS in 2004) excluded much of the original prediction. More recent work (colored lines, circa 2008) approaches the currently expected region.

April 2011 update: XENON100 sees nada. Note how the “expected” region continues to retreat downward in cross section as experiments exclude the previous sweet spots in this parameter. This is the express elevator to hell (see below).

September 2011 update: CRESST-II claims a detection. Unfortunately, their positive result violates limits imposed by several other experiments, including XENON100. Somebody is doing their false event rejection wrong.

July 2012 update: XENON100 still seeing squat. Note that the “head” of the most probable (blue) region in the figure above is now excluded.
It is interesting to compare the time sequence of their results: first | run 8 | run 10.

November 2013 update: LUX sees nothing and excludes the various claims for detections of light dark matter (see inset). This exclusion of light dark matter appears to be highly significant, as the very recent prediction was for about a dozen detections per month, which should have added up to an easy detection rather than the observed absence of events in excess of the expected background. Note also that the new exclusion boundary cuts deeply into the region predicted for traditional heavy (~ 100 GeV) WIMPs by Buchmueller et al. as depicted by XENON100. The Buchmueller et al. “prediction” is already a downscaling from the bulk of probability predicted by Trotta et al. (2008 – the blue region in the figure above). This perpetual adjustment of the expectation for the WIMP cross-section is precisely the dodgy moving of the goal posts that prompted me to first write this web page years ago.

May 2014: “Crunch time” for dark matter comes and goes.

July 2016 update: PandaX sees nada.

August 2016 update: LUX continues to see nada. The minimum of their exclusion line now reaches the bottom axis of the 2009 plot (above the line, with the now-excluded blue blob). The “predicted” WIMP (gray area in the plot within this section) appears to have migrated to higher mass in addition to the downward migration of the cross-section. I guess this is the sideways turbolift to evil-Kirk universe hell.


Indeed, the experiments have perhaps been too successful. The original region of cross section-mass parameter space in which WIMPs were expected to reside was excluded years ago. Not easily dissuaded, theorists waved their hands, invoked the Majorana see-saw mechanism, and reduced the interaction probability to safely non-detectable levels. This is the vertical separation of the reddish and blue-green regions in the figure.

To quote a particle physicist, “The most appealing possibility – a weak scale dark matter particle interacting with matter via Z-boson exchange – leads to the cross section of order 10⁻³⁹ cm² which was excluded back in the ’80s by the first round of dark matter experiments. There exists another natural possibility for WIMP dark matter: a particle interacting via Higgs boson exchange. This would lead to the cross section in the 10⁻⁴²–10⁻⁴⁶ cm² ballpark (depending on the Higgs mass and on the coupling of dark matter to the Higgs).”

From this 2011 Resonaances post

Though set back and discouraged by this theoretical sleight of hand (the WIMP “miracle” is now more of a vague coincidence, like seeing an old flame in Grand Central Station but failing to say anything because (a) s/he is way over on another platform and (b) on reflection, you’re not really sure it was him or her after all), experimentalists have been gaining ground on the newly predicted region. If all goes as planned, most of the plausible parameter space will have been explored in a few more years. (I have heard it asserted that “we’ll know what the dark matter is in 5 years” every 5 years for the past two decades. Make that three decades now.)

The express elevator to hell

We’re on an express elevator to hell – going down!

There is a slight problem with the current predictions for WIMPs. While there is a clear focus point where WIMPs most probably reside (the blue blob in the figure), there is also a long tail to low interaction cross section. If we fail to detect WIMPs when experimental sensitivity encompasses the blob, the presumption will be that we’re just unlucky and WIMPs happen to live in the low-probability tail that is not yet excluded. (Low probability regions tend to seem more reasonable as higher probability regions are rejected and we forget about them.) This is the express elevator to hell. No matter how much time, money, and effort we invest in further experimentation, the answer will always be right around the corner. This process can go on forever.

Is dark matter a falsifiable hypothesis?

The existence of dark matter is an inference, not an experimental fact. Individual candidates for the dark matter can be tested and falsified. For example, it was once reasonable to imagine that innumerable brown dwarfs could be the dark matter. That is no longer true – were there that many brown dwarfs out there, we would have seen them directly by now. The brown dwarf hypothesis has been falsified. WIMPs are falsifiable dark matter candidates – provided we don’t continually revise their interaction probability. If we keep doing this, the hypothesis ceases to have predictive power and is no longer subject to falsification.

The concept of dark matter is not falsifiable. If we exclude one candidate, we are free to make up another one. After WIMPs, the next obvious candidate is axions. Should those be falsified, we invent something else. (Particle physicists love to do this. The literature is littered with half-baked dark matter candidates invented for dubious reasons, often to explain phenomena with obvious astrophysical causes. The ludicrous uproar over the ATIC and PAMELA cosmic ray experiments is a good example.) (Circa 2008, there was a lot of enthusiasm that certain signals detected by cosmic ray experiments were caused by dark matter. These have gone away.)


September 2011 update: Fermi confirms the PAMELA positron excess. Too well for it to be dark matter: there is no upper threshold energy corresponding to the WIMP mass. Apparently these cosmic rays are astrophysical in origin, which comes as no surprise to high energy astrophysicists.

April 2013 update: AMS makes claims to detect dark matter that are so laughably absurd they do not warrant commentary.

September 2016 update: There is no update. People seem to have given up on claiming that there is any sign of dark matter in cosmic rays. There have been claims of dark matter causing signatures in gamma ray data and separately in X-ray data. These never looked credible and went away on a time scale so short that, on one occasion, an entire session of a 2014 conference had been planned to discuss a gamma ray signal at 126 GeV as dark matter. I asked the organizers a few months in advance if that was even going to be a thing by the time we met. It wasn’t: every speaker scheduled for that session gave some completely unrelated talk.

November 2019 update: Xenon1T sees no sign of WIMPs. (There is some hint of an excess of electron recoils. These are completely the wrong energy scale to be the signal that this experiment was designed to detect.)

WIMP prediction and limits. The shaded region marks the prediction of Trotta et al. (2008) for the WIMP mass and interaction cross-section. The lighter shade depicts the 95% confidence limit, the dark region the 68% c.l., and the cross the best fit. The heavy line shows the 90% c.l. exclusion limit from the Xenon1T experiment. Everything above the line is excluded, ruling out virtually all the parameter space in which WIMPs had been predicted to reside.

2020 comment: I was present at a meeting in 2009 when the predictions of Trotta et al. (above, in grey, and higher up, in blue and green) were new and fresh. I was, at that point, already feeling like we’d been led down this garden path one too many times. So I explicitly asked about the long tail to low cross-section. I was assured that the probability in that tail was < 2%; we would surely detect the WIMP somewhere around the favored value (the X in the gray figure). We did not. Essentially all of that predicted parameter space has been excluded, with only a tiny fraction of the 2% tail extending below current limits. Worse, the top border of the Trotta et al. prediction was based on the knowledge that the parameter space to higher cross section – where the WIMP was originally predicted to reside – had already been experimentally excluded. So the grey region understates the range of parameter space over which WIMPs were reasonably expected to exist. I’m sure there are people who would like to pretend that the right “prediction” for the WIMP is at still lower cross section. That would be an example of how those who are ignorant (or in denial) of history are doomed to repeat it.

I predict that none of the big, expensive WIMP experiments will ever find what they’re looking for. It is past time to admit that the lack of detections is because WIMPs don’t exist. I could be proven wrong by the simple expedient of obtaining a credible WIMP detection. I’m sure there are many bright, ambitious scientists who will take up that challenge. To them I say: after you’ve spent your career at the bottom of a mine shaft with no result to show for it, look up at the sky and remember that I tried to warn you.


74 thoughts on “A lengthy personal experience with experimental searches for WIMPs”

  1. As a freshly minted PhD from a large WIMP search experiment (you mentioned us in the post) maybe I can share my sadly anonymous thoughts about the current state of affairs:

    1) No junior experimentalist actually believes any of the theorists anymore. I work in this field and I genuinely can’t tell you the last pheno paper we took seriously. The feeling is these large WIMP experiments should run ~1 more generation until we hit the neutrino floor, and if there’s no weirdness with the neutrino rates then pack it up and go home. It’s the junior-ish faculty who need to make their careers who are pushing the strongest for continuing non-stop.

    2) Most of us hoping to continue in academia are trying to lateral to something else like neutrinos, CMB, or even QIS type stuff. Everybody is steering clear of accelerator particle physics but other than that it seems hard to judge where our careers might work out.

    3) There is some appetite for axions and LDM, but tbh it just feels like it’s being driven by the same hype and constantly changing targets (is it just me or does ksvz/dfsz seem realllllly stupidly contrived too). Also for the ldm stuff, there isn’t even much independent motivation other than “I guess we can try looking there”. I do think we won’t see large WIMP style collaborations again though, it’s probably going to be smallish groups hunting specific things and maybe that’s healthier.

    4) I don’t think anyone in my cohort is wedded to a specific paradigm or even a particle DM explanation… I think we’re all pretty open to modified gravity too (!). It’s the cosmology theory and computational people who are strongly opinionated but we sure as shit aren’t going to stand up in colloquiums and argue with them. Anything that’s not observational astro. or terrestrial particle experiment it’s easier just to nod and then ignore them.

    I don’t know what the best future looks like but at least at the junior levels in DM search I think it’s slightly healthier than the 30k foot view, mainly because us younger folk realize we don’t want to end up like our advisors and spend decades with nothing to show (i guess tenure would be nice though…)


    1. Regarding your last paragraph – I believe it comes down to when somebody started working in the field. If someone started right at the beginning, then 10 years in the field is not a problem – you can still hope for a result in the next 20 years. But after 20 years in the field and no result…going to 30 years is just inertia and fear of being stuck in dead-end research.
      If someone started after the first 10 years, everybody that is already in the field is telling her/him that in 10 years for sure there will be results and that this is the best moment to join. After this initial decade, the next 10 years are the doubt years (the inertia years of the previous generation) when it becomes more and more difficult to see a path forward but the time already invested (and the career) make the decision to switch to another field rather difficult.
      Starting after the 20 years that already passed – well, most of the middle positions (those that started 10 years ago) have doubts, the higher positions (started 20 years ago) are grimy / grumpy / opaque, so yes, it should be obvious that if something doesn’t happen NOW, the field has no more future and he/she should be looking for side fields.


    2. Many thanks for your perspective. I’m sorry for the negative aspects of the experience you describe, but I can’t say I’m surprised. There is an inevitable selection effect, as the people who think the current situation is bad will change fields, while those who choose to remain will be those most committed to the status quo. I myself left physics for astronomy when the string theorists started to take over in the mid-80s. They talked a great game, but all their predictions were beyond any conceivable experimental test. That, to me, is not physics.
      I wish you the best of luck in sorting through these issues.


    3. Thanks for sharing that. A newly minted PhD is the perfect point to switch paths.
      (It’s a sad fact that you need to ‘follow the money’ as an experimentalist.)


  2. What if gravity is not so much a property of mass, as mass is an effect of a broader curvature that goes all the way out to particles emerging from fields?
    Given my recent point, about lightspeed as the implicit metric of redshift, was repeated a few too many times, I’ll leave this at one comment.


  3. Search for ‘suntola dynamic universe’ literature since mid 1990’s or my posts in various physics sites like this. DU expanded relativity and quantum theories correcting major mistakes in interpretation of SuperNova, Gravitational Wave, BH, Hubble flow etc observables using DE/DM auxiliary ‘epicycle’ parameters. My field of mathematical surveying sciences in general theory of estimation (toe) since 1970 confirms DU findings. It identifies the epicyclic bias of Ptolemy Earth centered vs unbiased physical Sun centered model 400-500 years ago. Today DE/DM are the biased epicycle corrections to biased GR/QM model, corrected by Suntola’s unbiased DU model.


  4. For example, supermassive black holes fit the bill, though they would somehow have to form in the first second of creation in order not to impact BBN.

    Roger Penrose, the 2020 Physics Nobel Prize winner, has suggested supermassive black holes could come from the universe before the big bang and survive – what do you think of this pre-big-bang cyclical theory?


    1. I think it is extremely unlikely. It is nice to imagine a cyclical universe, but we don’t appear to live in one. Even if we did, it is quite a trick to pass objects through the bottleneck of a genuine big bang. Perhaps it can be arranged, but it seems neither likely nor appealing.


      1. Even if we did, it is quite a trick to pass objects through the bottleneck of a genuine big bang. Perhaps it can be arranged, but it seems neither likely nor appealing.

        what would happen to black holes in a cyclical universe before the big bang?


        1. Well, first, there has to be a “before.” That’s not really a well defined concept – non-quantum GR does not take us all the way back to t=0, just to a tiny fraction of a second beforehand (a Planck time). So time is ill-defined prior to that.
          Even if we entertain the existence of a before-time, how can the camel pass through the eye of this needle? The black holes have presumably to have formed in the before-universe, and somehow survive the transition through our edition of the big bang without merging.
          The more conventional approach to creating primordial black holes is to imagine that they’re made in a very early second order phase transition that squeezes a lot of the available mass into a small volume. This is hard to arrange, there is no particularly good reason for it to happen, and there are limits – too many and they would distort the CMB. So this always seemed something of a silly exercise to me.


          1. The more conventional approach to creating primordial black holes is to imagine that they’re made in a very early second order phase transition that squeezes a lot of the available mass into a small volume. This is hard to arrange, there is no particularly good reason for it to happen, and there are limits – too many and they would distort the CMB.

            we know black holes and neutrinos exist but not cold dark matter

            what about adding both mond and neutrinos in model and simulation for theory building but no cold dark matter ?


  5. What these failed searches for dark matter prove, I think, is that we don’t understand the concept of mass. On an astrophysical scale, all we have is an equation that relates mass to gravitational acceleration, which is usually interpreted as saying that mass causes gravity, but which can equally well be interpreted as saying that gravity causes mass. All we actually know physically is that matter causes gravity. And even that is a bit dubious, without a proper physical mechanism to explain how. It is a huge conceptual leap to go from “matter causes gravity” to “mass causes gravity”. To my mind, what the “dark matter problem” shows is that this conceptual leap, for all its centuries of great success, was ultimately a mistake.


      1. What you are actually saying is that we are not sure if the equivalence principle is right or not. That is, we’re not sure if the inertial mass is equal to the gravitational charge (in all regimes).


      1. You put words into my mouth. The equivalence principle or lack of it is only one aspect of the problem. Neither gravitational mass nor inertial mass has an unambiguous universal definition, so before even asking whether they are the same or not, one has to examine the working definitions of each and see if they (a) make sense, or (b) can be improved. The concept of gravitational mass, for example, only currently makes sense in a Newtonian theory of gravity. If Newtonian gravity is called into question, as it is here, then there is no concept of gravitational mass.


        1. I intentionally avoided the use of gravitational mass and used charge instead because I wanted to make a clear distinction between mass and what is that causes gravity, just like you said.
          Mass is a property of matter. Gravity is generated by the gravitational charge that is attached to matter. In what way is it attached? The equivalence principle says that mass and gravitational charge are equal / the same. MoND and other alternative such theories show that sometimes they may not be.
          So like you, I said that matter has mass and that matter causes gravity. But avoided to say that mass causes gravity.


    2. DU falsified the starting point of constant c using preservation of total mass M in closed 3-surface of 4-sphere and allowing its expansion speed C4 to balance resulting motion energy along R4 direction with its opposing gravitational energy toward barycenter. This also falsified Big Bang, Equivalence Principle, Planck constant and 26 locally restricted postulates that have distorted the physics foundations and cosmology concepts such as Dark Energy/Matter. See Suntola DU literature including lectures posted at fls.fi site of Finland’s Nature Science Society.


  6. Dear All,

    this is an important and much needed historical recount. Something is certainly going wrong today and it is, I think, morally and ethically not really in order any longer to use tax-payers money to search for something which is in actuality already well-ruled out by experiment (we might just as well be searching for angels on a needle):

    A fundamental argument which shows that dark matter does not exist in any form such that it affects galaxies is that of Chandrasekhar dynamical friction: Two galaxies that approach each other such that their dark matter halos begin to overlap lose kinetic energy through the simple sling-shot dynamical effect when each and every dark matter particle (its mass does not matter) of the one galaxy is gravitationally diverted by the other galaxy. This has been calculated ad infinitum and is the reason why galaxies, once they come to within some hundred kpc of each other, merge in the theory, while stars, which fly by each other, do not: stars do not have dark matter halos and so they simply pass each other on Kepler orbits. Galaxies don’t: they move past each other like they are in honey. This process of Chandrasekhar dynamical friction is absent in the observational data. We see many interacting galaxies but few if any true mergers. These observational data robustly exclude the existence of dark matter particles of any type and it is therefore unclear why astronomers keep believing in their existence. For those interested, they can read up on this here: Kroupa (2015: https://ui.adsabs.harvard.edu/abs/2015CaJPh..93..169K/abstract ) (more research at Bonn University is underway: a first result is published here: Oehm & Kroupa 2017: https://ui.adsabs.harvard.edu/abs/2017MNRAS.467..273O/abstract ).

    The implication is of course that gravitation is not Einsteinian/Newtonian on the scale of galaxies, a conclusion which is not surprising given that this gravitational theory was developed by Einstein and Newton using basically the same Solar System data, without data on galaxies. Applying Einstein’s theory to galaxies and beyond therefore constitutes an extrapolation of an empirically based theory by many orders of magnitude. It is the breakdown of this fantastic extrapolation which astronomers and too many physicists call dark matter. It is surprising that this elementary kindergarten issue appears to be largely missed by the community, with celebrated professors (of cosmology and extragalactic astrophysics) massively discouraging research on non-dark-matter-based approaches. Taking the dynamical data from galaxies into account unambiguously leads to scale-invariant (i.e. Milgromian) dynamics, which is broken in the regime where the matter density surpasses a critical limit; that is where Einsteinian/Newtonian gravitation is observed to be an excellent approximation, as in the Solar System or near black holes.
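    To put rough numbers on the friction argument: the orbital-decay time for a satellite sinking by Chandrasekhar dynamical friction is approximately t ≈ 1.17 r² v_c / (G M ln Λ), the standard textbook estimate for a circular orbit (Binney & Tremaine). The Python sketch below uses illustrative values for the orbital radius, circular speed, satellite mass, and Coulomb logarithm; they are assumptions for scale, not numbers taken from the papers cited here:

```python
import math

G = 6.674e-11    # gravitational constant [m^3 kg^-1 s^-2]
MSUN = 1.989e30  # solar mass [kg]
KPC = 3.086e19   # kiloparsec [m]
GYR = 3.156e16   # gigayear [s]

def friction_time_gyr(r_kpc, v_c_kms, m_sat_msun, ln_lambda=5.0):
    """Approximate orbital-decay time for a satellite sinking by
    Chandrasekhar dynamical friction: t ~ 1.17 r^2 v_c / (G M lnL)."""
    r = r_kpc * KPC
    v_c = v_c_kms * 1e3
    m = m_sat_msun * MSUN
    return 1.17 * r**2 * v_c / (G * m * ln_lambda) / GYR

# Illustrative: a 1e11 Msun satellite (galaxy plus assumed DM halo)
# starting 100 kpc out in a host with v_c = 200 km/s.
print(friction_time_gyr(100, 200, 1e11))  # roughly 1 Gyr for these inputs
```

    With inputs like these the decay time comes out around a gigayear, which is why CDM-based models predict prompt mergers once halos overlap.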

    Pavel Kroupa (Bonn and Prague)

    Liked by 5 people

      1. I take it that this is directed at Pavel, and I’m not sure what aspect of the bullet cluster you’re referring to. In the context of dynamical friction, the sub-clusters of the bullet passed through each other with the collisionless components (galaxies and dark matter) experiencing only dynamical friction – how much, we don’t really know without watching the system evolve for a few hundred million years. The gas famously lags the other components because it is collisional – we’re seeing the hydrodynamical effects of the two giant gas clouds smacking into each other. That’s different from the dynamical (gravity-only) friction that Pavel is talking about.

        Liked by 3 people

    1. Pavel, I don’t believe that your no-go argument applies to my (recently published) `transparent matter’ model https://doi.org/10.3390/sym12091534
      GR is such a well-tested, beautiful and natural (see the introduction to that paper) theory; we always find too little mass out there but never too much – why should that be so if GR is wrong and there is no `dark component’ to the energy-momentum tensor?

      Astronomy will forever remain limited to the passive collection of electromagnetic (and perhaps gravitational) waves from a single point of view. Fitting a deformation of terrestrial physics (which has survived a plethora of active tests) that is experimentally inaccessible otherwise is not really different from drawing epicycles, and a fit can always be achieved with enough of them: interpolation functions, additional fields, extra terms in the Lagrangian, etc.

      1. Your transparent matter model is not the standard cold dark matter model that Pavel is critiquing. So it might be an alternative, but it doesn’t contradict anything Pavel is saying.
        As for GR, I think we have to be careful with words. Obviously GR is correct, so far as it has been tested, and so far as it goes – which is not into the quantum realm. That there should be more to the story is the starting point of the entire field of quantum gravity.
        Contrary to popular mythology, GR does not pass every test. That we have to invoke dark matter is a result of it flunking the test of predicting motions in extragalactic objects. If Newton flunked this test for planets around stars, we would seek a more general theory. And it did flunk, albeit in a subtle way, for Mercury, whose excess precession is now explained by GR. (Newton of course explains the first 5,557 arcseconds/century of Mercury’s precession; the excess explained by GR is 43 arcseconds/century.) By comparison, the mass discrepancy problem in galaxies is not subtle.
        We don’t just “always find too little mass out there but never too much.” That’s true, but falls far short of describing what is observed. The extra amount that we find depends on the acceleration – the acceleration predicted for what we see in the normal matter. It’s very systematic. So it isn’t just more, unseen stuff. It is something that knows intimately well about the distribution of mass that we can see. This phenomenology was predicted in advance by MOND, which was the only theory to do so. So the question isn’t why GR is wrong, it is why MOND ever gets anything (let alone so much) right.
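        As a sanity check on the Mercury numbers above: the GR perihelion advance per orbit is Δφ = 6πGM/(c²a(1−e²)), and plugging in textbook orbital elements for Mercury recovers the famous 43″/century. A quick sketch:

```python
import math

G = 6.674e-11      # m^3 kg^-1 s^-2
M_SUN = 1.989e30   # kg
C = 2.998e8        # speed of light [m/s]
A = 5.791e10       # Mercury's semi-major axis [m]
E = 0.2056         # Mercury's orbital eccentricity
P_DAYS = 87.969    # Mercury's orbital period [days]

# GR perihelion advance per orbit, in radians
dphi = 6 * math.pi * G * M_SUN / (C**2 * A * (1 - E**2))

orbits_per_century = 100 * 365.25 / P_DAYS
arcsec = dphi * orbits_per_century * (180 / math.pi) * 3600
print(round(arcsec, 1))  # ≈ 43.0 arcseconds per century
```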

        Liked by 2 people

        1. I agree. But it is still `dark stuff’ in the sense of retaining GR and only modifying the e-m tensor. The point of my model and its motivation is that the r.h.s. of Einstein’s equations is a mess – the “cheap-wood wing of the palace” – and we have plenty of reasons to be suspicious of its present state regardless of the missing matter problem. My strong suspicion is that, once mundane matter is properly represented by the e-m tensor, a transparent, electromagnetic component leading to MOND-like behavior is inevitable (in single galaxies; in clusters the situation is much more complicated – again, conforming with MOND’s failure there). MOND is therefore just a coincidence from that perspective.

          1. Let me elaborate a little. Einstein’s concern with his own (field) equations stemmed from his disbelief in the energy-momentum (e-m) tensor beyond its phenomenological role; he did not believe that an e-m tensor capable of representing the “quantum” – the true nature of matter – could be written down (see p. 10 in the excellent paper https://arxiv.org/pdf/1803.09872.pdf). I think that it actually can! When you do that, quantum mechanics – another great mystery of twentieth-century physics – becomes a prosaic statistical description of that e-m tensor, and `quantum gravity’ just a (yet to be written) full-fledged version thereof, involving also the metric tensor.

            1. So first, as you say, clusters are a mess, so you shouldn’t put too much weight on them. I take the discrepancy MOND suffers there seriously, but it is a much smaller problem than it is portrayed to be. It’s roughly a factor of 2 in mass, which is 20% in velocity dispersion. Dark matter models scream SUCCESS! when they get anywhere near that close.
              I have little patience for “just a coincidence” arguments. I’ve thought about this a long time. No way is this a coincidence. MOND-like behavior must happen for a reason. A physical reason. Anything else is a form of magical thinking.
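              For anyone wanting to check the conversion above: in the deep-MOND regime the mass of a pressure-supported system scales as M ∝ σ⁴, so a factor of 2 in mass is only about 19% in velocity dispersion:

```python
# In the deep-MOND regime, M ∝ σ^4, so a factor f in mass
# corresponds to a factor f**0.25 in velocity dispersion.
mass_factor = 2.0
sigma_factor = mass_factor ** 0.25
print(round((sigma_factor - 1) * 100))  # ~19%: the "20% in velocity dispersion"
```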

              Liked by 1 person

              1. In my paper I give an explicit physical reason. Roughly speaking, each particle must carry with it an electromagnetic `halo’, composed of both of its advanced and retarded fields (contrary to other proposed solutions to the missing matter problem, this isn’t some assumption contrived to that end). That halo must have an `isothermal form’ by virtue of Maxwell’s equations alone, hence the flattening of the rotation curve. In dense regions there is a significant, destructive interference effect between different halos (again, because of reasons unrelated to the missing mass problem) suppressing the contribution of this electromagnetic component of the e-m tensor relative to that of matter. The `coincidence’ is that in dense regions the Newtonian acceleration is large, so dark matter seems to `kick-in’ only below a certain density.
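                For reference, the step from an `isothermal form’ to a flat rotation curve is the standard one: a ρ ∝ r⁻² profile gives an enclosed mass that grows linearly with radius, so v = √(GM(<r)/r) is constant. A minimal numerical illustration (the density normalization below is arbitrary, not taken from the paper):

```python
import math

G = 6.674e-11                # m^3 kg^-1 s^-2
RHO0, R0 = 1e-21, 3.086e19   # arbitrary normalization: density [kg/m^3] at radius R0 [m]

def v_circ(r):
    """Circular speed for an isothermal profile rho(r) = RHO0*(R0/r)**2.
    Enclosed mass M(<r) = 4*pi*RHO0*R0**2 * r grows linearly with r,
    so v = sqrt(G*M/r) is independent of radius."""
    m_enclosed = 4 * math.pi * RHO0 * R0**2 * r
    return math.sqrt(G * m_enclosed / r)

# Same speed at 10 kpc and 50 kpc: the rotation curve is flat.
print(v_circ(10 * 3.086e19), v_circ(50 * 3.086e19))
```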

        2. As for your defense of modified gravity – Newton didn’t set out to outperform Kepler; his motivation was gravity itself – falling apples and flying cannonballs. Similarly, Einstein’s concern with Newtonian gravity wasn’t Mercury’s precession, but its incompatibility with SR, describing terrestrial physics as well. Modified gravity is following the much less fruitful path of Ptolemy and Kepler.

          1. Your model sounds promising then – models that “do the right thing” without contrivance are what we need to make progress.
            As for Newton & Einstein, I agree with your point that they explained more than they set out to do, and that’s a virtue. However, they didn’t exist in a vacuum; Newton was certainly aware of Kepler and other planetary phenomena, and the falling apple was about making the conceptual connection between celestial and terrestrial physics. So it is an oversimplification of the history of science to say these are different paths – an oversimplification that is perhaps unavoidable in this format, so let’s leave it at that.
            I disagree with your assertion that modified gravity is more in the realm of Ptolemy and Kepler than Newton and Einstein. What I personally do as an observer is perhaps in the realm of Kepler: identifying empirical laws for galaxies as he did for planets. As a theorist, Milgrom set out to explain the then-newly discovered flat rotation curves – no one would suggest modifying gravity (or invoke invisible mass) without a powerful motivation like that. In so doing, he predicted a wealth of unexpected phenomena beyond that. Lots of those predictions have since come true. Bekenstein, hardly a foe of GR, made the point a long time ago that MOND had more successful predictions to its credit than GR had at the time of its widespread acceptance. Since then, a number more have been corroborated. It is a frustration that so many scientists working on the problem seem to be unaware of this – it’s like watching people play solitaire with an incomplete deck of cards. For a recent list, see Table 1 in https://arxiv.org/abs/2004.14402 .

            Liked by 1 person

            1. Not in the least do I underappreciate the importance of Milgrom’s (and Kepler’s…) phenomenology. I just find the conclusion, expressed in Pavel’s comment, that we can say goodbye to GR in the MOND regime, quintessentially epicyclish.

              1. Huh. I was only motivated to consider MOND after long work on dark matter led me to conclude that dark matter was inevitably epicyclic. So, I guess the question remains: how do we know when to try something else?

            2. Prof. McGaugh,

              Robert Wilson makes a somewhat offhand comment: “On an astrophysical scale, all we have is an equation that relates mass to gravitational acceleration, which is usually interpreted as saying that mass causes gravity, but which can equally well be interpreted as saying that gravity causes mass.”

              If we use the curvature of space as a description of gravity, doesn’t it extend throughout the relationship between matter and energy? If energy is released from matter, it explodes, and that takes up more space. The opposite would be that as matter coalesces out of energy – waves collapsing, particles out of fields – it takes up less space. Given that intergalactic expansion is based on measuring light, and the inward curvature of gravity is based on measuring matter, it seems there is a very basic relationship between the two sides of this dynamic. Basically, galaxies are light radiating out as matter falls in. So if there is more gravitational effect than can be associated with the amount of matter, Robert’s point above might be a hint of where to look: this inward curvature extending throughout the process.
              Just an idea from a long time observer

                1. One recurrent idea for how these things come about is the relation of a mass, and the space it is embedded in, to the larger universe. Newton worried about this with his water-in-a-bucket thought experiment; Mach thought long and hard on this, and his ideas strongly influenced Einstein, who reputedly tried hard to incorporate Mach’s principle into GR but ultimately gave up. In the context of our discussions here, this might be phrased as a connection of inertial mass to the background established by everything else in the universe (what is Newton’s bucket rotating relative to?). Perhaps – and this is speculation – an entity only acquires inertial mass when it suffers an acceleration relative to Mach’s everything-else. So MOND happens, in this picture, when things get close to that limit – their inertial mass asymptotes towards zero, and they get easier to push around by normal inverse-square-law gravity; no need to modify GR. Except that GR was motivated in part by the equivalence of inertial mass and gravitational charge, which this throws out the window (and is perhaps why Mach’s principle could not be incorporated). So on the one hand, it seems anathema to GR; on the other, I don’t see that we’d have to throw out the baby with the bathwater.

                Liked by 2 people

            3. Professor,

              One aspect of, I guess it’s SR, that doesn’t seem to be mentioned is that the overall vacuum would seem to have an equilibrium state, as the frame with the fastest clock and longest ruler would be closest to it. As opposed to a frame moving at the speed of light in this vacuum, where the clock has stopped and the ruler has shrunk to zero. Which could possibly be that essential frame, to which everything in the universe, including the speed of light, is in relation.
              So if space is an equilibrium state and it isn’t bound by any defined frame otherwise, then it is also infinite. So it starts at zero and goes to infinity.
              What fills space is energy, which radiates toward infinity. Though expressing form and definition, even if just the fluctuations of waves. Which coalesce/fade toward equilibrium.
              This relationship seems to be what galaxies express, as the light radiates outward, while the mass gravitates inward, in a sort of cosmic convection cycle. Yes, matter falls into the black holes at the center, but there are also enormous jets of energy being shot out the poles, along with all the energy radiated away, as the mass coalesces into ever more dense structures. So might these black holes be an eye of a storm?
              This relationship between energy and form is certainly expressed in our physiology, since we have the digestive, respiratory and circulatory systems processing the energy driving us, while the central nervous system seems particularly adapted to recognize and distinguish form and structure.
              Einstein said that if there truly is a theory of everything, it should be explainable to a five year old. Might it be thermodynamics? Yes, nature is extremely complex and we all deal with that every time we walk out the door, but what drives it, other than thermodynamic feedback loops? It’s the geology and the atmosphere that gave rise to us, why not galaxies and the universe?

    2. Pavel,

      I don’t think it is Newtonian/Einsteinian gravity per se that is at fault, but the inability of theorists to devise an appropriate implementation of Newtonian gravity for calculating the rotation curves of individual, orbiting stars in a complex multibody system such as a disk galaxy. Instead, the default approach has been to estimate the expected curves using either the Keplerian or Newtonian approach, which effectively results in an erroneous two-body solution to a quite obviously multibody problem. That is the entire basis for the dark matter claims in galaxies, going all the way back to the Rubin & Ford papers of the early 80s, which repeatedly refer to an “expected Keplerian decline” in the rotation curves.

  7. I have nothing substantial to say, but I’d like to thank you for spending so much time and effort to help humanity understand the universe a bit better.

    I think there’s a sort of lottery that young researchers engage in, and “listening to theoreticians who make predictions that get falsified”, is one of the potential career outcomes – due to no fault of their own.

    I wish there was more recognition for dead-end theories that were posited in good faith and with good effort – and a stronger post mortem reckoning with advocates of bad theories – but I guess in this regard academia is just as messy as all the other fields of R&D humanity engages in …

    Liked by 3 people

    1. That would be more of a psychological area of consideration. Which might prove very helpful for the other fields to sort out their political impulses.
      Since it’s been shown that consciousness doesn’t really make the immediate decisions, it seems likely its recursive function is imagination: to consider all the varied possibilities, in order that our future responses are better informed. When we are awake, this effect has to constantly reset to new input, while when we are asleep, the mind just creates narratives out of memories, with one leading to the next.
      What is worth considering is that when we are in a position to control the input, whether it’s because we are very wealthy or simply because we are staring at a computer screen and can switch our attention, then we do go into more of a dream state/flow. How much does this explain many institutional behaviors – political, academic, religious, even military strategists – where they can create and potentially live out their imagined feedback loops?

  8. Dear David,

    to access the papers one needs to type in the whole URL as given above. The link identified by the system is broken because it does not accommodate all characters (for whatever technical reason).

    The URLs again:
    https://ui.adsabs.harvard.edu/abs/2015CaJPh..93..169K/abstract
    and
    https://ui.adsabs.harvard.edu/abs/2017MNRAS.467..273O/abstract

    In this latter publication we were forced to take out overly critical comments, but the implication of the study is that the galaxies in the M81 group must have arrived near-simultaneously and met in one place in order to be seen today as a compact group, because a few hundred Myr later they will have merged _if_ dark matter exists. Alternatively, and much more intuitively, they have been orbiting about each other many times, as indicated by the vast clouds of gas joining them all – which is easily allowed in Milgromian gravitation, where dark matter does not play a role in galaxies and therefore does not lead to shrinkage of the system through Chandrasekhar dynamical friction.

    Pavel Kroupa (University of Bonn and Charles University in Prague)

  9. Replace the “..” in the links from Pavel Kroupa with two ordinary dots. I don’t know if they are a different symbol (they are two distinct characters), but after I replaced them, it worked for me.
    And by the way, David – please don’t force Dr. McGaugh to close the comments again….

  10. Worth mentioning that the direct detection experiment exclusions only scratch the surface. The combination of multiple limits from different perspectives is far more stringent. One of the biggest collective threats to the DM paradigm is to combine the exclusion of DM that interacts directly with baryonic matter at any meaningful threshold with the increasingly required interaction of DM with baryonic matter to some degree to reproduce the inferred halo distributions seen over a wide range of circumstances.

    If DM must be a million times more weakly interacting than neutrinos, but DM must have interactions stronger than that to reproduce the halos that are inferred from increasingly precise measurements of the dynamics of visible matter and lensing, then the DM particle paradigm doesn’t work.

    Liked by 1 person

    1. The flip side of education is that it is indoctrination. Think of one’s native language and how it becomes the basis of our thoughts. Scientists are people. When we are highly trained to a particular mind set and join others equally committed, there’s no going back. It’s like a stem cell becoming an adult cell. Change is generational, as succeeding generations sense the system is breaking and react to it, rather than commit to it.

  11. I seem to be having a problem posting a comment using wordpress. I tried a new password at wordpress, but the comment didn’t show up. Possibly I’m using the wrong email account.

  12. This time my comment showed up. Here’s what I tried to post, hoping it comes up:

    Dr. McGaugh,

    I’m currently reading your 2015 paper on the arXiv titled “The Surface Density Profile of the Galactic Disk from the Terminal Velocity Curve”. There is such a smorgasbord of interesting papers that you have written, alone or in collaboration with others, that I find myself quite busy digesting them (as best I can from a layman-level background), and learning a great deal about the workings of the Universe. From those papers, and your writings on Triton Station, I can appreciate the remarkably compelling case for MOND. One thing that really stood out to me was how well MOND can handle Sancisi’s Law, while Lambda-CDM falls flat on its face in trying to accommodate this galactic phenomenon.

    Just this morning I came across a new study on phys.org regarding the dark matter/normal matter (DM/NM) ratio in the Dragonfly 44 galaxy. The original study computed the DM/NM ratio to be 10,000 to 1, way outside the range previously seen. This really stumped dark matter advocates. But the new study, described as exhaustive, reduced that ratio to 300 to 1, much more in line with values observed elsewhere. In the back of my mind, though, I have to wonder if there wasn’t some force-fitting to bring this outlier in line to salvage the dark matter paradigm. But it could be that they have done a very thorough and reputable job of the highest standards. Even so, it appears that Dragonfly 44 has set a new upper benchmark for the DM/NM ratio.

    1. Actually regarding Dragonfly 44 I think the bias may have been precisely the other way around. That original claim for a very high dark:normal matter ratio came from a group led by Van Dokkum. Same folks who claimed that DF2 and DF4 had no dark matter (which was thoroughly debunked here: https://academic.oup.com/mnras/article/486/1/1192/5380810).

      Seems to me they are just trying to prove that some regions have lots of dark matter and others none at all (which has to be true if dark matter exists and is weakly interacting). Even if they have to assume unrealistic distances and add “correction factors” to boost their globular cluster count in order to do it.

      1. There is a long history here. I started my career studying low surface brightness galaxies. The properties of these galaxies were essentially unknown when MOND was developed, but it made strong predictions for them: the lower the surface brightness, the lower the predicted acceleration, and the stronger the MOND effect. It came as a great surprise and shock to me when I realized that MOND was the only theory that correctly predicted what I observed. I had been studying low surface brightness galaxies because I found them interesting, not because I wanted to use them to test MOND. Indeed, I (like many) was largely unaware of MOND; it was a coincidence that I had conducted an experiment that one might have designed specifically to test it.
        The Dragonfly-discovered galaxies are the next generation of even lower surface brightness galaxies, so also provide a good test. Trouble with such dim galaxies is that it is difficult to get good data for them. So you have to take things with a grain of salt. When DF44 was first measured, the reported velocity dispersion was much too high for MOND. It was an outlier from all the other similar galaxies, and the data were kinda dodgy, so I figured it was likely to fall back into line as the data improved. It did: https://arxiv.org/abs/1906.01631
        Same story with DF2. The first report made it sound impossible for MOND. On closer examination, it was in good agreement: https://arxiv.org/abs/1804.04167.
        At this juncture, I have fact-checked literally hundreds of cases like these. It happens over and over and over that someone will point out some case that seems problematic for MOND. On closer inspection, this never holds up. Never. I’ve seen this movie enough times; it’s time to move on to figuring out what it means rather than claiming “oo! oo! THIS case is a problem!” I’m more concerned with the forest than the occasional outlying tree.
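        The surface-brightness logic above can be made quantitative with the radial acceleration relation fitting function of McGaugh, Lelli & Schombert (2016), g_obs = g_N/(1 − e^(−√(g_N/a₀))): the lower the Newtonian acceleration (i.e., the lower the surface brightness), the larger the implied mass discrepancy. A minimal sketch:

```python
import math

A0 = 1.2e-10  # MOND acceleration scale [m/s^2]

def boost(g_newton):
    """Ratio g_obs/g_N from the radial acceleration relation
    fitting function of McGaugh, Lelli & Schombert (2016)."""
    return 1.0 / (1.0 - math.exp(-math.sqrt(g_newton / A0)))

# High surface brightness (g_N = 10*a0): nearly Newtonian.
# Low surface brightness (g_N = 0.1*a0): large mass discrepancy.
print(round(boost(10 * A0), 2), round(boost(0.1 * A0), 2))  # ~1.04 vs ~3.69
```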

        1. Ah yes. I certainly did not mean to imply otherwise. Van Dokkum has been doing great work in discovering these types of objects in the first place (and he apparently shares my love for Dutch literature so I can’t help but like the guy). I mean if there’s debate about a distance being 13 Mpc or 20 Mpc the uncertainty is about half of the total quantity. And it is natural to interpret such uncertainty using one’s own experiences. Time will tell I guess.

            1. I do not mean to contradict anything you say; I’ve just been down this road many times. Some of it with Pieter – he was a grad student at Groningen when I made a habit of visiting there as a postdoc. Some of the best science has been done in the Netherlands.

  13. As a theoretician, I distinguish two cases: elementary WIMPs, i.e., new particles that are made up using new theories that have no experimental basis, and WIMP candidates that fit into the standard model. Recently, I read about hexaquarks as DM candidates. Do you know the experimental status on this particular option?

  14. This particular plot was generated by DMtools (http://dmtools.brown.edu) with references in the caption. It appears that this does not specify which Ellis et al., but my guess is that it is Ellis, Ferstl, & Olive, Physics Letters B, 481 (2000) 304. I wish I could find again (I did once; I can’t right now) the conference proceeding in which Ellis apologizes to experimentalists for moving the goal posts by six orders of magnitude – a “sorry, not sorry” phrasing that enraged one colleague enough that he left the field. Well, that, and the implicit “thanks for falsifying our prediction; we’re going to ignore that and ask that you do it all over again, only the problem is much harder now.”

    Another related reference of that era for that region of parameter space is Jungman, Kamionkowski, & Griest, Physics Reports, 267 (1996) 195. Physics Reports is a review journal: this was already old news in 1996.

  15. Dear Stacy,
    Thank you for your blog and your perseverance in trying to find an alternative to dark matter.
    The acceleration parameter a0 in MOND is of the same order as cH0; it is of the same order as the acceleration of the Universe’s expansion, and also of the same order as the deceleration of the Hubble parameter. Do you think this is just a coincidence, or could it be a pointer to a more basic theory underlying MOND phenomenology, as Milgrom writes?
    If so, have you ever thought of putting together a model including some elements of Melia’s Rh=ct, Dirac’s LNH and MOND ?

  16. It is certainly an intriguing numerical coincidence. Having been misled previously by the “WIMP miracle”, I worry that it might just be a coincidence. So I have not personally tried putting together such models, but it is certainly a worthwhile endeavor.
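    For concreteness, the coincidence is easy to check numerically. Assuming a round value of H₀ = 70 km/s/Mpc, cH₀ comes out within an order of magnitude of a₀, and cH₀/2π within about 10% of it:

```python
import math

C = 2.998e8               # speed of light [m/s]
H0 = 70 * 1e3 / 3.086e22  # assumed H0 = 70 km/s/Mpc, converted to 1/s
A0 = 1.2e-10              # MOND acceleration scale [m/s^2]

cH0 = C * H0
print(cH0)                 # ~6.8e-10 m/s^2: same order of magnitude as a0
print(cH0 / (2 * math.pi)) # ~1.1e-10 m/s^2: within ~10% of a0
```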

    1. That’s reassuring that you think it is a worthwhile endeavor. I have tried to put together such a model and I have published it (not a great journal, but that’s a start). In the paper, I propose a novel MOND formula where the acceleration parameter is exactly cH0/2 and it does not require an interpolating function. If you have a spare moment, I would welcome your feedback on it. You can download the paper here:

      Click to access jhepgc_2020101615242805.pdf

  17. I wonder what you think about the work of Claudia de Rham (who, I saw, was for a period at CWRU, so maybe you know her personally): https://arxiv.org/abs/1011.1232
    From my understanding, massive gravity goes in direct opposition to MOND at galactic scales – i.e., it implies a lower strength at large distances (small accelerations), while MOND implies the other way around. Would this explain only the Lambda part of LCDM, meaning that CDM is still necessary?
    I’m curious if you had some discussions about these aspects (while she was at CWRU, for instance).

    1. Yes, you have it right.
      We had no deep discussions. The people working on explaining Lambda by modifying gravity usually (though not always) treat it as a separate problem, and largely accept the existence of dark matter.

      1. Hi,

        any way for you to blog on “Testing the Strong Equivalence Principle: Detection of the External Field Effect in Rotationally Supported Galaxies” by Chae, Kyu-Hyun; McGaugh, Stacy S.; Li, Pengfei; Schombert, James M. (arXiv:2009.11525)?

        Could some type of dark matter, say superfluid dark matter, create effects like the External Field Effect?

  18. I have been following several blogs over the past few years that discuss dark matter. It appears that an interpretation of MOND is often being missed. Yes, it is modified NEWTONIAN dynamics, but not necessarily modified General Relativity. It can be modified GR, but it can also be standard GR with self-interactions included to obtain correct solutions for the relevant geometries. I know of three recent programs that are including self-interactions in standard general relativity and claiming very good results with no dark matter. Deur started working on this quite some time ago. I would hope that more people would look into these ideas. Yes, I initially thought that MOND was just curve fitting with the goal of modifying GR, but maybe one is not changing GR but just finding the correct solutions to the equations of GR.

    1. It’d be nice if there were just some detail of GR that we had overlooked. There are claims to this effect, but I don’t think it is so simple. These effects are nowhere near large enough to have the desired impact. I hope I am wrong about that, but it isn’t like I haven’t thought about it.

      There is also a lot of misinformation among scientists about what MOND is and is not. One common presumption is that MOND contradicts GR. This is silly, but it leads to an overemphasis on modification. Extension would be more like it. Indeed, if MOND is a modification of inertia, it isn’t obvious that GR needs to change at all, even though it violates the original formulation of the equivalence principle – a formulation no one seems to use any more. The only theory that I’m aware of that rigorously imposes the equivalence of inertial mass and gravitational charge is Yilmaz’s GR variant.

      1. How much gravitational energy is calculated in a galaxy between all the massive objects, and even photons and neutrinos, such as in the Milky Way and Andromeda, and is this also accounted for? What about dark energy as a source that also gravitates?

  19. One also has to fold in the nonlinearity of GR.

    Here is my problem. I agree that it is time to drop a particle physics model for dark matter. And I am told that everything else is ruled out. And MOND is not a theory. It is phenomenology. So is that it?

    1. The data that “look like” MOND compose a phenomenology that is observed in the data, and that is true regardless of what name we call it by. That is not the same as MOND as a theory, and it is not correct to dismiss MOND itself as mere phenomenology.

      Whether MOND is a theory depends on your definition of a theory. I have heard very different takes on this from serious scientists. Some assert that the lack of general covariance means that it is not a theory. That seems awfully restrictive to me: by that definition, nothing qualified as a theory before Einstein. Too bad we can’t ask him, but I would be surprised if Einstein thought there were no theories before his. Is Newtonian gravitation a theory? MOND encapsulates Newton, so if Newtonian gravity is a theory, then MOND is also a theory. It is incomplete for not also encompassing GR, but that doesn’t make it not a theory.
