A quick note to put the acceleration discrepancy in perspective.

The acceleration discrepancy, as Bekenstein called it, more commonly called the missing mass or dark matter problem, is the deviation of dynamics from those of Newton and Einstein. The quantity D is the amplitude of the discrepancy, basically the ratio of total mass to that which is visible. The need for dark matter – the discrepancy – only manifests at very low accelerations, of order 10^{-10} m/s/s. That’s one part in 10^{11} of what you feel standing on the Earth.

Astronomical data span enormous, indeed, astronomical, ranges. This is why astronomers so frequently use logarithmic plots. The abscissa in the plot above spans 25 orders of magnitude, from the lowest accelerations measured in the outskirts of galaxies to the highest conceivable on the surface of a neutron star on the brink of collapse into a black hole. If we put this on a linear scale, you’d see one point (the highest) and all the rest would be crammed into x=0.

Galileo established that we live in a regime where the acceleration due to gravity is effectively constant: g = 9.8 m/s/s. This suffices to describe the trajectories of projectiles (like baseballs) familiar to everyday experience. At least it suffices to describe the gravity; air resistance plays a non-negligible role as well. But you don’t need Newton’s Universal Law of Gravity; you just need to know that everything experiences a downward acceleration of one gee.

As we move to higher altitude and on into space, this ceases to suffice. As Newton taught us, the strength of the gravitational attraction between two bodies decreases as the distance between them increases. The constant acceleration recognized by Galileo was a special case of a more general phenomenon. The surface of the Earth is a [very nearly] constant distance from its center, so gee is [very nearly] constant. Get off the Earth, and that changes.

In the plot above, the acceleration we experience here on the surface of the Earth lands pretty much in the middle of the range known to astronomical observation. This is normal to us. The orbits of the planets in the solar system stretch to lower accelerations: the surface gravity of the Earth exceeds the centripetal acceleration it takes to keep Earth in its orbit around the sun. This decreases outward in the solar system, with Neptune experiencing less than 10^{-5} m/s/s in its orbit.
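These numbers are easy to check with a back-of-the-envelope sketch (planetary distances below are rounded mean values, so the results are approximate):

```python
# Acceleration toward the sun required to hold a circular orbit: a = G * M_sun / r^2.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
AU = 1.496e11      # astronomical unit, m

def solar_acceleration(r_in_au):
    """Sun's gravitational acceleration (m/s^2) at a distance of r_in_au."""
    return G * M_SUN / (r_in_au * AU) ** 2

a_earth = solar_acceleration(1.0)      # ~6e-3 m/s^2 -- far below the 9.8 we feel standing here
a_neptune = solar_acceleration(30.1)   # below 1e-5 m/s^2, as quoted above
```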

We understand the gravity in the solar system extraordinarily well. We’ve been watching the planets orbit for ages. The inner planets, in particular, are so well observed that subtle deviations were noticed long ago. Most famous is the tiny excess precession of the perihelion of the orbit of Mercury, first noted by Le Verrier in 1859 but not satisfactorily* explained until Einstein applied General Relativity to the problem in 1916.

The solar system probes many decades of acceleration accurately, but there are many decades of phenomena beyond the reach of the solar system, both to higher and lower accelerations. Two objects orbiting one another intensely enough for the energy loss due to the emission of gravitational waves to have a measurable effect on their orbit are the two neutron stars that compose the binary pulsar of Hulse & Taylor. Their orbit is highly eccentric, pulling an acceleration of about 270 m/s/s at periastron (closest passage). The gravitational dynamics of the system are extraordinarily well understood, and Hulse & Taylor were awarded the 1993 Nobel prize in physics for this observation that indirectly corroborated the existence of gravitational waves.
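The periastron figure follows from Newton's law applied to round numbers for the system. The masses and separation below are my rounded assumptions (roughly 1.4 solar masses each, periastron separation of order a solar radius), so the result only agrees with the quoted ~270 m/s/s to within a factor of a few:

```python
# Acceleration of one neutron star due to its companion at periastron: a = G * m / r^2.
# Masses and separation are rounded literature values, assumed for this sketch.
G = 6.674e-11       # m^3 kg^-1 s^-2
M_SUN = 1.989e30    # kg

m_companion = 1.4 * M_SUN   # companion neutron star mass
r_periastron = 7.5e8        # periastron separation in m (~1.1 solar radii)

a_periastron = G * m_companion / r_periastron ** 2   # a few hundred m/s^2
```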

Direct detection of gravitational waves was first achieved by LIGO in 2015 (the 2017 Nobel prize). The source of these waves was the merger of a binary pair of black holes, a calamity so intense that it converted the equivalent of 3 solar masses into the energy carried away as gravitational waves. Imagine two 30 solar mass black holes orbiting each other a few hundred km apart 75 times per second just before merging – that equates to a centripetal acceleration of nearly 10^{11} m/s/s.
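That last figure is just circular-motion kinematics, a = ω²r. Using the round numbers quoted above (75 orbits per second, and assuming "a few hundred km apart" means roughly 350 km):

```python
import math

f_orbit = 75.0       # orbits per second just before merger
separation = 350e3   # assumed separation in m ("a few hundred km")
r = separation / 2   # each black hole circles the common center of mass

omega = 2 * math.pi * f_orbit    # angular frequency, rad/s
a_centripetal = omega ** 2 * r   # ~4e10 m/s^2, approaching 10^11
```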

We seem to understand gravity well in this regime.

The highest acceleration illustrated in the figure above is the maximum surface gravity of a neutron star, which is just a hair under 10^{13} m/s/s. Anything more than this collapses to a black hole. The surface of a neutron star is not a place that suffers large mountains to exist, even if by “large” you mean “ant sized.” Good luck walking around in an exoskeleton there! Micron scale crustal adjustments correspond to monster starquakes.
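A Newtonian estimate of the surface gravity, GM/R², already lands above 10^12 m/s/s; the general-relativistic correction grows without bound as the star approaches collapse, which is what pushes the maximum toward 10^13. The mass and radius below are typical round values I've assumed, not a precise maximum-mass configuration:

```python
G = 6.674e-11      # m^3 kg^-1 s^-2
M_SUN = 1.989e30   # kg
C = 2.998e8        # speed of light, m/s

M = 2.0 * M_SUN    # a heavy neutron star
R = 11e3           # radius in m

g_newton = G * M / R ** 2   # ~2e12 m/s^2

# GR boosts the surface gravity by 1/sqrt(1 - 2GM/(R c^2)),
# a factor that diverges as the star nears collapse to a black hole.
compactness = 2 * G * M / (R * C ** 2)
g_gr = g_newton / (1 - compactness) ** 0.5
```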

High-end gravitational accelerations are 20 orders of magnitude removed from where the acceleration discrepancy appears. Dark matter is a problem restricted to the regime of tiny accelerations, of order 1 Angstrom/s/s. That isn’t much, but it is roughly what holds a star in its orbit within a galaxy. Sometimes less.

Galaxies show a large and clear acceleration discrepancy. The mob of black points is the radial acceleration relation, compressed to fit on the same graph with the high acceleration phenomena. Whatever happens, happens suddenly at this specific scale.

I also show clusters of galaxies, which follow a similar but offset acceleration relation. The discrepancy sets in a little earlier for them (and with more scatter, but that may simply be a matter of lower precision). This offset from galaxies is a small matter on the scale considered here, but it is a serious one if we seek to modify dynamics at a universal acceleration scale. Depending on how one chooses to look at this aspect of the problem, the data for clusters are either tantalizingly close to the [far superior] data for galaxies, or they are impossibly far removed. Regardless of which attitude proves to be less incorrect, it is clear that the missing mass phenomenon is restricted to low accelerations. Everything is normal until we reach the lowest decade or two of accelerations probed by current astronomical data – and extragalactic data are the only data that test gravity in this regime.

We have no other data that probe the very low acceleration regime. The lowest acceleration probe we have with solar system accuracy is from the Pioneer spacecraft. These suffer an anomalous acceleration whose source was debated for many years. Was it some subtle asymmetry in the photon pressure due to thermal radiation from the spacecraft? Or new physics?

Though the effect is tiny (it is shown in the graph above, but can you see it?), it would be enormous for a MOND effect. MOND asymptotes to Newton at high accelerations. Despite the many AU Pioneer has put between itself and home, it is still in a regime 4 orders of magnitude above where MOND effects kick in. This would only be perceptible if the asymptotic approach to the Newtonian regime were incredibly slow. So slow, in fact, that it should be perceptible in the highly accurate data for the inner planets. Nowadays, the hypothesis of asymmetric photon pressure is widely accepted, which just goes to show how hard it is to construct experiments to test MOND. Not only do you have to get far enough away from the sun to probe the MOND regime (about a tenth of a light-year), but you have to control for how hard itty-bitty photons push on your projectile.
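The tenth-of-a-light-year figure is just the radius at which the sun's Newtonian gravity drops to a0, r = sqrt(GM/a0), and the "4 orders of magnitude" claim follows from the inverse-square law at Pioneer's distance (taken here as roughly 70 AU). A quick check, assuming a0 = 1.2e-10 m/s²:

```python
import math

G = 6.674e-11          # m^3 kg^-1 s^-2
M_SUN = 1.989e30       # kg
A0 = 1.2e-10           # MOND acceleration scale, m/s^2
AU = 1.496e11          # m
LIGHT_YEAR = 9.461e15  # m

# Radius where the sun's gravity G*M/r^2 falls to a0:
r_mond = math.sqrt(G * M_SUN / A0)
r_mond_au = r_mond / AU               # ~7000 AU
r_mond_ly = r_mond / LIGHT_YEAR       # ~0.1 light-years

# Solar gravity at Pioneer's distance relative to a0:
a_pioneer = G * M_SUN / (70 * AU) ** 2
ratio = a_pioneer / A0                # ~10^4: four orders of magnitude above a0
```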

That said, it’d still be a great experiment. Send a bunch of test particles out of the solar system at high speed on a variety of ballistic trajectories. They needn’t be much more than bullets with beacons to track them by. It would take a heck of a rocket to get them going fast enough to return an answer within a lifetime, but rocket scientists love a challenge to go *real fast*.

*Le Verrier suggested that the effect could be due to a new planet, dubbed Vulcan, that orbited the sun interior to the orbit of Mercury. In the half century prior to Einstein settling the issue, there were many claims to detect this Victorian form of dark matter.

There is now a very simple way to calculate Hubble’s Constant: by inputting to an equation the numerical value of Pi, the speed of light (c) from Maxwell’s equations, and the value of a parsec. NO space probe measurements (with their inevitable small measuring / interpretation errors) are now required. Hubble’s Constant is ‘fixed’ at 70.98047 PRECISELY. This maths method removes the errors / tolerances that are always part of attempting to measure something as ‘elusive’ as Hubble’s Constant. This has very deep implications for theoretical cosmology.

The reciprocal of ‘fixed’ 70.98047 is 13.778 billion light years, BUT as this does not increase as time passes, it’s the Hubble distance ONLY.

The equation to perform this is: 2 × a megaparsec × light speed (c), with the total then divided by Pi to the power of 21. This gives 70.98047 kilometres per sec per megaparsec.

The equation to perform this can also be found in ‘The Principle of Astrogeometry’ on Amazon Kindle Books. This also explains how the Hubble 70.98047 ‘fixing’ equation was found. David.


Repeat your visit to Milgrom’s lecture. I promise, it will be worth your time.

You really need to read this paper that provides a theoretical underpinning for MOND behavior while solving the galaxy cluster problem. A. Deur, “Implications of Graviton-Graviton Interaction to Dark Matter” (May 6, 2009) https://arxiv.org/abs/0901.4005 (published at Phys.Lett.B676:21-24,2009). The first article in the series is A. Deur, “Non-Abelian Effects in Gravitation” (September 17, 2003) https://arxiv.org/abs/astro-ph/0309474 (not published).

One of the better and more intuitive introductions to the idea is in this PowerPoint presentation. http://www.phys.virginia.edu/Files/fetch.asp?EXT=Seminars:2693:SlideShow

Deur also makes a theoretical prediction, one that neither dark matter nor MOND reaches and which is borne out by observation: non-spherical systems have greater deviations from GR without DM. Alexandre Deur, “A correlation between the amount of dark matter in elliptical galaxies and their shape” (28 Jul 2014).

See also A. Deur, “A possible explanation for dark matter and dark energy consistent with the Standard Model of particle physics and General Relativity” (August 14, 2018), which expands the analysis to cosmology and dark energy phenomena. https://arxiv.org/abs/1709.02481v1

The scalar graviton approximation used by Deur is justified in A. Deur, “Self-interacting scalar fields in their strong regime” (November 17, 2016) https://arxiv.org/abs/1611.05515 (published at Eur. Phys. J. C77 (2017) no.6, 412). See also Diogo P. L. Bragança, José P. S. Lemos, “Stratified scalar field theories of gravitation with self-energy term and effective particle Lagrangian” (June 29, 2018) (confirming that scalar approximations can reproduce experimental tests).


The link to the 2014 paper is here: https://arxiv.org/abs/1407.7496


Thanks for the post and your great blog; I’ve been following and learning from it for some years.

My question concerning the RAR and a0: how well defined is a0 = 1.2 (in units of 10^-10 m/s^2) given a possible degeneracy with the form of the RAR (the interpolation function)? As I understand it, a0 is the fitting parameter for your specific RAR function. Did you try different functional forms? Would the variation of a0 be within the intrinsic scatter? My concern is to define a0 conceptually independently of the interpolation function. Is the BTFR of help? a0 depends only on the asymptotic form of the RAR, but you reported in a recent paper some issues with the slope (3.8 instead of 4). Does this affect the value of a0 via the BTFR?

Btw: I am a London-based tutor. Recently, discussing gravity in the solar system and beyond, I introduced the MOND laws to my students (instead of the familiar F = ma). My students were outraged (I am certain you know that situation :). Anyhow, I recommend using F = ma for the exams!!!


Awesome. Students are often outraged when one first introduces dark matter as well. Or at least they used to be – perhaps it is famous enough now not to sound immediately absurd.

You are correct about a0: the value is specific to the adopted functional form of the RAR. This is very well defined, so while one could choose a different function that gives a different a0, it wouldn’t be much different and it would have to be very nearly the same line. A deeper concern is that our best estimate of the stellar mass is still somewhat uncertain (by 20% or so), so the whole thing could shift a bit with that.
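For concreteness, here is a sketch of the fitting function published in the 2016 radial acceleration relation paper, showing exactly where a0 enters. The two limiting behaviors (Newtonian at high acceleration, deep-MOND at low acceleration) are what any alternative functional form would have to reproduce:

```python
import math

A0 = 1.2e-10   # m/s^2, the fitted acceleration scale

def g_obs(g_bar, a0=A0):
    """RAR fitting function: observed acceleration as a function of the
    acceleration g_bar predicted by the visible (baryonic) mass alone."""
    return g_bar / (1.0 - math.exp(-math.sqrt(g_bar / a0)))

# High accelerations: g_obs -> g_bar (Newtonian, no discrepancy).
# Low accelerations:  g_obs -> sqrt(g_bar * a0) (the deep-MOND limit).
```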

The BTFR does help; one can form a quantity with units of acceleration, a = V^4/(GM); then each point along the BTFR becomes an independent estimate of the characteristic acceleration. This gives a very similar answer (https://arxiv.org/abs/1102.3913). Subtle point that I failed to make clear in that paper: flattened systems like disk galaxies rotate slightly faster than the equivalent spherical system, so to put these numbers on the same scale, one needs a geometric correction factor x so that a0 = (xVf^4)/(GM). For finite-thickness galactic disks, x=0.8. This is a purely geometric effect, and can vary an imperceptibly tiny bit from galaxy to galaxy as they can have disks of different thickness.
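In code, the estimate just described looks like this. The galaxy numbers below are hypothetical, chosen to be typical of a disk galaxy, purely for illustration:

```python
G = 6.674e-11      # m^3 kg^-1 s^-2
M_SUN = 1.989e30   # kg

def a0_from_btfr_point(v_flat, m_baryon, x=0.8):
    """One BTFR point (flat rotation speed in m/s, baryonic mass in kg)
    gives an independent estimate a0 = x * Vf^4 / (G * M), where x = 0.8
    is the geometric correction for a finite-thickness disk."""
    return x * v_flat ** 4 / (G * m_baryon)

# Hypothetical but typical disk galaxy: Vf = 150 km/s, Mb = 2.5e10 M_sun
a0_est = a0_from_btfr_point(150e3, 2.5e10 * M_SUN)   # ~1.2e-10 m/s^2
```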

The method I just described assumes the BTFR slope is 4. So yes, it’s a problem if the slope is really 3.8 (or whatever value that is not exactly 4). However, the very best fit of a line is never going to be exactly 4.00000, so the question becomes whether we can tell the difference. The short answer, in my evaluation, is no – 3.8 is practically the same as 4.0 at the accuracy of current data. There are others who would dispute that, but to do so they have to believe the formal errors that we provide more than I do. Even then, it is a matter of saying “THIS galaxy has an ever so slightly different a0 from THAT galaxy” without realizing that there doesn’t have to be an a0 at all, or that it could and should vary by orders of magnitude rather than a few percent.


Thanks for your detailed reply and the link to your paper which I carefully studied.

It is completely evident that there is an acceleration scale in the data, regardless of its exact value.

In my mind the situation is similar to Special Relativity, with one big difference though. In SRT we know about an absolute velocity scale not only from deviations from Newtonian mechanics via the gamma factor, but also from direct measurement of the velocity of the photon. In other words, we have a direct connection between ‘modified Newtonian mechanics’ and a microscopic quantity (the velocity of the photon). For that reason SRT is the proper mechanics of relativistic particles, not just modified Newtonian mechanics.

In the case of ultra-weak gravity, nature is playing hide and seek. We only know the modification of Newtonian mechanics (MOND), not its microscopic origin (I could not make much sense of Verlinde’s proposal). In the absence of experimental evidence we have to keep guessing.

Last comment: it would be interesting to present the graph of the BTFR, with error bars but without physical scales, to a microbiologist, say. Would that scientist infer a natural law from the database? Or rather explain the data with some sort of feedback mechanism?


Hello,

when you mention

“I also show clusters of galaxies, which follow a similar but offset acceleration relation. The discrepancy sets in a little earlier for them (and with more scatter, but that may simply be a matter of lower precision)”

Is your proposal in any way related to the literature on extended MOND,

“Generalizing MOND to explain the missing mass in galaxy clusters” by A. Hodson,

which modifies gravity yet again to explain galaxy clusters, since MOND itself fails there without dark matter?


I’m not proposing any solution here; I’m just pointing out that galaxy cluster data are slightly offset from the data for individual galaxies. This might be due to dark matter (which could be normal baryons in this case, not necessarily something new and exotic) or it could be an indication that MOND was just the first approximation of some more general theory (e.g., https://arxiv.org/abs/1207.6232).


I’ve not seen much blogging or popular writing on extended MOND similar to the paper you mention – extending MOND for galaxy clusters, since the original MOND proposed by Milgrom doesn’t do enough. Could extended MOND also explain the large-scale structure of the universe like dark matter does, if MOND doesn’t?

If neutrino masses turn out to be, say, 1.85 eV according to one paper*, could that explain MOND and galaxy clusters, or is 1.85 eV still not heavy enough? 1.85 eV would be problematic for standard cold dark matter cosmology.

*Nieuwenhuizen, T. M. (2016). “Dirac neutrino mass from a neutrino dark matter model for the galaxy cluster Abell 1689”. Journal of Physics. Conference Series. 701 (1): 012022. arXiv:1510.06958


To answer your subsequent questions, I don’t think it is correct to say MOND doesn’t explain large scale structure. It has done a good job of predicting the early occurrence of large structures and the emptiness of voids – both problems for LCDM. What seems to be meant by the assertion that it doesn’t work is that it is a much harder problem to compute the power spectrum, which LCDM fits well (with ample free parameters to do so… that’s a large part of the reason we had to invent both dark matter and dark energy). That’s good, but the power spectrum isn’t the only thing that matters, nor is it obvious that LCDM is the only way it can be done.

As for the neutrino mass – any sum of neutrino masses in excess of 0.12 eV kills LCDM (0.12 is the most recent limit from Planck), as the entire vaunted edifice of structure formation doesn’t work with too much hot dark matter. The minimum mass from neutrino oscillations is 0.06 eV, so neutrino masses have to lie in that narrow factor-of-two window. For MOND, a heavy neutrino (of order 1 eV) would solve the problem with the excess missing mass in clusters and might actually help with structure formation (the problem isn’t that MOND can’t form structure, but that it is a bit too eager to do so).
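The cosmological stakes here are easy to quantify with the standard relation Ω_ν h² ≈ Σm_ν / 93 eV (the coefficient varies slightly between references; 93.14 eV is a common choice, assumed below):

```python
def omega_nu_h2(sum_masses_ev, coeff=93.14):
    """Neutrino density parameter times h^2, for a given neutrino mass sum in eV."""
    return sum_masses_ev / coeff

light = omega_nu_h2(0.12)       # at the Planck limit: ~0.0013, cosmologically minor
heavy = omega_nu_h2(3 * 1.85)   # three ~1.85 eV neutrinos: ~0.06
# With h ~ 0.67 (h^2 ~ 0.45), the heavy case gives Omega_nu ~ 0.13:
# a hot dark matter component rivaling the total matter budget LCDM allows.
```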

The formal upper limit is < 2 eV, though I've heard it argued that recent results push this down to 1.5 eV. I guess this is too boring to write up formally. But it is a possibility that an experiment like KATRIN will detect a neutrino mass that exceeds what LCDM can tolerate (< 0.12 eV), as experimentally the allowed range is presently 0.06 – 2 eV.


Can the Breakthrough Starshot initiative be used to test the MOND regime?


Unfortunately, I don’t think so. It’ll get far enough away, but to get there it will be accelerated at >>>> a0. Once it is between the stars, deviations from its super-fast trajectory caused by the tiny acceleration of MOND will be practically unnoticeable. This test is on the wrong end of the acceleration plot!


I suspect that the fundamental reason that gravity must be modified (perhaps by torsion or curvature) is that there exists a black hole at the center of the universe.

This paradigm could support the convergence of numerous theoretical propositions that presently appear to lack an underlying fundamental basis.


There’s no reply option under

“any sum of neutrino masses in excess of 0.12 eV kills LCDM (0.12 is the most recent limit from Planck), as the entire vaunted edifice of structure formation doesn’t work with too much hot dark matter”

A neutrino mass of 1 eV or higher would rule out LCDM; does this in effect then make MOND the best explanation?

There’s still some dark matter unaccounted for by neutrinos, though; how would that be explained?

Could a neutrino mass of 1 eV or higher also explain the third peak in the CMB?


You leave so many comments sometimes that WordPress cuts out the option for further replies.

1. In effect, yes. If the neutrino mass excludes LCDM, then that excludes all currently conceived variations of cold dark matter. I’m sure there would be some wailing and gnashing of theories trying to wriggle out of it.

2. Trivially.

3. No. A 1 eV neutrino does not explain the 3rd peak of the CMB. That would still require new physics, in the form of some driving term in the oscillations – presumably from something like a scalar field rather than a massive particle.


This is all very interesting to me. Any idea when results for the neutrino mass will be announced, based on the timetables of the current experiments in progress?

By scalar field, could it be the Higgs or a hypothetical inflaton field?


I do not have any insider information as to when a neutrino mass will be measured. Hopefully “soon.” KATRIN should be able to detect a neutrino mass in excess of 0.3 eV after a few years of data collection. Where they are in this process, and whether they will achieve this target sensitivity, I don’t know.

For a discussion of the oscillations & scalar fields, see http://astroweb.case.edu/ssm/mond/CMB6.html

This discussion is 12 years old. It is now clear that TeVeS does not get the third peak right. But the principle that a new theory could contain a [scalar] field that might affect the CMB oscillations in the same way as CDM remains a logical possibility.


I’m a fan of the MOND approach to solving the acceleration discrepancy on astrophysical scales, as it seems intuitively simpler than positing Dark Matter in various quantities and distributions for each galaxy and larger structures. But a recent paper (https://arxiv.org/abs/1808.06634v1) seems to provide definitive evidence for Dark Matter in Dwarf Galaxies by utilizing the mechanism of kinematic heating of the Dark Matter at the cores of these structures by “bursty star formation”. Admittedly, I haven’t read the paper, but an issue that immediately came to mind is the vanishingly small interaction between normal and Dark Matter, so it made me wonder how ‘heat’ could be transferred between them. Perhaps reading the paper would clear that objection up. So, after some shopping this afternoon I’ll dig into the paper.


By “heat” here they mean the speed of dark matter particles. So not heating in the thermal sense, but heating in the sense of increasing speeds through gravitational interaction.

This article is why I finally got around to talking about dwarfs in https://tritonstation.wordpress.com/2018/09/04/dwarf-satellite-galaxies-and-low-surface-brightness-galaxies-in-the-field-i/ though I did not get so far in the telling. Long story short, they managed to pick out one of the few dwarfs that is genuinely problematic for MOND. It’s problematic for DM too – hence the need for heating, which may or may not work as needed.

To cut the story short, the point is that MOND has been fantastically successful in predicting the properties of these dwarfs. Draco and Ursa Minor are exceptions among what are now dozens of examples. Portraying these systems as problematic for MOND is akin to claiming the CMB is problematic for LCDM because of the so-called axis-of-evil. It might be true, but one has to overlook enormous successes to highlight the problems.


Stacy, beautiful plot about acceleration data. Is this published/on arxiv? As I would like to use it in talks, etc.


Thanks. This is an update to Fig. 11 of https://arxiv.org/abs/1112.3960. Hadn’t considered then how *high* acceleration could get. Otherwise it is unpublished. You are welcome to use it and cite the above article.


Indeed – we see the acceleration scale empirically, but we have no clear idea why it happens physically. One could babble about this at length, and a fair number of people have done so in the literature. Perhaps there is a connection to the microphysics of the vacuum, or it has to do with Mach’s principle, but we remain far from understanding why it happens.

As far as showing these results to other scientists – I have done that on many occasions. They immediately recognize that something important is going on. The physical intuition is obvious when not encumbered by preconceptions about dark matter/MOND/cosmology.


Mike McCulloch ( http://physicsfromtheedge.blogspot.com/2018/09/wide-binaries-20.html ) has begun looking at wide binaries as tests of MOND. There is some previous work by Hernandez et al (2014) https://arxiv.org/abs/1401.7063 but the measurements of stellar positions by Gaia should provide a much better test than the Hipparcos and SDSS data that Hernandez et al used.
