Like the Milky Way, our nearest giant neighbor, Andromeda (aka M31), has several dozen dwarf satellite galaxies. A few of these were known and had measured velocity dispersions at the time of my work with Joe Wolf, as discussed previously. Also like the Milky Way, the number of known objects has grown rapidly in recent years – thanks in this case largely to the PAndAS survey.
PAndAS imaged the area around M31 and M33, finding many individual red giant stars. These trace out the debris from interactions and mergers as small dwarfs are disrupted and consumed by their giant host. The survey also revealed previously unknown dwarf satellites.
As the PAndAS survey started reporting the discovery of new dwarf satellites around Andromeda, it occurred to me that this provided the opportunity to make genuine a priori predictions. These are the gold standard of the scientific method. We could use the observed luminosity and size of the newly discovered dwarfs to predict their velocity dispersions.
I tried to do this for both ΛCDM and MOND. I will not discuss the ΛCDM case much, because it can’t really be done. But it is worth understanding why this is.
In ΛCDM, the velocity dispersion is determined by the dark matter halo. This has only a tenuous connection to the observed stars, so just knowing how big and bright a dwarf is doesn’t provide much predictive power about the halo. This can be seen from this figure by Tollerud et al (2011):
This graph is obtained by relating the number density of galaxies (an observed quantity) to that of the dark matter halos in which they reside (a theoretical construct). The relation is highly non-linear, deviating strongly from the one-to-one line we expected early on. There is no reason to expect this particular relation; it is imposed on us by the fact that the observed luminosity function of galaxies is rather flat while the predicted halo mass function is steep. Nowadays, this mismatch is usually called the missing satellite problem, but that is something of a misnomer: the discrepancy is not peculiar to satellites; it pervades the entire galaxy population.
Addressing the missing satellites problem would be another long post, so let's just accept that the relation between mass and light has to follow something like that illustrated above. If a dwarf galaxy has a luminosity of a million suns, one can read off the graph that it should live in a dark halo with a mass of about 10^10 M☉. One could use this to predict the velocity dispersion, but not very precisely, because there's a big range corresponding to that luminosity (the bands in the figure). It could be as much as 10^11 M☉ or as little as 10^9 M☉. This corresponds to a wide range of velocity dispersions. This wide range is unavoidable because of the difference between the luminosity function and the halo mass function: small variations in one lead to big variations in the other, and some intrinsic scatter in dark halo properties is inevitable on top of that.
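To get a feel for how wide that range is, note that at fixed overdensity the characteristic (virial) velocity of a halo scales roughly as the cube root of its mass. A minimal sketch, using that rough scaling and the illustrative halo masses read off the figure (not a rigorous ΛCDM calculation):

```python
# Rough illustration: virial velocity scales as V ~ M^(1/3) at fixed
# overdensity, so the ~two-decade range of halo masses consistent with
# L ~ 1e6 Lsun maps onto a wide spread of predicted dispersions.
M_lo, M_hi = 1e9, 1e11  # halo masses in Msun, read off the figure

def v_ratio(m1, m2):
    """Ratio of virial velocities for halos of mass m1 and m2."""
    return (m1 / m2) ** (1.0 / 3.0)

print(f"V(hi)/V(lo) = {v_ratio(M_hi, M_lo):.1f}")  # ~4.6x spread
```

A factor of ~4.6 in predicted velocity at fixed luminosity is the difference between, say, 5 and 23 km/s, which is why the ΛCDM "prediction" is so vague.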
Consequently, we only have a vague range of expected velocity dispersions in ΛCDM. In practice, we never make this prediction. Instead, we compare the observed velocity dispersion to the luminosity and say “gee, this galaxy has a lot of dark matter” or “hey, this one doesn’t have much dark matter.” There’s no rigorously testable prior.
In MOND, what you see is what you get. The velocity dispersion has to follow from the observed stellar mass. This is straightforward for isolated galaxies: M* ∝ σ⁴ – this is essentially the equivalent of the Tully-Fisher relation for pressure-supported systems. If we can estimate the stellar mass from the observed luminosity, the predicted velocity dispersion follows.
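A minimal sketch of this prediction, using the deep-MOND relation σ⁴ = (4/81) G M* a0 for an isolated, isotropic system; the input luminosity and mass-to-light ratio below are illustrative assumptions, not the paper's exact values:

```python
# Isolated deep-MOND prediction: sigma^4 = (4/81) * G * M* * a0,
# the pressure-supported analog of Tully-Fisher.
G    = 6.674e-11   # m^3 kg^-1 s^-2
A0   = 1.2e-10     # m s^-2, MOND acceleration scale
MSUN = 1.989e30    # kg

def sigma_isolated(mstar_msun):
    """Predicted line-of-sight velocity dispersion (km/s) for an
    isolated system of stellar mass mstar_msun (solar masses)."""
    sigma4 = (4.0 / 81.0) * G * mstar_msun * MSUN * A0
    return sigma4 ** 0.25 / 1e3

# e.g. a dwarf with L_V ~ 2e5 Lsun and M*/L = 2 (assumed values):
print(f"{sigma_isolated(2 * 2e5):.1f} km/s")  # a few km/s, the And XXVIII regime
```

Note the weak dependence: because σ goes as the fourth root of the mass, even the factor-of-two uncertainty in the mass-to-light ratio moves the prediction by only ~19%.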
Many dwarf satellites are not isolated in the MONDian sense: they are subject to the external field effect (EFE) from their giant hosts. The over-under for whether the EFE applies is the point at which the internal acceleration from all the stars of the dwarf on each other equals the external acceleration from orbiting the giant host. The amplitude of the discrepancy in MOND depends on how low the total acceleration is relative to the critical scale a0. The external field in effect adds some acceleration that wouldn't otherwise be there, making the discrepancy less than it would be for an isolated object. This means that two otherwise identical dwarfs may be predicted to have different velocity dispersions depending on whether or not they are subject to the EFE. This is a unique prediction of MOND that has no analog in ΛCDM.
It is straightforward to derive the equation to predict velocity dispersions in the extreme limits of isolated (aex ≪ ain < a0) or EFE dominated (ain ≪ aex < a0) objects. In reality, there are many objects for which ain ≈ aex, and no simple formula applies. In practice, we apply the formula that more nearly applies, and pray that this approximation is good enough.
There are many other assumptions and approximations that must be made in any theory: that an object is spherical, isotropic, and in dynamical equilibrium. All of these must fail at some level, but it is the last one that is the most serious concern. In the case of the EFE, one must also make the approximation that the object is in equilibrium at the current level of the external field. That is never true, as both the amplitude and the vector of the external field vary as a dwarf orbits its host. But it might be an adequate approximation if this variation is slow. In the case of a circular orbit, only the vector varies. In general the orbits are not known, so we make the instantaneous approximation and once again pray that it is good enough. There is a fairly narrow window between where the EFE becomes important and where we slip into the regime of tidal disruption, but let's plow ahead and see how far we can get, bearing in mind that the EFE is a dynamical variable of which we only have a snapshot.
To predict the velocity dispersion in the isolated case, all we need to know is the luminosity and a stellar mass-to-light ratio. Assuming the dwarfs of Andromeda to be old stellar populations, I adopted a V-band mass-to-light ratio of 2 give or take a factor of 2. That usually dominates the uncertainty, though the error in the distance can sometimes impact the luminosity at a level that impacts the prediction.
To predict the velocity dispersion in the EFE case, we again need the stellar mass, but now also need to know the size of the stellar system and the intensity of the external field to which it is subject. The latter depends on the mass of the host galaxy and the distance from it to the dwarf. This latter quantity is somewhat fraught: it is straightforward to measure the projected distance on the sky, but we need the 3D distance – how far in front or behind each dwarf is as well as its projected distance from the host. This is often a considerable contributor to the error budget. Indeed, some dwarfs may be inferred to be in the EFE regime for the low end of the range of adopted stellar mass-to-light ratio, and the isolated regime for the high end.
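The bookkeeping described above can be sketched as follows. This is a hedged toy version, not the paper's exact procedure: the regime test compares a_in ~ G M/r² with a_ex ~ V_host²/D, and in the EFE-dominated limit the system is quasi-Newtonian with an effective gravitational constant boosted by roughly a0/a_ex. The order-unity structural factors and all numerical inputs are illustrative assumptions.

```python
# Toy EFE bookkeeping: decide the regime, then apply a crude
# quasi-Newtonian dispersion estimator with G_eff ~ G * (a0 / a_ex).
import math

G, A0, MSUN = 6.674e-11, 1.2e-10, 1.989e30
KPC = 3.086e19  # m

def accelerations(mstar_msun, r_kpc, v_host_kms, d_kpc):
    """Internal and external accelerations (m/s^2)."""
    a_in = G * mstar_msun * MSUN / (r_kpc * KPC) ** 2
    a_ex = (v_host_kms * 1e3) ** 2 / (d_kpc * KPC)
    return a_in, a_ex

def sigma_efe(mstar_msun, r_kpc, a_ex):
    """Crude EFE-regime dispersion (km/s); the 1/3 structural
    factor is an assumption, not the paper's exact coefficient."""
    g_eff = G * A0 / a_ex
    return math.sqrt(g_eff * mstar_msun * MSUN / (3 * r_kpc * KPC)) / 1e3

# Illustrative numbers only: a 1e5 Msun dwarf of half-light radius
# 0.3 kpc at a 3D distance of 100 kpc from a host with V ~ 230 km/s.
a_in, a_ex = accelerations(1e5, 0.3, 230.0, 100.0)
regime = "EFE" if a_ex > a_in else "isolated"
print(regime, f"{sigma_efe(1e5, 0.3, a_ex):.1f} km/s")
```

The sensitivity to the 3D distance D enters through a_ex, which is why the unknown line-of-sight component of the distance is such a large contributor to the error budget.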
In this fashion, we predicted velocity dispersions for the dwarfs of Andromeda. We in this case were Milgrom and myself. I had never collaborated with him before, and prefer to remain independent. But I also wanted to be sure I got the details described above right. Though it wasn't much work to make the predictions once the preliminaries were established, it was time consuming to collect and vet the data. As we were writing the paper, velocity dispersion measurements started to appear. People like Michelle Collins, Erik Tollerud, and Nicolas Martin were making follow-up observations, and publishing velocity dispersions for the objects we were making predictions for. That was great, but they were too good – they were observing and publishing faster than we could write!
Nevertheless, we managed to make and publish a priori predictions for 10 dwarfs before any observational measurements were published. We also made blind predictions for the other known dwarfs of Andromeda, and checked the predicted velocity dispersions against all measurements that we could find in the literature. Many of these predictions were quickly tested by on-going programs (i.e., people were out to measure velocity dispersions, whether we predicted them or not). Enough data rolled in that we were soon able to write a follow-up paper testing our predictions.
Nailed it. Good data were soon available to test the predictions for 8 of the 10* a priori cases. All 8 were consistent with our predictions. I was particularly struck by the case of And XXVIII, which I had called out as perhaps the best test. It was isolated, so the messiness of the EFE didn’t apply, and the uncertainties were low. Moreover, the predicted velocity dispersion was low – a good deal lower than broadly expected in ΛCDM: 4.3 km/s, with an uncertainty just under 1 km/s. Two independent observations were subsequently reported. One found 4.9 ± 1.6 km/s, the other 6.6 ± 2.1 km/s, both in good agreement within the uncertainties.
We made further predictions in the second paper as people had continued to discover new dwarfs. These also came true. Here is a summary plot for all of the dwarfs of Andromeda:
MOND works well for And I, And II, And III, And VI, And VII, And IX, And X, And XI, And XII, And XIII, And XIV, And XV, And XVI, And XVII, And XVIII, And XIX, And XX, And XXI, And XXII, And XXIII, And XXIV, And XXV, And XXVIII, And XXIX, And XXXI, And XXXII, and And XXXIII. There is one problematic case: And V. I don’t know what is going on there, but note that systematic errors frequently happen in astronomy. It’d be strange if there weren’t at least one goofy case.
Nevertheless, the failure of And V could be construed as a falsification of MOND. It ought to work in every single case. But recall the discussion of assumptions and uncertainties above. Is falsification really the story these data tell?
We do have experience with various systematic errors. For example, we predicted that the isolated dwarf spheroidal Cetus should have a velocity dispersion in MOND of 8.2 km/s. There was already a published measurement of 17 ± 2 km/s, so we reported that MOND was wrong in this case by over 3σ. Or at least we started to do so. Right before we submitted that paper, a new measurement appeared: 8.3 ± 1 km/s. This is an example of how the data can sometimes change by rather more than the formal error bars suggest is possible. In this case, I suspect the original observations lacked the spectral resolution to resolve the velocity dispersion. At any rate, the new measurement (8.3 km/s) was somewhat more consistent with our prediction (8.2 km/s).
The same predictions cannot even be made in ΛCDM. The velocity data can always be fit once they are in hand. But there is no agreed method to predict the velocity dispersion of a dwarf from its observed luminosity. As discussed above, this should not even be possible: there is too much scatter in the halo mass-stellar mass relation at these low masses.
An unsung predictive success of MOND absent from the graph above is And IV. When And IV was discovered in the general direction of Andromeda, it was assumed to be a new dwarf satellite – hence the name. Milgrom looked at the velocities reported for this object, and said it had to be a background galaxy. No way it could be a dwarf satellite – at least not in MOND. I see no reason why it couldn’t have been in ΛCDM. It is absent from the graph above, because it was subsequently confirmed to be much farther away (7.2 Mpc vs. 750 kpc for Andromeda).
The box for And XXVII is empty because this system is manifestly out of equilibrium. It is more of a stellar stream than a dwarf, appearing as a smear in the PAndAS image rather than as a self-contained dwarf. I do not recall what the story with the other missing object (And VIII) is.
While writing the follow-up paper, I also noticed that there were a number of Andromeda dwarfs that were photometrically indistinguishable: basically the same in terms of size and stellar mass. But some were isolated while others were subject to the EFE. MOND predicts that the EFE cases should have lower velocity dispersion than the isolated equivalents.
And XXVIII (isolated) has a higher velocity dispersion than its near-twin And XVII (EFE). The same effect might be acting in And XVIII (isolated) and And XXV (EFE). This is clear if we accept the higher velocity dispersion measurement for And XVIII, but an independent measurement begs to differ. The former has more stars, so is probably more reliable, but we should be cautious. The effect is not clear in And XVI (isolated) and And XXI (EFE), but the difference in the prediction is small and the uncertainties are large.
An aggressive person might argue that these matched pairs of dwarfs constitute a positive detection of the EFE. I don't think the data for the matched pairs warrant that, at least not yet. On the other hand, the appropriate use of the EFE was essential to all the predictions, not just the matched pairs.
A positive detection of the EFE would be important, as it is a unique prediction of MOND. I see no way to tune ΛCDM galaxy simulations to mimic this effect. Of course, there was a very recent time when it seemed impossible for them to mimic the isolated predictions of MOND. They claim to have come a long way in that regard.
But that’s what we’re stuck with: tuning ΛCDM to make it look like MOND. This is why a priori predictions are important. There is ample flexibility to explain just about anything with dark matter. What we can’t seem to do is predict the same things that MOND successfully predicts… predictions that are both quantitative and very specific. We’re not arguing that dwarfs in general live in ~15 or 30 km/s halos, as we must in ΛCDM. In MOND we can say this dwarf will have this velocity dispersion and that dwarf will have that velocity dispersion. We can distinguish between 4.9 and 7.3 km/s. And we can do it over and over and over. I see no way to do the equivalent in ΛCDM, just as I see no way to explain the acoustic power spectrum of the CMB in MOND.
This is not to say there are no problematic cases for MOND. Read, Walker, & Steger have recently highlighted the matched pair of Draco and Carina as an issue. And they are – though here I already have reason to suspect Draco is out of equilibrium, which makes it challenging to analyze. Whether it is actually out of equilibrium or not is a separate question.
I am not thrilled that we are obliged to invoke non-equilibrium effects in both theories. But there is a difference. Brada & Milgrom provided a quantitative criterion to indicate when this was an issue before I ran into the problem. In ΛCDM, the low velocity dispersions of objects like And XIX, XXI, XXV and Crater 2 came as a complete surprise despite having been predicted by MOND. Tidal disruption was only invoked after the fact – and in an ad hoc fashion. There is no way to know in advance which dwarfs are affected, as there is no criterion equivalent to that of Brada. We just say “gee, that’s a low velocity dispersion. Must have been disrupted.” That might be true, but it gives no explanation for why MOND predicted it in the first place – which is to say, it isn’t really an explanation at all.
What I still do not understand is why MOND gets any predictions right if ΛCDM is the universe we live in, let alone so many. Shouldn't happen. Makes no sense.
If this doesn’t confuse you, you are not thinking clearly.
*The other two dwarfs were also measured, but with only 4 stars in one and 6 in the other. These are too few for a meaningful velocity dispersion measurement.
41 thoughts on “Dwarf Satellite Galaxies. III. The dwarfs of Andromeda”
In keeping with the duality paradigm, MOND would be the prediction of a curved finite space and infinite age model; while LCDM would be the prediction in a flat infinite space and finite age model. This leads me to infer that the CDM is not of a particulate form, but of a wave form. And you are right, it doesn’t make sense.
one question i have is the issue of satellites on planes, how robust is the empirical evidence that dwarf satellites exist on a plane or is it just random
are andromeda and milky way the only galaxies mention, what about other large galaxies with dwarf galaxy satellites? is there a situation where in another galaxy, a dwarf galaxy could only be explained in terms of mond and not dark matter or would falsify mond?
A good read. Congratulations on more good results. Can you define aex, ain, and a0 for me? I think a0 is the acceleration where MOND starts to appear, aex is the EFE acceleration, and ain is what you measure for a particular dwarf. Am I in the ballpark?
Right – a_ex is the external acceleration; a_in the internal acceleration of each dwarf. a0 is the critical acceleration scale of MOND.
As for planes of satellites – the evidence is robust. They certainly are not random.
I’m not sure what is meant by another galaxy. You mean dwarf satellites around another galaxy? Here we’ve discussed dwarfs around the Milky Way and around Andromeda. Those are different host galaxies. There are several dozen dwarfs around each. Each dwarf satellite is an independent test. Outside the Local Group, we have less detailed information. NGC 1052-DF2 (see previous post) is an example of such a dwarf that was initially thought to contradict MOND but actually appears to corroborate it.
As for a case that can’t be explained with dark matter – that depends on your standard. If you set a high standard, that any dwarf correctly predicted by MOND is a challenge to dark matter, then there are lots and lots and lots of those. If you set the bar low, then you can never say that any case is a problem for dark matter. Some dwarfs happen to have more and some less dark matter, that’s all. That has been my concern for the dark matter paradigm for over two decades now: it can never be formally excluded.
Here is a thought experiment for testing the nature of dark matter. If we assume there is a duality at the cosmological horizon, then all the mass in this expanding universe is not expected to simply escape the horizon while every observer is eventually left to stare into complete darkness.
A different outcome should be expected. Could coming up with the correct theoretical framework for this alternative expectation yield the right test for dark matter?
For example, we know that the vacuum energy cannot be set equal to “dark energy”, but how does it compare to the mass-energy that we expect is crossing the horizon in the expanding universe view? IF a theoretical connection is considered valid, and properly characterized, then there may be two distinct expectations depending on the state of DM.
Do you know if anyone has calculated the expected mass-energy crossing the horizon?
Don’t know such stuff. I share your discomfort with the notion that the end-state of LCDM is an empty universe with everything having expanded away out of view. But there is nothing special about the cosmic horizon. It is just how far we can see, as with our local horizon on Earth. In the standard cosmology, the mass-energy contained within the current horizon is a straightforward calculation. If you want to make the horizon a privileged place, which is what it sounds like, then it depends on whatever theory you base this on.
I understand your point about the need to more clearly identify the theory before trying to fit it to observations. You are right, and it is a tall order.
Even in communicating the current concordance model there are likely misconceptions. For example, people may have very different views of what “nothing special” really means. At best it likely mixes the assumptions of the cosmological principle that the laws of physics should be the same for any observer, and that the universe is homogeneous on large scales.
Now the horizon on Earth is clearly a discontinuity that moves with the observer. But I’m not sure what the condition of homogeneity should look like in this analogy. Should a sunrise occur equally everywhere? Would the view of an island suddenly destroy my assumptions of homogeneity?
That the horizon is a privileged place is really one of the central points in the paradigm that I am trying to describe.
We can get there as follows:
1. Embed the static-dynamic duality of the universe at the cosmological horizon.
2. Require a complementary description of the dynamic cosmological horizon and redshift to the event horizon and gravitational redshift of a static universe stabilized by a central black hole.
3. Recognize that the horizon is privileged in time in the dynamic description, and is privileged in space in the static description.
Do you think that is a logical possibility?
Can you give us an idea of the origins of EFE in MOND? Do you know if this effect was introduced in order to fit rotation curves? Or was it first noticed as a requirement within the logical framework of MOND — i.e. a prediction?
The origin of the MOND EFE is an excellent question. It was present in the very first paper (Milgrom 1983a), and is a logical necessity of the theory. It was not introduced to fit rotation curves, and I’ve never encountered a case where it was necessary to do so – the rotating galaxies observed so far all seem to be isolated by this standard. No rotation curve will stay flat forever, because the rest of the universe will eventually dominate, and the EFE will impose a de facto edge. But that is way far out, maybe 300 kpc in the case of the Milky Way, where Andromeda takes over. The EFE usually comes up only in the context of incredibly tiny systems exposed to the fields of much bigger ones.
I understand that dark matter struggles with satellite planes, but how does MOND explain it? what causes satellites to align and rotate in planes in MOND?
What I meant is that of the 200 billion galaxies in the known universe, if you could observe dwarf galaxies of larger galaxies like Milky way and Andromeda in all of them, what would be predicted about those dwarf galaxies in other galaxies if CDM is true and if MOND is true, as opposed to just Milky way and andromeda.
does superfluid dark matter also have a EFE since its supposed to reproduce MOND via superfluid physics.
@ Daniel Kim
I believe the explanation for the satellite planes for MW / And is that MOND forms large scale structures much quicker than LCDM, and in MOND And and MW already had past interactions. The satellite galaxies are the leftovers of these past interactions.
Check this: https://darkmattercrisis.wordpress.com/2016/07/09/dynamics-of-local-group-galaxies-evidence-for-a-past-milky-way-andromeda-flyby/
“The box for And XVII is empty” I think that you mean “The box for And XXVII is empty”.
Your evidence for MOND is less than 1.5 sigma. It’s harder to get 3 heads in a row in coin tossing. Didn’t Dokkum already falsify MOND?
For a priori predictions and considering the total large error sources (like light to mass ratio, the distances towards the satellites, the distances between the satellites and the host, the actual rotation curves for the galaxies themselves), I’d say 1.5 sigma looks quite impressive. Can you match these predictions with the various flavours of dark matter models without actually fitting the DM halos to match the observations?
As for Dokkum falsifying MOND, I’d recommend you to read dr. McGaugh’s response (you can find the actual referenced links in the post)
I’d point out some snippets:
“The bigger problem I see is that one cannot simply remove the dark matter halo […]. The stars must respond to the change in the gravitational potential; they too must diffuse away. […], but the observed motions are then not representative of an equilibrium situation. This is critical to the mass estimate, which must perforce assume an equilibrium in which the gravitational potential well of the galaxy is balanced against the kinetic motion of its contents. ”
“Then there are the data themselves. Blaming the data should be avoided, but it does happen once in a while that some observation is misleading. In this case […] the velocity dispersion is estimated from only ten tracers. I’ve seen plenty of cases where the velocity dispersion changes in important ways when more data are obtained […]. Indeed, several people have pointed out that if we did the same exercise with Fornax, using its globular clusters as the velocity tracers, we’d get a similar answer to what we find in DF2. But we also have measurements of many hundreds of stars in Fornax, so we know that answer is wrong […]The fact that DF2 is an outlier from everything else we know empirically suggests caution.”
“Van Dokkum et al. point out the the velocity dispersion predicted for this object by MOND is 20 km/s, more than a factor of two above their measured value. They make the MOND prediction for the case of an isolated object. DF2 is not isolated, so one must consider the external field effect (EFE).”
“For DF2, the absolute magnitude of the acceleration is approximately doubled by the presence of the external field […] so the mass discrepancy is smaller, decreasing the MOND-predicted velocity dispersion by roughly the square root of 2. For a factor of 2 range in the stellar mass-to-light ratio (as in McGaugh & Milgrom), this crude MOND prediction becomes σ = 14 ± 4 km/s.”
“And it’s not the only way to analyze the data. Indeed, they tried three different ways which gives the results 4.7 km/s, 8.4 km/s and 14.3 km/s. All I learn from this is that it’s not enough data to make reliable statistical estimates from. But then I’m a theorist.
Michelle Collins, however, is an actual astrophysicist. She is also skeptical. She even went and applied two other methods to analyze the data and arrived at mean values of 12 +/-3 km/s or 11.5+/-4 km/s, which is well compatible with Stacy’s MOND estimate.”
Survey the physicists from NAS, Royal Society, APS, etc. Ask them if they will falsify the law of gravity on the basis of observational data with confidence level of less than 1.5 sigma. They will laugh at you. Dark matter is not a new law of physics. It doesn’t make predictions. It fits observations. Astronomers don’t predict the properties of exoplanets. They fit the exoplanets with observations.
The blog post is nice. Dokkum et al published their paper in Nature. It was peer-reviewed and accepted by the editors of Nature. Stacy’s blog isn’t peer-reviewed. Bloggers can peer review it. I don’t think it will pass 🙂
Well yes, blog posts are not peer reviewed. But if you followed the links and clicked the embedded references like I suggested, you would have seen references to papers submitted to peer review.
or this (which, since it is in press, we can safely say passed peer review)
And if your curiosity is still open, you can also check this response paper, published in recent weeks, also in Nature:
Regarding the survey you suggest – I get the feeling that you don’t fully understand the magnitude of the errors in the data. If your data have large error bars, don’t expect the predictions to be perfect. Measuring distances on cosmic scales is not like tossing coins on your table.
Please have a look here https://rdcu.be/6uZn
You say “Dark matter is not a new law of physics. It doesn’t make predictions”
Indeed – DM doesn’t make predictions. But the laws of physics must make predictions.
Currently, GR / Newtonian dynamics fail to make predictions without invoking DM. Or put it otherwise, GR / ND make wrong predictions just by themselves.
MOND, on the other hand, makes predictions (at least at galactic scales) without needing fine tuning. Yes, a_0 is a fit to observations, but once you get a fit for a_0, you stick with it for future observations.
Your analogy with exoplanets doesn’t hold here – you’re comparing individuals (like individual stars, individual planets) with statistical populations that are using average estimators (like mass to light ratios).
Equally – you cannot predict the speed of an individual molecule, or how it will react when you compress the gas, but you can predict the average speed of all molecules and how, on average, they will respond to compression.
@dr. McGaugh I assume my previous comment is waiting moderation because of the number of links in it – two of them are to arxiv papers and the last one is to the same paper that Jean Fichot linked (but as a preview in nature)
If you care to look through Stacy’s older posts you will find many links to his (and others’) peer-reviewed papers in the literature; you should not be basing your judgement on this blog alone. There is a good deal of tension in cosmology at the moment between the value of H0 based on CMB measurements (around 67 km/s/Mpc) and that based on SN1a measurements (around 73 km/s/Mpc) and these are already > 3σ apart. Unless that tension can be resolved by finding a systematic error in SN1a distance measurements, looking increasingly unlikely since the Gaia measurements of MW Cepheids confirmed consistency of distances at the 0.3% level, this poses a real issue for ΛCDM as putting 73 km/s/Mpc into that model noticeably worsens the fit to the CMB.
Now, I don’t know if Stacy is right about MOND or if dark matter and dark energy do or don’t exist; what I do know is that there are some pretty strange results that cannot be explained away by poor experimental technique (as in the cases of superluminal neutrinos and BICEP CMB polarization). As a professional physicist, and onetime astronomer, I prefer to keep an open mind until there is more evidence; rushing to judgement is not a good characteristic for a scientist.
Thanks everyone for your comments. It has been my growing impression that it is becoming increasingly impossible to have a sane discussion about this issue. The comments of @astro illustrate the situation.
I have repeatedly made (and published in the refereed literature) a priori predictions for both LCDM (when possible) and MOND. Repeatedly, it has been MOND that is more successful, and not just for galaxy dynamics. Predictive power is at the very basis of the scientific method – it is what is supposed to keep us objective. To assert that predictive successes are meaningless is to abandon the scientific method.
I do not understand how the 1.5 sigma number is obtained. This is just asserted. There are 33 separate dwarfs. It isn’t just that one prediction comes within 1.5 sigma of the data; 32 of 33 do so. That’s a lot more than a 1.5 sigma result.
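To see how much more, here is a back-of-the-envelope check under a deliberately simple toy null: suppose each a priori prediction had only a 50-50 chance of agreeing with the later measurement (that null, and treating the dwarfs as independent, are assumptions for illustration, not a rigorous likelihood analysis). The chance of 32 or more agreements out of 33 is then vanishingly small:

```python
# Binomial tail: probability of >= 32 successes in 33 trials at p = 0.5.
from math import comb

n, k, p = 33, 32, 0.5
p_tail = sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))
print(f"P(>= {k} of {n}) = {p_tail:.1e}")  # ~4e-9, far beyond a 1.5 sigma result
```

However one chooses to quantify it, a run of 32 successes out of 33 is not the sort of thing that happens by accident.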
NGC 1052-DF2 is a great example. Initially, it did indeed look very bad for MOND. When you do it right, the opposite is true. And yes, this is published in the refereed literature (https://arxiv.org/abs/1804.04167).
Ideally, I’d like to compare the MOND prediction to that of LCDM to see which fares better. This is not possible: LCDM does not make a specific prediction for the velocity dispersion of each dwarf. As best I can tell, it is impossible for it to do so. Are we seriously to believe that it is better for a scientific theory to make no prediction at all than to make a priori predictions that are successful?
It is not correct to assert that dark matter does not require new physics. In the early days of the missing mass problem, we thought it would suffice to have some normal matter that happened to be non-luminous. This certainly seems a more conservative approach than modifying the laws of gravity. That can’t work anymore: dark matter as we now conceive it requires new physics – presumably some new particle from beyond the Standard Model. That is absolutely new physics, even if we have become accustomed to thinking of it as normal. It ain’t.
I despair for the future of science. I understand the reluctance of many to contemplate a theory like MOND. I started from the same perspective. But its predictions came true in my data, which falsified my own (dark matter based) predictions. So I have tested it over and over. It has been successful more often than not, and has often made successful a priori predictions in cases where dark matter doesn’t offer a prediction at all. Simply ignoring this – or worse, asserting that it didn’t happen (apparently without checking the literature) – is to abandon the scientific method itself.
Please don’t “despair for the future of science”. This sort of thing has happened many times in the past in many fields of science; one big example that comes to mind is how long it took for the ‘microbe’ theory of disease to overcome the ‘miasma’ theory. It took a ridiculously long time. MoND is about 35 years old. To me that sounds about right for the typical length of time (from historical examples) for a new paradigm to START to take hold.
Thanks, Ron. You are certainly correct that this subject has followed the all-too-classic pattern of derision and denial before acceptance. But like Frodo and Sam in Mordor, we don’t know how this particular story ends – only how the Nazgul would like it to.
Fortunately the Nazgul don’t get the final say, evidence does.
McGaugh et al paper in arxiv confirms Dokkum et al result even after their adjustments. MOND = 13.4 km/s. Upper limit on velocity dispersion = 10.5 km/s. Upper limit of MOND = 18.2 km/s. RMS velocity dispersion = 14.3 km/s. Both papers agree the velocity dispersion deviates from MOND.
You don’t understand error and probability. Large errors in data mean you cannot use it to prove a theory. How can large error, in your mind, favor the theory? The data are not evidence for the theory. No evidence, no theory (null hypothesis). You don’t prove the null hypothesis. You disprove it with accurate data and strong evidence.
Don’t compare measurement and coin tossing. They are not comparable. You compare whether the probability of the observed deviation from a theory is due to random error or to non-random cause. The issue is not the size of error but its frequency. High probability (high frequency) indicates non-random cause (theory is wrong). Probability theory originated from Pascal’s analysis of games of chance. If I see 5 deviations out of 10 observations, I can say the probability is equal to getting heads in a coin toss.
Newtonian dynamics is consistent with DM and ordinary matter. It can explain galactic dynamics with one or both. DM could be unseen ordinary matter. Newtonian dynamics has not failed. Exoplanets and DM are both unseen, and their properties are inferred from indirect observations. Your analogy with gas does not hold. Gases obey the gas laws. If a fluid deviates from the gas laws, you can rule that it is not a gas. There are no unique laws for DM. Since Newtonian dynamics is observed, you cannot rule out DM. If you always see cold gases in your experiments, you cannot rule out hot gas, because it also obeys the gas laws.
I will comment only on the first paragraph, which is a complete misreading of Famaey, McGaugh, & Milgrom (which seems to be what is meant by “McGaugh et al paper”). I have written previously about the case of DF2 in this blog, so scroll back if you wish to understand why these assertions are incorrect. In particular, https://arxiv.org/abs/1804.04136 have shown that the upper limit of van Dokkum et al is not really an upper limit – indeed, is close to their best estimate for the actual dispersion. The result is that DF2 is consistent with MOND within the current observational uncertainties. To state that “Both papers agree the velocity dispersion deviates from MOND” is the opposite of what one paper says, and as a coauthor of one of those papers, I would know.
More generally, this reply does not obviously pass the Turing test. I’m happy to encourage discussion here and, when appropriate, respectful disagreement. Telling other people that they don’t understand something is not respectful, and is usually a sign that it is the speaker who doesn’t get it.
Let me rephrase a bit what you said to an equally valid assertion:
Large errors in data mean you cannot DISprove a theory.
That is, the claim of van Dokkum that DF2 rules out MOND is not supported by the data he provides.
To me, it seems that you discard MOND predictions out of hand, maybe because of a bias you have because GR / ND + DM cannot make similar predictions. I, on the other hand, if I had to choose in my job between two models, one providing ballpark estimates and the other with estimates all over the chart, would always choose the model that gives me the better values.
And I’m sure that all “physicists from NAS, Royal Society, APS”, if subjected to a blind test (i.e. they get only data, without hints about what they are about), would choose the results of MOND as indicative of a possible underlying law.
You complain about the magnitude of the errors in the estimates. Fine – improve the data! Get better distance / mass / rotation speed estimates and we will be able to discuss further. With what is available today, this is the situation.
When you say “If I see 5 deviations out of 10”, can you please quantify what you mean by deviations? Is it 1 sigma? Two sigma? 0.37 or 0.01 sigma? What exactly is a deviation? I mean, if I measure the speed of sound and get 340 m/s +/- 3 m/s (one sigma) – does this count as a deviation?
In coin tossing you cannot have one heads +/- 0.3 heads and one tails +/- 0.25 tails. This is why, in my opinion, you cannot compare the predictions (with their associated error bars) made by MOND with coin tosses. And by the way – you brought coin tossing into the discussion.
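For Gaussian errors, the answer depends entirely on the threshold chosen. A minimal sketch (my own illustration, not anyone’s published analysis):

```python
import math
from math import comb

# Expected fraction of honest Gaussian measurements deviating beyond n sigma
for n in (1, 2, 3):
    print(f"beyond {n} sigma: {math.erfc(n / math.sqrt(2)):.4f}")

# If "deviation" means > 1 sigma, the chance of seeing 5 or more such
# deviations in 10 honest measurements is not small at all
p = math.erfc(1 / math.sqrt(2))  # about 0.317
tail = sum(comb(10, k) * p**k * (1 - p)**(10 - k) for k in range(5, 11))
print(f"P(>= 5 of 10 beyond 1 sigma) = {tail:.2f}")
```

Roughly 32% of honest measurements miss by more than 1 sigma, so “5 deviations out of 10” occurs by pure chance nearly a fifth of the time. Counting “deviations” without specifying the threshold says nothing about whether a theory is wrong.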
As for the exoplanets / galactic scales vs molecules / gases. It wasn’t about GR /ND vs gas laws. It was about individuals and populations in the statistical sense.
MOND doesn’t measure individual stars to get the rotation curve. MOND measures the collective light (that is, from a statistical population) and uses average estimators to get the mass of the system. With that, it is capable of predicting, on average, how the stars in the galaxy are moving. GR / ND + DM cannot do that, and this is the main issue, as any theory should have predictive power.
If you fail to see the similarity with the gases (with, say the temperature as a proxy for the average speed of all the molecules), then I don’t think I can say anything else.
” The issue is not the size of error but its frequency. High probability (high frequency) indicates non-random cause (theory is wrong). ”
I’d say this is flat out misleading or even wrong.
Usually, high-frequency errors are indicative of large noise sources (i.e. very poor signal-to-noise ratio – which is the case here). Unbalanced runs of same-sign errors (like, say, 20% negative and 80% positive errors, in statistically significant samples – i.e. not 10-20 data points, but hundreds) are indeed indicative of non-random causes – but from this to saying that the theory is wrong is a long way.
Most of the time, these are caused by offsets / systematic / method errors. After you exclude these possible error sources and still get unbalanced runs, you start looking at your working model – maybe your experimental model is wrong; say, you assumed small-angle approximations as valid while the use case deviates significantly from them. Or maybe you neglected contributing terms (like the EFE in MOND). Only after you exclude these model problems as well do you start to assess whether the theory or (most probably) your understanding of the theory might be wrong.
And by checking the MOND predictions, I see both positive and negative errors (the sample is too small to judge whether the same-sign errors balance each other statistically).
So where’s the problem with 1.5 sigma?
For a little relief, I can recommend a very interesting documentary titled, “The Most Unknown”. It was aired last month on Netflix, and is a fascinating look at how we can communicate our thoughts about some of the greatest mysteries we face today.
A completely off topic question:
Do you think/expect the Gaia data set, when complete, will be useful for testing MoND vs. DM?
Or do you expect the increased detail to not make much difference overall in the debate?
Yes – more and better data are always a good thing. However, my chief expectation for Gaia is TMI: too much information. It will take a long time to sort out as we figure out what higher order effects we can no longer ignore in either theory. First thing that’ll likely happen is we ignore the things we have always ignored, and get nonsensical results. There will be a period of Confusion.
If you look at Hernandez et al (2014) https://arxiv.org/abs/1401.7063 they were using Hipparcos and SDSS data to look at wide binaries in the MOND regime. So straightaway, one should be able to take the Gaia data for the same binaries to get a better signal-to-noise ratio. Of course, Gaia will also allow the measurement of many more wide binaries (what Stacy calls TMI), but there are quite enough data points from Hipparcos and SDSS to make a convincing argument if the errors were reduced.
Thank you Laurence, I had not thought about wide binaries. That could be a VERY strong test of MoND, if, as you say, the errors are small enough.
I have a very general question that could be related in some way to MOND behavior.
It regards the equivalence principle and tidal effects. It is believed that the equivalence principle equating inertial to gravitational mass is only valid for very small regions of spacetime. The mechanism by which it breaks down is commonly associated with tidal effects. Tidal effects result from an appreciable gradient in the gravitational field.
However, in the MOND-regime, the gravitational field gradient is going to zero. This suggests to me that if the inequality of inertial and gravitational masses is still required, then some other mechanism must arise to account for this difference. One could speculate that an acceleration gradient must arise.
So my question is, is the equivalence principle returned in the MOND regime, and if not, what is the mechanism that accounts for the expected differences in the inertial and gravitational masses?
That is a deep question.
MOND can be interpreted as either modified inertia or modified gravity. Some attempts at the latter, like TeVeS, indeed have the force law change when the gradient in the potential becomes small. In the former, the equivalence between gravitational charge and inertial mass is broken. That they were ever the same is one thing that sets gravity aside from other forces. E.g., E&M has a charge that determines the electrostatic force between charged particles, but the resulting acceleration depends on the inertial mass of the particle. Charge and inertial mass are logically distinct entities for every other force; why for gravity does m(inertia) = m(gravitational charge)?
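Written out, the contrast is just the standard textbook relations (nothing MOND-specific here):

```latex
% Electromagnetism: the response depends on the charge-to-mass ratio
m_i\, a = q\, E \quad\Rightarrow\quad a = \frac{q}{m_i}\, E

% Gravity: the "charge" is the gravitational mass m_g
m_i\, a = m_g\, g \quad\Rightarrow\quad a = \frac{m_g}{m_i}\, g

% Only if m_g = m_i identically does every body fall with the same a = g
```

Two bodies with different q/m_i accelerate differently in the same field E; only the empirical coincidence m_g = m_i makes gravitational acceleration universal.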
That they are indeed equal is what I understood from my youth to be the equivalence principle, as you state it. This statement seems to have disappeared from modern definitions of the EP. Indeed, the EP comes in several flavors. The weak EP is the universality of free fall – gold falls the same as feathers. MOND certainly obeys the WEP. The strong EP adds Lorentz invariance and location invariance (the results of experiments don’t depend on where they’re done). MOND certainly violates the SEP – particularly the location invariance, for as we discussed wrt the external field effect, the same dwarf galaxy will behave differently if it is isolated or if it is affected by the gravity of a giant host. Intriguingly, there is an intermediate case: the Einstein EP. The EEP is only slightly weaker than the SEP, allowing an exception to location invariance for gravitational experiments. Gravity can affect itself.
So by these definitions, yes – the MOND regime preserves the EP up to the EEP but not up to the SEP. Having thought about these issues for many years, there are many appealing aspects of the modified inertia approach (no need to change gravity) – as well as some nasty side effects (non-locality; hysteresis). But you put your finger on it – what is the mechanism that causes inertial mass to differ from gravitational charge? I don’t know, and have only heard hand-waving ideas about Mach’s principle and the vacuum energy.
But there is a still deeper question: why is inertial mass the same as gravitational charge at all? What is the mechanism that determines inertial mass in general? Perhaps this is where we’re missing something deep conceptually.
The following is likely way too simplistic of a line of reasoning, but with it I don’t find it too mysterious that inertial mass should be equal to gravitational mass – locally. By locally, I mean where flat spacetime is indistinguishable from curved spacetime.
For example, let’s hypothetically say that gravitational mass is a property of curved space or flat time, and that inertial mass is a property of flat space or curved time – then no local experiment would see any difference between the two, as it may not be able to resolve the curvature . . . . or maybe the curvatures are just equal locally. This is simple enough for me to buy into, and I imagine that there could also be even more complicated explanations, which nevertheless produce the same result.
I do like the idea better that locally the curvatures are both complementary and equal to each other – just to make extra sure we can’t tell the difference.
Still doesn’t answer how they become different from one another in non-local considerations.
Other questions are: would the data to which MOND is fit result from a non-local measurement in this context? Is it possible that the effects of modified inertia and tidal forces are two sides of the same coin? Are either or both of these effects a way in which “gravity” interacts with E&M?
This may be related to Mach’s principle.
And yes, it can be hard to tell the difference between non-local and tidal effects. Even the strong EP excepts tidal effects; only truly local experiments are location invariant. But then, how local is local?
These are deep issues that are by and large being ignored.
“How local is local?” I have no idea, though I like the question. Maybe in a truly Machian sense, we should suggest that “local” is at least defined by two related spaces – like maybe at the observer, and at the observer’s horizon?
Seems ironic, but after-all we are talking about “spooky action at a distance” here.