In the last post, I noted some of the sociological overtones underpinning attitudes about dark matter and modified gravity theories. I didn’t get as far as the more scientifically interesting part, which illustrates a common form of reasoning in physics.

About modified gravity theories, Bertone & Tait state

“the only way these theories can be reconciled with observations is by effectively, and very precisely, mimicking the behavior of cold dark matter on cosmological scales.”

Leaving aside just which observations need to be mimicked so precisely (I expect they mean the power spectrum; perhaps they consider this to be so obvious that it need not be stated), this kind of reasoning is both common and powerful – and frequently correct. Indeed, this is exactly the attitude I expressed in my review a few years ago for the Canadian Journal of Physics, quoted in the image above. I get it. There are lots of positive things to be said for the standard cosmology.

The upshot of this reasoning is, in effect, that “cosmology works so well that non-baryonic dark matter must exist.” I have sympathy for this attitude, but I also remember many examples in the history of cosmology where it has gone badly wrong. There was a time, not so long ago, that the matter density had to be the critical value, and the Hubble constant had to be 50 km/s/Mpc. By and large, it is the same community that insisted on those falsehoods with great intensity that continues to insist on conventionally conceived cold dark matter with similarly fundamentalist insistence.

I think it is an overstatement to say that the successes of cosmology (as we presently perceive them) prove the existence of dark matter. A more conservative statement is that the ΛCDM cosmology is correct if, and only if, dark matter exists. But does it? That’s a separate question, which is why laboratory searches are so important – including null results. It was, after all, the null result of Michelson & Morley that ultimately put an end to the previous version of an invisible aetherial medium, and sparked a revolution in physics.

Here I point out that the same reasoning asserted by Bertone & Tait as a slam dunk in favor of dark matter can just as accurately be asserted in favor of MOND. To directly paraphrase the above statement:

“the only way ΛCDM can be reconciled with observations is by effectively, and very precisely, mimicking the behavior of MOND on galactic scales.”

This is a terrible problem for dark matter. Even if it were true, as is often asserted, that MOND only fits rotation curves, this would still be tantamount to a falsification of dark matter by the same reasoning applied by Bertone & Tait.

Let’s look at just one example, NGC 1560:


The rotation curve of NGC 1560 (points) together with the Newtonian expectation (black line) and the MOND fit (blue line). Data from Begeman et al. (1991) and Gentile et al. (2010).

MOND fits this rotation curve in excruciating detail. It provides just the right amount of boost over the Newtonian expectation, which varies from galaxy to galaxy. Features in the baryon distribution are reflected in the rotation curve. That is required in MOND, but makes no sense in dark matter, where the excess velocity over the Newtonian expectation is attributed to a dynamically hot, dominant, quasi-spherical dark matter halo. Such entities cannot support the features commonly seen in thin, dynamically cold disks. Even if they could, there is no reason that features in the dominant dark matter halo should align with those in the disk: a sphere isn’t a disk. In short, it is impossible to explain this with dark matter – to the extent that anything is ever impossible for the invisible.
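For concreteness, here is a minimal sketch of how that boost is computed. The “simple” interpolation function below is one common choice among several similar forms; the function and variable names are illustrative, not any particular published pipeline.

```python
import numpy as np

A0 = 1.2e-10  # Milgrom's acceleration constant [m/s^2]

def mond_velocity(v_newton, radius):
    """Map the Newtonian speed of the observed baryons to the MOND prediction.

    Uses the 'simple' interpolation function nu(y) = 1/2 + sqrt(1/4 + 1/y)
    with y = g_N/a0. v_newton in m/s, radius in m.
    """
    g_newton = np.asarray(v_newton) ** 2 / np.asarray(radius)  # baryonic acceleration
    nu = 0.5 + np.sqrt(0.25 + A0 / g_newton)  # boost factor; nu -> 1 when g_N >> a0
    return np.sqrt(nu) * v_newton             # g_obs = nu * g_N, so V = sqrt(nu) * V_N
```

The Newtonian speed of the baryons is fixed by the observed light and gas; the only physical freedom is the stellar mass-to-light ratio scaling the stellar contribution.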

NGC 1560 is a famous case because it has such an obvious feature. It is common to dismiss this as some non-equilibrium fluke that should simply be ignored. That is always a dodgy path to tread, but might be OK if it were only this galaxy. But similar effects are seen over and over again, to the point that they earned an empirical moniker: Renzo’s Rule. Renzo’s rule is known to every serious student of rotation curves, but has not informed the development of most dark matter theory. Ignoring this information is like leaving money on the table.

MOND fits not just NGC 1560, but very nearly* every galaxy we measure. It does so with excruciatingly little freedom. The only physical fit parameter is the stellar mass-to-light ratio. The gas fraction of NGC 1560 is 75%, so M*/L plays little role. We understand enough about stellar populations to have an idea what to expect; MOND fits return mass-to-light ratios that compare well with the normalization, color dependence, and band-pass dependent scatter expected from stellar population synthesis models.

The mass-to-light ratio from MOND fits (points) in the blue (left panel) and near-infrared (right panel) pass-bands plotted against galaxy color (blue to the left, red to the right). From the perspective of stellar populations, one expects more scatter and a steeper color dependence in the blue band, as observed. The lines are stellar population models from Bell et al. (2003). These are completely independent, and have not been fit to the data in any way. One could hardly hope for better astrophysical agreement.


One can also fit rotation curve data with dark matter halos. These require a minimum of three parameters to MOND’s one. In addition to M*/L, one also needs at least two parameters to describe the dark matter halo of each galaxy – typically some characteristic mass and radius. In practice, one finds that such fits are horribly degenerate: one cannot cleanly constrain all three parameters, much less recover a sensible distribution of M*/L. One cannot construct the plot above simply by asking the data what it wants as one can with MOND.
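For contrast, a hedged sketch of the conventional fit, assuming a pseudo-isothermal halo (one common choice) and hypothetical input files; the broad, correlated uncertainties returned for M*/L, rho0, and rc are the degeneracy at issue.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical inputs: radius [kpc], observed speed [km/s], and the Newtonian
# speeds of the stars (computed at M*/L = 1) and gas from the observed maps.
r, v_obs = np.loadtxt("rotation_curve.dat", unpack=True)  # placeholder file
v_star1, v_gas = np.loadtxt("baryons.dat", unpack=True)   # placeholder file

def v_halo(r, rho0, rc):
    # Pseudo-isothermal halo: V^2 = 4*pi*G*rho0*rc^2 * [1 - (rc/r)*arctan(r/rc)];
    # 4*pi*G is folded into rho0 here so that rho0*rc^2 carries (km/s)^2.
    return np.sqrt(rho0 * rc**2 * (1.0 - (rc / r) * np.arctan(r / rc)))

def v_total(r, ml, rho0, rc):
    # Contributions add in quadrature; M*/L rescales only the stellar term.
    return np.sqrt(ml * v_star1**2 + v_gas**2 + v_halo(r, rho0, rc)**2)

popt, pcov = curve_fit(v_total, r, v_obs, p0=[0.5, 400.0, 2.0])
print("M*/L, rho0, rc:", popt)
print("covariance:\n", pcov)  # large off-diagonal terms signal the degeneracy
```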

The “disk-halo degeneracy” in dark matter halo fits to rotation curves has been much discussed in the literature. Obsessed over, dismissed, revived, and ultimately ignored without satisfactory understanding. Well, duh. This approach uses three parameters per galaxy when it takes only one to describe the data. Degeneracy between the excess fit parameters is inevitable.

From a probabilistic perspective, there is a huge volume of viable parameter space that could (and should) be occupied by galaxies composed of dark matter halos plus luminous galaxies. Two identical dark matter halos might host very different luminous galaxies, so would have rotation curves that differed with the baryonic component. Two similar looking galaxies might reside in rather different dark matter halos, again having rotation curves that differ.

The probabilistic volume in MOND is much smaller. Absolutely tiny by comparison. There is exactly one and only one thing each rotation curve can do: what the particular distribution of baryons in each galaxy says it should do. This is what we observe in Nature.

The only way ΛCDM can be reconciled with observations is by effectively, and very precisely, mimicking the behavior of MOND on galactic scales. There is a vast volume of parameter space that the rotation curves of galaxies could, in principle, inhabit. The naive expectation was exponential disks in NFW halos. Real galaxies don’t look like that. They look like MOND. Magically, out of the vast parameter space available to galaxies in the dark matter picture, they only ever pick the tiny sub-volume that very precisely mimics MOND.

The ratio of probabilities is huge. So many dark matter models are possible (and have been mooted over the years) that it is indefinably huge. The odds of observing MOND-like phenomenology in a ΛCDM universe are practically zero. This amounts to a practical falsification of dark matter.

I’ve never said dark matter is falsified, because I don’t think it is a falsifiable concept. It is like epicycles – you can always fudge it in some way. But at a practical level, it was falsified a long time ago.

That is not to say MOND has to be right. That would be falling into the same logical trap that says ΛCDM has to be right. Obviously, both have virtues that must be incorporated into whatever the final answer may be. There are some efforts in this direction, but by and large this is not how science is being conducted at present. The standard script is to privilege those data that conform most closely to our confirmation bias, and pour scorn on any contradictory narrative.

In my assessment, the probability of ultimate success through ignoring inconvenient data is practically zero. Unfortunately, that is the course upon which much of the field is currently set.


*There are of course exceptions: no data are perfect, so even the right theory will get it wrong once in a while. The goof rate for MOND fits is about what I expect: rare, but more frequent for lower-quality data. Misfits are sufficiently rare that to obsess over them is to refuse to see the forest for a few outlying trees.

Here’s a residual plot of MOND fits. See the peak at right? That’s the forest. See the tiny tail to one side? That’s an outlying tree.

Residuals of MOND rotation curve fits from Famaey & McGaugh (2012).

73 thoughts on “It Must Be So. But which Must?”

  1. Interesting post. Which galaxies are the outliers? Dragonfly 44?
    Could a combination of unseen black holes + MOND explain an outlier like Dragonfly 44, or is it non-equilibrium?


  2. Hi Stacy, If you haven’t already, I urge you to have a look at papers by Mike McCulloch on Quantised Inertia. The fundamental tenet is that in galaxies, it is inertial mass that is modified (reduced) at the ultra-low accelerations evident at the edges. The velocities of stars can be calculated from first principles of the theory, and give results at least comparable to MOND. I have recently tried to figure out implications for particle physics.


  3. I wanted to post on the other entry about the exact same thing – that in reality, DM theory must be tuned to mimic the behavior of MOND on galactic scales. I mean – that was the obvious thing for me when I saw that paragraph that you quoted.
    And yes, I believe only the most ignorant (about MOND) could not see this problem.
    But as you say, I believe there is a deeper problem here – given the existing prevailing bias, it is difficult for one to become sufficiently familiarized with MoND / modified gravity theories, and hence one will forever remain on the arrogant part of the plot.
    For teachers / researchers, it might be difficult to accept an idea that is contrary to your lifetime’s work. For students – if your teacher doesn’t point you in this direction (why would he/she – it is contrary to the lifetime’s work), it would be very difficult to become familiar with the subject. I mean, all your peers are pushing you towards DM; what’s more, they might even penalize you if you invoke different ideas.
    For me – I don’t know, call me a skeptic, or maybe it has to do with the way I learned physics in high school (much respect for my teacher in this regard) – but if the data point consistently to a very different direction than what the theory is saying, then for sure one must start questioning the validity of the assumptions made with respect to the theory. I always regarded DM as a placeholder for the error between theory and observations, and said that DM maps are, in fact, error maps. But it was funny when I expressed this view – which, in the end, is the truth, as DM must be invoked to reconcile the observations – that pretty much every one of my peers said that I don’t know what I’m talking about. DM is real and there is no error between observations and theory! (Well, of course – in the end you fit the observations with DM, so there you go, no error.)
    It happened exactly as you’re saying – repeat it often enough and everybody starts to forget why it was initially introduced and starts to believe it is real.
    I particularly liked your description of the understanding of gravity starting with Galilei (constant acceleration everywhere) -> Newton (acceleration depending on the masses and separations) -> Einstein (deviations in strong fields) and how this was a progressively larger-scale theory (local for Galilei, Solar system for Newton and with extensions for Einstein).
    But why must we assume that GR as is should be valid on all scales? History taught us that changing the scales by orders of magnitude also requires changes in our theoretical framework. Why stop now, especially when measurements show a different story?


  4. Exactly. The extragalactic data that show the acceleration discrepancy are the only data that test GR at such low acceleration scales. So one must invoke dark matter to save it, while you could just as well say it flunks this test.
    I myself had a very hard time accepting that perhaps I had been wrong to be so sure the answer had to be dark matter. So this is indeed very much an issue of perspective and training. Getting past that is what the scientific method is for, and why I have repeatedly made and tested a priori predictions of MOND. It is not healthy for the scientific endeavor that so many scientists reject consideration of MOND without checking what it does right (though they’re quick to call out what it does wrong. If we applied the same standard to CDM, we would have abandoned it a quarter century ago).
    I’ve said it before but it needs saying again: why does MOND get so many predictions right? It is neither sufficient nor satisfactory to say “dark matter does it.” One needs to understand why and how and – most importantly – be able to make the same predictions.


  5. I appreciate your articles and already thought about recommending something like what you did here:
    MOND has undeniable (on a close enough look) success in describing many observations, of galaxies at least.
    ‘Sell’ it as a behavior that the correct solution to the underlying problem has to show/cover.
    I follow discussions in other blogs, recently some article by Ethan Siegel on why LCDM is right. From these I get an impression of what is going wrong here. I myself originally assumed MOND to just fit observed rotation curves (and similar observations) with some modified gravitational force law. That is not convincing; I have learned in other areas that you can fit data with a few parameters surprisingly well, and not rarely better than with some honest model. So the first important point, not always appreciated, is the use of essentially just one parameter, which is always the same.
    The next point is the very unfortunate naming ‘MOND’: ‘Modification’ is already ugly; ‘Newtonian’ is superseded by GR and otherwise understood in a way that renders modifications nonsense.
    So already from that perspective it is important that, first of all, a common, very specific behavior is found that can be described in this way. This anyway does not explain any deeper reason for the ‘MOND behavior’, since some reconciliation with GR and probably some suitable quantum field will be needed.
    When thinking about such a field it becomes apparent that perhaps there need not be a contradiction between DM and ‘MOND behavior’ in the end: there was a respective discussion in Sabine Hossenfelder’s blog recently.
    The discussion should become more precise and focused on what is really known. Clearly LCDM seems to do a good job on cosmological problems. But often the simple assumptions used for that purpose are generalized and treated as self-evident: it has to be a cold gas with irrelevant interaction. Nothing more sophisticated seems to be considered.
    As long as there are only astronomical data, these should be taken seriously on all ends – and not based on prejudices inspired by specific assumed solutions. This game may change a bit with definite particle observations, which are nowhere in sight.
    I have never seen an attempt to combine the additional mass and matter behavior in the early universe with the dynamics observable as the ‘MOND behavior’ on galactic scales in the later universe, and with laboratory findings/SM/GR anyway.


  6. To your last paragraph – there are attempts at combining the best of both worlds, like Khoury’s superfluid dark matter or Blanchet’s dipolar DM. But these are nascent, and have a long way yet to go.
    The rest of what you say aligns with what I have said in many of the review articles I cited in the previous post. The one small point I’d contest is about the one parameter. What you say is true, but it depends on the situation – how much freedom to fit do you get from the available parameters (be it one or many). In the case of MOND, you get very little: M*/L is a scaling factor; the shape is fixed by the observed light distribution. In the case of cosmology, there are 6 parameters at a minimum, and they can be varied simultaneously to fit all sorts of things (see, e.g., http://space.mit.edu/home/tegmark/movies.html). Worse, we’ve never been shy about inventing new free parameters as needed.


  7. “Worse, we’ve never been shy about inventing new free parameters as needed.”
    With enough epicycles, I mean free parameters, any theory can fit any observation. I would think there should be some rule, analogous to Occam’s razor, that would put a limit on allowed free parameters.


  8. To clarify, perhaps it should be standard scientific practice that when showing how a fit matches the data to also show, in some manner, how dependent that fit is on open parameters. For example for MoND rotation curves one would show what different, reasonable, assumptions of the M/L would result in. My understanding is that to do so for LCDM, showing other, reasonable, assumptions for the parameters, would result in such a huge number of vastly different alternate curves that the results would become laughable.


  9. In effect one would have error bars for the theory as well as the data.
    Sorry for not putting this all in one post, but was doing stream of thought.


  10. You’d think, but there is no hard and fast rule. That things were becoming too complicated was a concern that grew gradually on me through the ’90s. Now it seems limited by our current horizon… only the immediate complications are the concern; people only work on the top tier of this house of cards. The previous floor is presumed to be a solid foundation, as is the floor below that, floor over floor, right the way down to the pack of jokers that are the foundation in the sand.


  11. I found the Wiki article on the Bullet Cluster a bit discouraging with respect to a Mondian explanation. But, at least, Moti Milgrom has shown it’s possible to reduce the error for a Mondian fit for this cluster 5-fold, leaving only a factor of 2 to explain. Perhaps the Wiki article is outdated, and that discrepancy with MOND has further diminished.


  12. It seems to me that the way forward for MOND has to be to demonstrate that it explains measurements on as many different length scales as possible without the need for dark matter. Stacy’s posting on Astronomical acceleration scales shows two of these: galaxies and clusters of galaxies, but there are also wide binaries and globular clusters which could show the effect at shorter length scales. It may be that a collaborative effort is needed between specialists in all of these.


  13. I’ve done that, a long time ago (starting with my review of the evidence in the ApJ in 1998). The summary is that the mass discrepancy always sets in around the critical acceleration scale. This is true for all systems that I’m aware of, with the inclusion or exclusion of clusters of galaxies, depending on your level of precision. The mass discrepancy appears in clusters at a slightly higher acceleration scale than in galaxies. See the graph at https://tritonstation.wordpress.com/2018/09/07/astronomical-acceleration-scales/
    The slight/large difference (depending on your perspective) is the origin of the residual mass discrepancy in clusters – i.e., why they are a problem for MOND. This was discussed in the annual review I coauthored with Bob Sanders in 2002, before the bullet cluster was discovered. The bullet cluster just shows the same discrepancy as every other cluster, albeit in a visually dramatic fashion. However, its collision velocity is more naturally explained by MOND.
    It is a fool’s errand to police (or trust) wikipedia articles. The entry on the bullet cluster looks to follow a familiar script, maximizing the problem for MOND while minimizing the problem for LCDM. The quote from Hayashi is a typical whitewash, as the velocity of the bullet subcluster certainly is high for LCDM – the issue being how high is “exceptionally” high. I’ve seen analyses that range from there being a 10% chance it could happen in LCDM (unusual but hardly exceptional) to 1 part in 100,000 (exceptional to the point that it should not exist, so would falsify LCDM). The system happens to be very close to a tipping point where the probability distribution falls off sharply. As a consequence, slightly different treatments of the data, the hydrodynamical effects, and their uncertainties can lead to these seemingly major discrepancies. It is like asking “did we take a giant leap off a cliff, or just a small step?”


  14. I’m curious about the Newtonian expectation curve that appears in the graph for NGC 1560. It is radically different from the standard Keplerian expectation curve that is common throughout the literature of the last 40 years. How is this curve being derived, i.e., what are the structural assumptions (both mathematical and physical) underlying the derivation? It is interesting that there is no longer an expected steep decline but there remains a significant discrepancy for DM or MOND to accommodate.

    I wonder how the model being used differs from this Newtonian derivation that finds no discrepancy, and therefore no need for either DM or MOND: https://arxiv.org/abs/1406.2401


  15. That’s a great question that was famously discussed at the IAU 100 meeting “Internal Kinematics and Dynamics of Galaxies” in the early ’80s. There it was momentarily suggested that proper (non-Keplerian) computation of the expected rotation curve could solve the problem. Long story short, it does not. Makes for amusing historical reading though (especially the response to the presentation by Kalnajs, which is at the end of someone else’s talk, so you have to read through the book to find it).
    OK, stepping back: a Keplerian curve is appropriate only in the limit of a point mass (e.g., the sun). If you get far enough away, everything looks like a point mass. So one often sees this used as an illustration, even though it is not technically correct for galaxies, because galaxies are not point sources.
    For an extended mass distribution, one must numerically solve the Poisson equation to find the gravitational potential corresponding to the observed 3D density distribution. In practice, we see the projected surface density profile. That gives us 2 of the 3 dimensions; one usually azimuthally averages this then assumes a disk thickness corresponding to what we observe for edge-on galaxies. (Sadly, we cannot pick up galaxies and examine all 3 dimensions in real time.)
    What you see in the figure is the result of this calculation. The Newtonian expectation is the circular speed of a test particle orbiting in the gravitational potential corresponding to the observed distribution of luminous mass. This is a rigorous calculation (at least the way we do it; others have been known to cut corners by assuming exponential or even spherical disks). (A toy version of this calculation, for the idealized case of a razor-thin exponential disk, is sketched at the end of this comment.)
    [It looks like the paper you cite is part rigorous, part makes stuff up for the mass distribution – e.g., the various k-values in their Fig. 3. The green lines in their Fig. 4 appear to be what the mass distribution would need to be to explain the rotation curves. But we observe that; it ain’t the same. Hence the problem.]
    One does not see the Keplerian decline in most galaxies because we don’t see far enough out. The finite size of the galaxy dominates; we wouldn’t expect to perceive a Keplerian downturn until far, far outside the last measured point. This is generally the case in galaxies. [Historically, great importance was placed on a few cases, like NGC 2403, where the rotation curve could be traced very far out. Even there, the decline is not yet Keplerian, but it is clearly getting there.]
    In high surface brightness galaxies, the stars do in fact suffice to explain much of the rotation till pretty far out. (See, e.g., my recent post on the Milky Way.) For low surface brightness galaxies, this is not the case – the amplitude of the mass discrepancy is pronounced already at small radii. This is what you see in NGC 1560, and is what attracted my attention to the issue in the first place. I had had my own prediction for what low surface brightness galaxy rotation curves should do, but my own data falsified that. The only correct a priori prediction was that of Milgrom (1983).
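    The toy version promised above: a minimal sketch assuming the closed-form solution for a razor-thin exponential disk (Freeman 1970). Real analyses solve the Poisson equation numerically for the observed, azimuthally averaged surface density with a finite disk thickness; this analytic special case just illustrates the shape of the result.

    ```python
    import numpy as np
    from scipy.special import i0, i1, k0, k1

    G = 4.301e-6  # Newton's constant in kpc * (km/s)^2 / Msun

    def v_disk(R, sigma0, Rd):
        """Circular speed of a razor-thin exponential disk (Freeman 1970).

        Sigma(R) = sigma0 * exp(-R/Rd). R and Rd in kpc, sigma0 in Msun/kpc^2;
        returns km/s. Note there is no Keplerian decline anywhere near the
        visible disk: the curve peaks near R ~ 2.2 Rd and falls only slowly.
        """
        y = R / (2.0 * Rd)
        return np.sqrt(4.0 * np.pi * G * sigma0 * Rd * y**2 *
                       (i0(y) * k0(y) - i1(y) * k1(y)))

    # Illustrative numbers: central surface density 5e8 Msun/kpc^2, Rd = 2 kpc.
    R = np.linspace(0.1, 20.0, 40)
    print(v_disk(R, 5e8, 2.0))
    ```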


  16. Well yes, the Keplerian expectation curve is inappropriate for observable galactic rotation curves. Nonetheless, it was commonly employed for most of the last 40 years. See, for just one example, this NED article from 2000: https://ned.ipac.caltech.edu/level5/March01/Battaner/node3.html

    You and I both know that 1) the Keplerian expectation is wrong (in the observable regime) and, 2) it was widely employed and a principal source of the dark matter conjecture. This is a matter of historical record. It is also an incidental consideration at this point. I do not wish to belabor it.

    What is important is that the expectation curve is still being inappropriately modeled. The newer technique you describe beginning with,

    “For an extended mass distribution, one must numerically solve the Poisson equation to find the gravitational potential corresponding to the observed 3D density distribution.”,

    is not mathematically wrong in itself, but it is physically incomplete. It simply ignores the fact that the local mass distribution of the disk, relative to the orbiting body under consideration, has to be weighted properly if a realistic expectation curve is to be generated. Although this Newtonian expectation, as described, is more sophisticated it is still essentially wrong. And it is wrong for a simple and straightforward reason; it does not produce an expectation curve that agrees with observation.

    This Newtonian approach to the problem, as in the Keplerian case, requires an ad hoc patch. In both cases it is assumed that the qualitative model underlying the calculations is correct and complete, and that therefore, the discrepancy with observation is external to the modeling technique employed. This in turn requires either DM or MOND. But it is the failure to consider the model itself as the source of the problem that leads to the false choice between these two ad hoc solutions. Both the Keplerian and Newtonian models reduce galactic dynamics to a two-body problem. That cannot possibly work for a typical galactic structure, and indeed, it does not.

    The choice between Dark Matter and MOND is a red herring. What is needed is a better analytical model, one that properly represents the simple fact that your own work has made clear. The Radial Acceleration Relation means that the distribution of baryonic matter is sufficient to account for the observed rotation curves. What is now needed is a theoretical model that accurately reflects that reality without invoking superfluous, ad hoc patches to either reality itself, or to well established physics.

    Your rather off-handed dismissal of the Pavlovich paper (https://arxiv.org/abs/1406.2401) is disappointing in that regard. It is exactly the type of qualitative rethinking that has to take place at the theoretical level. Whatever its shortcomings it is a significant theoretical attempt to come to terms with physical reality by constructing a model that better represents that underlying physical reality. Patching failed analytical models is not the way forward here, especially not in light of the RAR.


  17. It is true that over-simplified Keplerian depictions were often used to illustrate the dark matter problem. This oversimplification is a pet peeve of mine, and on occasion I have been known to interrupt colloquium speakers to point out this very flaw when they have been sloppy in selecting an illustration. Nevertheless, this does not obviate the problem, which still exists when it is done properly.
    So yes, the RAR relates the baryons directly to the dynamics. Yes, we need a proper explanation for why the baryons appear to be the source of the gravity. But no, you are not going to be able to do this without something drastic like dark matter or a generalized force law.
    You seem to imagine that somehow a more realistic modeling of galaxies will make the problem go away. NO. That is simply not possible. We have been down this path many, many times. It is known not to work.
    Please forgive me if I sound dismissive. But we’ve been there, done that. Life is too short.


  18. You and I are arguing across a great divide. You don’t see this because of exactly the sociological biases you cite with respect to Dark Matter. Everybody you know in your profession was raised and educated in a scientific academy in which the central role of empiricism has been displaced by a model-centric mathematicism. You “know” this but you can’t “see” it.

    You say things such as:

    “…the ΛCDM cosmology is correct if, and only if, dark matter exists. But does it? That’s a separate question, which is why laboratory searches are so important – including null results.”

    As you suggest, the empirical results for DM are unambiguously null. We should have ceased having discussions re the dark matter hypothesis years ago – it ain’t there – the hypothesis has failed. Scientifically speaking, DM should be a dead issue.

    So why does the DM paradigm persist in the face of extensive negative empirical results? Because in the science of the academy a model’s requirements carry more weight than the empirical results. This effectively stands the scientific method on its head. It doesn’t matter what empirical reality says, the model determines what is there.

    So, doesn’t MOND resolve this problem? No, it doesn’t because it is just a mathematical fix that has no explanatory power. It provides no falsifiable physical account for the changed behavior in the low-acceleration regime. MOND is math, it is not physics.

    In this, it is fair to say that GR itself offers no explanation for the gravitational effect, unless one accepts the empirically unsupported existence of a substantival spacetime. Why not then accept MOND on the same terms? Simply, because we shouldn’t accept either GR or MOND as accurate descriptions of physical reality, even if we allow them as useful computational tools.

    It is this deep-seated incuriosity about the underlying physics that is most disturbing about modern science. We have lots of people working on the scientifically inert String Theory, and absolutely no one pursuing an investigation into the physical cause of the gravitational effect. It is deemed sufficient to only calculate outcomes without any need to understand underlying physical processes.

    This model-centric culture is not only disturbing, but deeply problematic. It has resulted in standard models that can mimic observations while bearing no structural resemblance to physical reality. Both standard models are populated by entities and events that are not observed in empirical reality.

    Which brings me to the Pavlovich model cited above, which you summarily dismissed because it calculates the mass distribution of a galaxy from the model itself and you find this calculation discrepant with the mass distribution you calculate differently (mass and mass distribution being calculated, not directly observed). So your dismissal of this alternate model is of exactly the same form as the DM advocates’ dismissal of MOND, isn’t it? It is just a kind of pervasive, model-centric territoriality that hampers scientific progress.

    As to your “…been there, done that” assertion, I’d credit it more if you could cite even one example of a seriously considered alternative galactic model employing Zwicky’s concept of gravitational viscosity. I’ve never seen one.


  19. @budrap I believe Dr. McGaugh has already addressed some of the points you highlight in this very thread (for instance, see the reply to my previous post).
    As for the dismissal of the paper you cite, I would argue that it is rather naive to assume that in all these years after the discovery of the missing matter basically no one ever tried to analyze in detail the mass distribution in galaxies and perform computations based on that. I’d hazard to say that this was one of the very first things tried in order to reconcile the observations with the theory.


  20. That’s certainly what I thought I had said. Obtaining a flat rotation curve with no modification of gravity requires a mass distribution that declines as 1/r^2 for a sphere, or 1/r for a disk (see the short derivation at the end of this comment). But the mass distribution of stars falls off much faster: exponentially. Amusingly, the gas roughly falls off as 1/r, which is crudely the right shape, but the amplitude is too low. So one of the many false alleys we’ve been down is some form of dark matter that follows the gas distribution. Still need dark matter, just in a disk instead of a sphere. This has other problems… long story short, this isn’t satisfactory. If one insists on maintaining known physics without dark matter, one is inevitably driven to invoke a variable M/L that is whatever it needs to be to map the observed distribution of matter to that required by dynamics. Basically, a rolling fudge factor.
    I agree with some of budrap’s comments about the obsession with model building and the culture that has developed around the subject of dark matter. I’ve spoken with many scientists who seem to think any fudge that maintains the dark matter paradigm isn’t a fudge at all, but must be how it works. I am particularly disturbed that many of these folks seem to think it better to fit the data after the fact (with whatever fudge, however absurd) than to make genuine a priori predictions. This abandons the most fundamental aspect of the scientific method.
    I think budrap mischaracterizes MOND, which if nothing else is certainly falsifiable. Arguably, it has been falsified. So perhaps it is just a first approximation of some deeper theory. Or perhaps some of the data are misleading – as Feynman noted, even the correct theory disagrees with some of the data, because some of the data are inevitably incorrect.
    I certainly agree that we lack a deeper physical understanding of what is going on, with either DM or MOND. At least that much is clear in MOND: we do not understand the mechanism by which inertial mass departs from equality with gravitational charge. In the case of DM, there are an uncountably infinite number of possibilities, so we can and often do comfort ourselves with what is merely the illusion of understanding.
    There are many examples of this illusion of comprehension. My favorite example lately is the assertion I’ve heard repeatedly that dwarfs like Crater 2 and And XIX are “fully explained by LCDM.” This is an example of the abandonment of predictive power. The expectation (not even a proper prediction) in LCDM was for much higher velocity dispersions in these dwarfs. First we invoked feedback, but that doesn’t suffice in these cases, so in addition we further invoke tidal stripping. That can happen, to be sure, but we don’t predict where and when, we just say “oopsies, that dwarf’s velocity dispersion is too low. Must have been stripped.” This makes no attempt to explain why MOND correctly predicted those cases a priori. Willfully ignoring successful predictions guarantees a lack of understanding, but maintains the illusion of comprehension.
    As for the Pavlovich model, budrap is correct that I was quick to dismiss it. However, contrary to budrap’s assertion, it is not because I am guilty of ignoring alternatives. Quite the contrary. I have spent an enormous amount of time reviewing literally hundreds of such alternatives. At most, only one of these can be correct. The rest have to be off the rails in some way. I now have a great deal of experience in spotting when something is off the rails. Denying that there is a problem at all is off the rails.
    It is important in science to admit when one is wrong. I started from the same position as most of my colleagues who persist with DM. I was initially just as dismissive of MOND as any of them. I was wrong to be so. It’d be great if something better were to come along. Maybe it has already, and it hasn’t sunk in yet. But we need to move forward, not backwards: not all alternatives are equally meritorious.
    As for Zwicky’s gravitational viscosity, how is it my job to cite for you the work of others exploring something that might have been viable in 1937 but clearly is not now? Literature searches can be made on-line by anyone. NASA ADS is best, though for something like this Google scholar might have more reach (in number of unrefereed sources, not in time: it is lousy for things that existed BG (Before Google). Because nothing existed BG as far as Google is concerned.) [Corollary: if I say we’ve been there done that, it is because we’ve been there and done that. I can understand being reluctant to accept such an assertion of an experience one hasn’t shared. But it has been my experience and the experience of the relevant scientific community. That one hasn’t shared that experience does not place one in a strong position to deny that it happened.]
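    A one-line check of the scalings quoted at the top of this comment; the disk case holds only up to a geometric factor of order unity, since a disk’s rotation is not set solely by the enclosed mass:

    $$
    V^2 = \frac{G\,M(r)}{r} = \text{const} \;\Longrightarrow\; \frac{dM}{dr} = \frac{V^2}{G} =
    \begin{cases}
    4\pi r^2 \rho(r) & \Rightarrow\; \rho \propto r^{-2} \quad \text{(sphere)},\\
    2\pi r\,\Sigma(r) & \Rightarrow\; \Sigma \propto r^{-1} \quad \text{(disk)}.
    \end{cases}
    $$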


  21. Also, there have been attempts to identify an underlying physical source for MoND. These attempts have not been successful yet, but the fact that MoND is so successful at predicting rotation curves suggests, though does not guarantee, that it is indeed a good approximation of some deeper physical law.


  22. Gravitational viscosity was rather spuriously dismissed in 1941 by Chandrasekhar’s paper, The Time of Relaxation of Stellar Systems. It is long past time to reconsider it. Mathematicians are not infallible and their conclusions re physical systems cannot stand inviolable once made.


  23. Re: quantized inertia
    I’ve seen that there are several predictions made by quantized inertia for which you might already have an answer. Namely: ”there should not exist any mutual acceleration below about 7×10^-10 m/s^2 today, and further back in time this minimum acceleration, a_min=2c^2/(Hubble scale), was higher, since the Hubble scale was smaller, so ancient (high redshift) galaxies should have greater spin for less visible mass.”
    Is there any data to validate / invalidate this prediction?

    And the other one: ” MiHsC predicts a minimum acceleration in nature, 6.7×10^-10 m/s^2, the acceleration for which the Rindler horizon reaches the Hubble horizon and can’t be any larger”.
    This is akin to the discussion we had some time ago about why isn’t there an acceleration constant, like the speed of light is for speeds.
    It is interesting that the predicted minimum acceleration is larger than a0 (by almost an order of magnitude). With this, I wonder how QI can replicate MoND in galaxies.
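    A quick arithmetic check of the quoted numbers (a sketch; the assumption here is that Θ is the Hubble *diameter*, 2c/H0, since that choice reproduces the quoted 6.7×10^-10 m/s^2, in which case a_min reduces to c·H0):

    ```python
    c = 2.998e8                 # speed of light [m/s]
    H0 = 70e3 / 3.086e22        # 70 km/s/Mpc expressed in 1/s
    Theta = 2.0 * c / H0        # assumed: Theta = Hubble diameter (not c/H0)
    a_min = 2.0 * c**2 / Theta  # quoted formula; equals c*H0 for this Theta
    a0 = 1.2e-10                # MOND acceleration scale [m/s^2]
    print(f"a_min = {a_min:.1e} m/s^2, a_min/a0 = {a_min/a0:.1f}")
    # a_min ~ 6.8e-10 m/s^2, about 5.7 * a0: 'almost an order of magnitude'
    ```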


  24. There are many galaxies where accelerations of 0.1 a0 are observed, and some in which it is only a few percent. Crater 2 is one of the record holders, around 10^-12 m/s/s if memory serves. Theories that predict a minimum acceleration higher than this are excluded. This was true of the original form of conformal Weyl gravity (though there is another term that appears now to matter), and some claims about NFW halos (Navarro was at one point advocating a minimum acceleration higher than this). If MiHsC predicts a minimum acceleration as high as 6.7E-10, it too would be excluded.


  25. As I was short of time, my previous comment re Chandrasekhar’s paper was a place holder for a question I wanted to ask. Is that paper the source for your claim that “Zwicky’s gravitational viscosity…might have been viable in 1937 but clearly is not now”? If so, I would have to say that the irrelevancy is greatly overstated. Chandrasekhar’s paper, whatever its mathematical merits, is, itself, of dubious relevance to galactic physics since the model was derived primarily by consideration of the dynamics of stellar clusters.

    If, on the other hand, you have additional reasons for considering gravitational viscosity irrelevant to the “missing mass” problem I would appreciate knowing what they are. I have found no evidence that the 1941 “dismissal” has ever been reconsidered. That’s why I asked for a reference – like dark matter, the evidence doesn’t appear to be there.


  26. Zwicky’s discussion, so far as I read it, had to do with analogies to stars and things that had physical contact. Viscosity is a fluid dynamical effect. Stuff in direct contact. What we’re discussing here is the gravity between stars separated by vast stretches of empty space. Stuff as far from direct contact as it gets. So the words “gravitational viscosity” are an oxymoron to me. They just don’t go together. Like “living dead” or “larger half” or “astro particle physics”. I don’t even know how to begin considering a proposition whose component words make no sense together.


  27. Several things come immediately to mind here:

    1) Atoms in a solid or liquid are electro-statically bound – they are not in direct contact. They only appear so from our human perspective.
    2) The ‘vast stretches’ that separate the stars in the outer regions of a galactic disk from each other are far smaller than the vast distance that separates the outer regions from the galactic core.
    3) What lies between the stars in the outer region is not empty space, but gas, plasma, other non-luminous baryonic matter, and electromagnetic radiation. At minimum there is always electromagnetic radiation. The concept of “empty space” is a 19th century anachronism.
    4) Therefore, the term gravitational viscosity is not an oxymoron, but is reasonably analogous to the concept of fluid viscosity, the only differences arising due to relative scale factors.


  28. I feel exasperation inside of me. You quote more absurdities than I care to correct. For example, the electrostatic bonding you refer to is the very definition of direct physical contact. That’s how direct physical contact works at the microscopic level. And of course there is gas in the interstellar medium. That’s what we measure rotation curves with. The average density of interstellar gas is about 1 hydrogen atom per cubic centimeter – less than the best vacuum we can make in the laboratory. So for all practical purposes, space remains the gold standard for defining empty. Unless you redefine viscosity, gravitational viscosity is an oxymoron.


  29. Well, I went to the source for quantised inertia relative to rotation curves (https://arxiv.org/abs/1709.04918) and my hopes were shattered…
    Honestly, if I were a reviewer for that paper I would have rejected it, even though astrophysics is only a passion of mine and I’m not active in the field.
    Neglecting the way he adds up the magnitudes of vectors that don’t share a direction, the way he arrives at a MoNDian formula is completely arbitrary and unsupported even in his framework.
    Take for instance this paragraph, after eq. (5) – “At the edge of a galaxy, |a| becomes small, so the acceleration must be maintained above the minimum acceleration allowed in quantised inertia (McCulloch, 2007) by the value of a′, and so a′ = 2c^2/Θ”. I see no real explanation why, although he sets a minimum acceleration in his theoretical framework, he then still allows a, the acceleration responsible for the centripetal force, to go below it. For this I say his relations are not supported by his framework.
    Furthermore, suppose that he is to model the bobbing effect for a star that is slightly out of the plane of the galaxy. If he applies the same reasoning, he will again write the acceleration as a sum of terms, one responsible for the bobbing up and down about the galactic plane, and one for the inhomogeneities, and make this second term equal to the minimum acceleration. Except that this second term would also include the centripetal acceleration, for which he already concluded that it is not equal to the minimum allowed acceleration. For this reason I say his formulas are arbitrary.
    Even more, assuming an n-body problem (i.e. a star vs a galaxy) – if his framework allows a minimum acceleration of 2c^2/Θ, then how come the inhomogeneities in the matter distribution act only with this amount and not with the sum of all the individual bodies? I mean – all the stars on the other side of the galaxy must each exert the same minimum acceleration, so you cannot collapse the entire effect into just the minimum acceleration. In fact, the acceleration felt by a star at the periphery would be huge and the star would rapidly fall towards the galactic center.


  30. A constant minimum acceleration predicts rising rotation curves, not flat ones: V^2 = a_min*r. So any theory with a fixed minimum acceleration a_min predicts V rising as r^1/2. We observe rotation curves down to accelerations of 0.1*a0, and sometimes below, so that excludes a_min > 10^-11 m/s/s.
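    Spelled out: wherever a universal floor $a_{\min}$ sets the centripetal acceleration,

    $$
    \frac{V^2}{r} = a_{\min} \;\Longrightarrow\; V(r) = \sqrt{a_{\min}\, r} \propto r^{1/2},
    $$

    a rising curve. Flat rotation observed at accelerations $V^2/r \approx 0.1\,a_0 \approx 1.2\times10^{-11}\ \mathrm{m\,s^{-2}}$ therefore caps any such floor at roughly $10^{-11}\ \mathrm{m\,s^{-2}}$.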


  31. I assure you that my own exasperation is only exceeded by my disappointment. Your vague intimation that you, or someone in the theoretical department, has already done an exhaustive and comprehensive evaluation of all possible alternative methods for modeling galactic dynamics, and therefore no new models need apply – that is nothing but a weak, very weak argument from authority.

    This is particularly annoying because it is in service of propping up a model that is a clear failure. It is a failed model for the simple reason that it cannot accurately calculate galactic rotation curves without the ad hoc invocation of either Dark Matter or MOND.

    Then, there is this:

    1. “Obtaining a flat rotation curve with no modification of gravity requires a mass distribution that declines as 1/r^2 for a sphere, or 1/r for a disk. But the mass distribution of stars falls off much faster: exponentially. Amusingly, the gas roughly falls off as 1/r, which is crudely the right shape, but the amplitude is too low.”

    2. “If one insists on maintaining known physics without dark matter, one is inevitably driven to invoke a variable M/L that is whatever it needs to be to map the observed distribution of matter to that required by dynamics. Basically, a rolling fudge factor.”

    First, if paragraph 1 is correct, then the M/L cannot be constant as you seem to assume but must vary, since the luminous matter falls off much faster than the non-luminous gas. The M/L should grow larger as the luminosity drops toward zero in the gas-only region.

    Second, I do indeed insist on maintaining known physics without dark matter and if that means a variable M/L, then so be it. The constant M/L you insist upon is only your assumption. To put it succinctly, in science, the math has to follow the physics – not the other way around. You can’t claim as fact, a constant M/L, when, in fact, it is only an assumption, and one that is contradicted by observations implying a variable M/L.

    As to your ill-thought-out remarks re gravitational viscosity, it only adds to my impression that no one in the theoretical community over the last 70 plus years has given it any meaningful consideration. This despite its obvious potential relevance to galactic dynamics. How then have theorists come to treat it as irrelevant? Because… It is known – of course.

    Sooner or later, Dr McGaugh, you’re going to have to choose between doing Science and the Guild of Academic Scientists to which you belong. I don’t imagine that will be easy. I hope you choose wisely.


  32. @budrap
    While I can see such an analogy to be something to think about, I would point out there is a major difference between the EM interaction and gravity – that is, in a fluid, the interactions between particles may be both attractive and repulsive, while the gravitational interaction is always attractive. I believe that repulsive interactions also play an important role in fluid viscosity, as for something to flow you need to displace (push) the molecules that are already filling the space in front of the flow. With gravity, you can’t do that.
    I think another possible test for gravitational viscosity is in accretion disks around young/forming stars (i.e. where the disk still has an important mass) and I believe these systems behave as expected in ND (i.e. by tracking nascent planets).


  33. It is a very complicated problem to rule out partially formulated theories based on classical analogies. The discrepancy is not even necessarily of a gravitational origin, though we may hope to describe it in that way. Doesn’t the data just show a relationship between our electromagnetic measurement of the surface brightness and the dispersion of the intrinsic redshift?


  34. I made the choice between intellectual honesty and loyalty to a particular guild over twenty years ago.

    The trouble with intellectual honesty is that one has to be honest. That is different from agreeing that hypothesis X appears to be wrong, so we must give equal weight to any other that comes along. At most one hypothesis can be correct: most suggestions are going to fail. I have spent more time investigating alternative forms of gravitational dynamics than perhaps any other human on this planet. This experience has made me very good at spotting when an idea is unlikely to succeed.

    But maybe I’m wrong. I’m happy to admit to being wrong – if and when that proves to be the case. That’s another downside to honesty – one should not admit to being wrong when one is not. Shouting, badgering, and bullying don’t make right, though these are in practice often successful techniques for distracting from up a weak argument. If one is so eager about gravitational viscosity, or whatever else, then demonstrate how it is consistent with everything we know so far, and further use it to make quantitative predictions for as-yet unobserved phenomena. That’s how science works.


  35. @JB – basically, yes. It is a little more involved, as one uses the observed surface brightness to infer the gravitational potential of the observed (normal) matter. Strictly speaking, it is the gradient of the gravitational potential of the normal matter that is related to the gradient of the gravitational potential observed via the rotation curve (which is inferred from the Doppler shift). So it isn’t just the surface brightness, but a theoretical entity (the gravitational potential) that is constructed from the surface brightness data. I.e., we are already spotting Newton that he knew what he was talking about.


  36. @tritonstation
    I thoroughly agree with the need for intellectual honesty, but would make one further point. A theory can be wrong but still be useful. Perhaps the best example is the Bondi-Gold-Hoyle Steady-State Theory where in 1953 Hoyle predicted the existence of an excited state of the carbon-12 nucleus which was essential for the stellar nucleosynthesis that the Steady-State Theory required. Had the Steady-State Theory never been formulated, at some time later it would have been realised that Big Bang Nucleosynthesis could not create all the elements and hence stellar nucleosynthesis was needed even for Big Bang models to produce the observed abundances of the elements.

    It is for this reason that, while I appreciate your and Apass’ criticisms of McCulloch’s MiHsC theory, it does make the specific prediction that rotation speeds should be proportionately higher in the past when the universe was smaller. That is something that can be tested.


  37. Yes, indeed… at some level, all theories are wrong insofar as they can only ever be approximations to the underlying reality. So the measure is whether they make testable predictions, and whether those predictions come true. To judge the latter, for what happens at high redshift, one must sort through what to believe in the data, which is always a dodgy proposition at high redshift. In my judgement, the best data there (from Sarah Miller https://arxiv.org/abs/1201.4386 and di Teodoro https://arxiv.org/abs/1602.04942) show essentially no evolution in rotation curves out to z = 1.5 or so: speeds were not proportionately higher. That’s a problem for many theories that would have it otherwise: MiHsC, LCDM, and versions of MOND with a0 that varies as the product c*H0. It is consistent with MOND with a constant a0, and no doubt other things as well – if we believe the data I cite. One can find data that say other things: these things take a while to sort out.


  38. Is it possible that there is no Dark Matter involved in galactic rotation curves, i.e. some modified gravity is the real explanation, while meanwhile the effects seen at cosmological scales are in fact a quite separate phenomenon and are the result of some unseen mass?
    In other words, is there any evidence that these two scales are unavoidably tied together in some way, or is it possible they are unrelated?


  39. Yes, this is possible. I prefer that there be one over-arching solution, but the universe doesn’t care about our preferences and it is indeed possible that the deficiencies we observe on galactic and cosmic scales are distinct phenomena.
    That said, if dark matter exists in the way it “must” for cosmology, I don’t see how one keeps it out of galaxies. Similarly, if the phenomena in galaxies indicate a change in the force law, this ought to have consequences for cosmology. So while I wouldn’t say they are unavoidably tied together, it does make sense that they should be.


  40. Dr McGaugh,

    First of all, I did not say, nor did I mean to imply, that you are being intellectually dishonest. I apologize if I gave you the impression that I was. The fundamental point I am trying to get across is that a model that can’t get to agreement with observations without the ad hoc invocation of either dark matter or MoND is highly unlikely to be correct in its assumptions about the actual physical galactic systems it is attempting to model. To put it another way, the missing mass is in the model’s analytical structure – not in physical reality.

    In this regard, I am using the term gravitational viscosity in the same descriptive sense that Zwicky employed in his 1937 paper. Gravitational viscosity, in that sense, is the reason that the rotation curve does not drop sharply beyond the bulge; it is the self-gravitation of the disk components.

    In an earlier post you said the following with regard to modeling the RAR data:

    “Here I adopt the simple model used to construct the radial acceleration relation: a constant 0.5 M⊙/L⊙ at 3.6 microns for galaxy disks, and 0.7 M⊙/L⊙ for bulges.”

    The need for a sharp bulge/disk differential appears to follow directly from the physics, the change in both density and structure at the disk/bulge divide. However, having a constant 0.5 M⊙/L⊙ ratio for the disk is problematic since both the density and the relative structural components of the disk change with increasing radius. I don’t see, then, how the constant M/L assumption can accurately reflect the physics of an actual galactic system. This suggests, again, that the missing mass problem lies within the model employed.


  41. Dr. McGaugh – Zwicky’s paper is here: http://adsabs.harvard.edu/abs/1937ApJ....86..217Z
    There is no “novel” physics in it, and Zwicky only uses the term “viscosity” by analogy. Zwicky’s argument was that if a star-cluster or galactic core were sufficiently dense that a star’s timescale between strong two-body gravitational encounters became shorter than its orbital period (or equivalently, if its mean-free path between two-body encounters became smaller than the size of the cluster or galactic core), then the stars would execute something more akin to a “brownian” motion instead of an “orbit,” and the cluster or galactic core would behave more like a viscous fluid than a “collisionless” fluid. Under such circumstances, the cluster/core would have relaxed to an approximation of “rigid-body rotation,” and “keplerian” estimates of cluster/core mass would have been wildly in error. Zwicky was at that time arguing that the “missing mass” problem was due to clusters or cores being in such a highly “collisional” state.

    Chandrasekhar “rejected” Zwicky’s hypothesis by computing the two-body collision timescale, and then showing that no cluster or galactic core comes close to being dense enough that it would be “collision dominated”; even in a galactic core or a globular cluster, stars will execute many orbits between strong two-body encounters. Therefore, Zwicky’s “viscous core” hypothesis stood falsified.
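
    To put rough numbers on Chandrasekhar’s point, here is a back-of-the-envelope sketch (my own, using the standard textbook estimate t_relax/t_cross ≈ N/(8 ln N); the precise prefactor doesn’t matter here):

```python
import math

def relax_over_cross(N):
    """Standard estimate of the two-body relaxation time in units of the
    crossing time for a system of N equal-mass bodies:
    t_relax / t_cross ~ N / (8 ln N)  (see e.g. Binney & Tremaine)."""
    return N / (8.0 * math.log(N))

# Even the densest stellar systems take many crossings to relax,
# i.e. they are "collisionless" rather than "collision dominated":
for name, N in [("open cluster", 1e3),
                ("globular cluster", 1e5),
                ("galaxy", 1e11)]:
    print(f"{name}: t_relax ~ {relax_over_cross(N):.2g} crossing times")
```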

    However, Zwicky’s hypothesis as refined by Chandrasekhar _did_ eventually lead to the useful concept of “dynamical friction” in stellar dynamics, https://en.wikipedia.org/wiki/Dynamical_friction — i.e. that a star with high “peculiar velocity” relative to its surrounding stellar environment will on the average lose energy during stellar encounters more often than it gains energy, and will therefore “slow down” while “heating up” the random peculiar motions of the surrounding stars. (One sees striking evidence of such “dynamical friction” during the “violent relaxation” phase of simulated galactic collisions, which to the naked eye appear to be quite “dissipative.” The “energy dissipation” is actually the result of the subpopulation of stars that get “slingshotted” to high velocity, thereby carrying energy away from the collision remnant.)
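
    For reference, the standard form of Chandrasekhar’s dynamical-friction formula, for a mass M moving at velocity v_M through a background of mass density ρ with an isotropic Maxwellian velocity dispersion σ, is (quoting the textbook result, e.g. Binney & Tremaine):

    \[
    \frac{d\mathbf{v}_M}{dt} = -\,\frac{4\pi G^2 M \rho \ln\Lambda}{v_M^3}\left[\operatorname{erf}(X) - \frac{2X}{\sqrt{\pi}}\,e^{-X^2}\right]\mathbf{v}_M, \qquad X \equiv \frac{v_M}{\sqrt{2}\,\sigma},
    \]

    where ln Λ is the Coulomb logarithm. The deceleration always opposes the peculiar velocity, which is precisely the systematic “slowing down while heating up” described above.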

    Note that even Zwicky only thought that his “viscosity” hypothesis was relevant inside the dense cores of clusters and galaxies, not the sparse outer fringes of galaxies — see Zwicky’s Fig. 1 — and therefore even if his hypothesis _had_ been correct, it would have had no relevance to the sparse outer fringes of galaxies or low surface-brightness galaxies where the observed “mass discrepancy,” “flat” rotation curve, and “radial acceleration relation” kick in. Nor does the observed “flat” portion of the rotation curve of galaxies resemble the linearly rising rotation curve predicted by Zwicky’s “viscous rigid-body core rotation” hypothesis — again, see Zwicky’s Fig. 1.

    As for the Pavlovich et al. paper, the authors have tried to approximate the newtonian gravitational field of a disk galaxy by replacing the disk with a regular lattice of mass-points. Such a discrete numerical approximation has long been known to exhibit logarithmically divergent errors in the estimates of local newtonian forces as the grid-spacing shrinks to zero — which is why it is not used.

  42. @Laurence Cox
    Indeed, Dr. McCulloch claims that his MiHsC (or QI, as he calls it) predicts higher rotational speeds at higher redshift.
    However, the more I think about his theory, the more astounded I am that none of his critics have highlighted these basic facts. To me, these are high-school / college-freshman-level issues, and I’m really puzzled as to how and why PhDs and researchers haven’t brought them up.
    So – the first issue – in QI, how do you calculate the acceleration felt by a test particle in the gravitational field of an extended object that cannot be reduced to a point mass?
    Normally, you start by decomposing the extended object into elementary units of a certain mass / volume, then compute the acceleration exerted by each unit and sum (as vectors) the effects. Basically, you end up with Riemann sums and, in the limit where the unit element goes to zero, with a Riemann integral.
    But as the unit element gets smaller, the acceleration it induces approaches the minimum QI acceleration, and at some point the acceleration computed using Newtonian dynamics will become smaller than that minimum. What do you do next? Do you limit the acceleration to the minimum? But in that case, as the unit elements are reduced even further, all the way towards zero, you end up with a higher and higher total acceleration. In the limit case (i.e. for the integral) you end up with an infinite acceleration, because you have an infinity of elementary units (say dV or dm → 0) and each would exert a finite acceleration. So – how do you solve it?
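    To make this concrete, here is a toy numerical version of the argument (my own sketch; the floor of ~7×10^-10 m/s^2 is the 2c²/Θ value QI quotes, and I treat the chunks as co-located so their pulls simply add):

```python
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
A_MIN = 7e-10   # m/s^2; assumed QI floor, roughly 2c^2 / (Hubble diameter)

def floored_total(M, d, n):
    """Split a mass M (kg) at distance d (m) into n equal chunks and sum
    the per-chunk Newtonian pulls, flooring each chunk at A_MIN as the
    reading of QI questioned above would require."""
    per_chunk = G * (M / n) / d**2
    return n * max(per_chunk, A_MIN)

# One solar mass at 1e13 m (~67 AU): Newton gives ~1.3e-6 m/s^2 regardless
# of how the mass is subdivided, but the floored sum grows without bound:
for n in (1, 10**4, 10**6, 10**9):
    print(f"n = {n:>10}: {floored_total(2e30, 1e13, n):.3g} m/s^2")
```

    Once the per-chunk pull drops below the floor, the “total” scales linearly with the number of chunks and diverges in the continuum limit – exactly the problem posed above.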
    The next issue – imagine two massive objects separated by a certain distance (this is basically from Dr. McCulloch’s presentations). You get the Rindler horizons / Hubble distance, Unruh radiation, and so on. Now imagine those two objects as non-charged particles – say two neutrons or H atoms in a basically empty region of space. I excluded charged particles to have gravity as the dominant force. Now compute the acceleration felt by the two particles and you’ll notice that it goes below the minimum acceleration for quite small separations. (In fact, given the masses involved and the weakness of the gravitational constant, for H atoms I’d say the acceleration is smaller than the minimum even if you bring the atoms directly into contact.) Again – what do you do in this case? Do you limit the acceleration to that minimum value?
    But then consider that every mole of substance contains on the order of 10^23 particles. Given that the minimum acceleration is of the order of 10^-10 m/s^2, if you have a single mole of substance (grouped in a relatively small space) light-years away from you, you’d feel an acceleration on the order of 10^14 m/s^2 (no mistake in the order of magnitude – you also multiply the 6 from Avogadro’s number by the 6–7 from the minimum acceleration). That’s on the order of 10 trillion times g. This is true even if that mole of substance is at the edge of the observable universe!
    Now imagine the number of particles in the universe, and couple this with the fact that a young universe had a smaller Hubble radius, and therefore a higher minimum acceleration. The further you go back in time, the higher the acceleration would have been. Right after the Big Bang, the minimum acceleration would have been equal to, or only slightly less than, the maximum acceleration allowed by QI. In that situation, do you think we would have the universe as we see it today? If QI were valid, I’d speculate that the universe would have collapsed into a black hole immediately after the Big Bang.
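
    And here is the corresponding order-of-magnitude check for the mole argument and the early-universe scaling (my own sketch, round numbers throughout; I take QI’s minimum acceleration to be roughly 2c²/Θ with Θ the Hubble diameter):

```python
C = 3.0e8               # speed of light, m/s
THETA = 2.6e26          # m; rough Hubble diameter today (assumed)
N_AVOGADRO = 6.022e23   # particles per mole
G_EARTH = 9.8           # m/s^2

a_min_today = 2 * C**2 / THETA            # ~7e-10 m/s^2
print(N_AVOGADRO * a_min_today / G_EARTH) # ~4e13 g, at any distance
# A young universe with a horizon 10^6 times smaller would have had a
# per-particle floor a million times higher:
print(2 * C**2 / (THETA / 1e6))           # ~7e-4 m/s^2
```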

  43. @apass
    Just because we call it Modified Newtonian Dynamics (MoND) or Quantised Inertia (QI) doesn’t mean that we are replacing General Relativity with Newton’s theory of gravitation; a minimum acceleration might just be the observed effect of a minimum curvature of space-time. General Relativity assumes that space-time far away from any masses is flat, but all tests of it have been at curvatures far above the minimum curvature needed for MoND. As Stacy explains above, we calculate the motion of a test particle from the gravitational potential it moves in, not from the gravitational attraction between individual particles.
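
    In the simple algebraic form of MOND, for instance, the relation is between the total fields rather than between pairs of particles; schematically,

    \[
    \mu\!\left(\frac{g}{a_0}\right) g = g_N ,
    \]

    where g_N is the Newtonian field of the entire baryon distribution and a_0 ≈ 1.2×10^-10 m/s^2. No per-particle floor enters anywhere.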

  44. Indeed. There is an important, more general point here. Nobody that I know is talking seriously about abandoning General Relativity. I certainly am not: one obviously has to retain the successes of GR, so far as they go. These manifestly do not extend to situations in which we have to invoke a tooth fairy like dark matter.
    Ideally, we would construct an extended theory of gravity that incorporates both GR and MOND, just as GR incorporated Newtonian gravity in a more general theory. TeVeS is an example of an attempt at such a theory. That TeVeS has now been excluded (along with many other ideas Bekenstein proposed before that one) does not mean it is impossible, simply that we aren’t there yet.
    Nevertheless, every time MOND or any such theory is discussed, the misconception arises – often among practicing scientists – that such discussion means “abandoning” GR. That’s just silly. I despair of making progress so long as this absurd misconception persists.
    In contrast, I see no hand-wringing about dark matter meaning that we are abandoning the standard model of particle physics. Because of course we’re not – we are hoping that there is new [particle] physics there beyond the standard model. But we’re not talking about throwing away the standard model as it exists. Rather, one hopes for a deeper insight that explains why the standard model is the way it is, and also provides a home for new particles.
    So it is for gravity as well. The situation is completely symmetric. But for reasons that have everything to do with history and sociology, and nothing to do with science, we find ourselves at a peculiar time when we hope very much to see evidence for new physics beyond the standard model of particle physics but deride the possibility of new physics beyond the standard theory of gravity. This, despite simultaneously presuming that there must be a quantum theory of gravity beyond the same theory of gravity that we mustn’t abandon.
    I understand, all too well, how hard it is to abandon the notion of dark matter – because that is really what we’re talking about abandoning. I grew up with cold dark matter; I understand the reasons that motivate it as well as anyone. In many ways I remain more comfortable with it as a hypothesis. But guess what? The universe doesn’t care about what we’re comfortable with. It is our job as scientists to figure out how the universe works, not insist that it conform to our favorite idea of how it ought to work. That is the path to religion – faith in the unseen – not science.

  45. Dr. McGaugh– If you are referring to the GW170817 observations when you write that “TeVeS has now been excluded”, it is possible that TeVeS’s death has been exaggerated.

    The several analyses of the GW170817 event appear to have treated TeVeS as a “Bimetric Theory” in which gravitational and electromagnetic waves have independent “light-cones.” However, both the “scalar” and the “second metric” in TeVeS are in fact just “nondynamical” auxiliary fields, and both of them can be algebraically eliminated everywhere in favor of just the “physical metric” and the vector field, to yield a purely “Tensor-Vector” theory that is in the same family as a generalization of Jacobson’s “Einstein-Aether” theory — see e.g. Zlosnik, Ferreira & Starkman, arXiv:gr-qc/0606039. (Moreover, in the algebraically reduced theory the vector field couples only to the physical metric; it does not couple directly to matter. The vector field therefore effectively becomes a “Dark Fluid” – albeit one with a rather peculiar equation of state – that can only interact with itself and gravity; it would not show up in a particle-physics “Dark Matter” experiment.)
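
    (For reference, and quoting Bekenstein’s 2004 construction from memory, the algebraic relation between the “physical” and “Einstein” metrics is

    \[
    \tilde{g}_{\mu\nu} = e^{-2\phi}\, g_{\mu\nu} - 2\, A_\mu A_\nu \sinh(2\phi) ,
    \]

    which involves no derivatives of φ or A_μ; that is exactly why the “second metric” carries no independent dynamics and can be algebraically eliminated.)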

    Therefore, since there is in fact only one metric in TeVeS, there should be only one “light cone” in TeVeS, and it should govern causality for both gravity and electromagnetism (as well as every other force) — and hence gravitational and electromagnetic waves should both arrive at the same time, so that the GW170817 observations rule out neither TeVeS nor Jacobson’s “Einstein-Aether” theories.

    (There are, of course, more complicated “true bimetric” theories in which the second metric is dynamical rather than being just an algebraic “auxiliary field” — but TeVeS is not one of them.)

  46. @gdp,

    “Note that even Zwicky only thought that his “viscosity” hypothesis was relevant inside the dense cores of clusters and galaxies, not the sparse outer fringes of galaxies — see Zwicky’s Fig. 1 — and therefore even if his hypothesis _had_ been correct, it would have had no relevance to the sparse outer fringes of galaxies or low surface-brightness galaxies where the observed “mass discrepancy,” “flat” rotation curve, and “radial acceleration relation” kick in.”

    Well, yes and no. Zwicky did think the viscosity concept applies beyond the “dense core”:

    “…the internal viscosity will not vanish abruptly at r = r(0) but will disappear gradually with increasing r.”

    He did dismiss its relevance to the low acceleration regime, however.

    My point regarding the viscosity concept is not to suggest that Zwicky’s rather informal discussion is in itself adequate, but rather that the concept needs to be reconsidered in light of our current knowledge of the nature of galactic structures, which is certainly more nuanced and deeper than what was understood in 1937 or 1941.

    Which brings me to Chandrasekhar’s paper. I have not had time to reread it, but my original impression was that the math was derived with globular clusters in mind and then rather casually applied to galactic systems. But galactic systems are not similar to globular clusters, in the same way they are not similar to stellar systems – both the scale and the structure are significantly different. In that regard, the concept of collisional vs. collisionless conditions as applied to galactic cores also seems a term of art employed for the sake of mathematical convenience rather than a rigorously defined state.

  47. @budrap– On the contrary, the terms “collisional” versus “collisionless” have a well-defined meaning to anyone doing many-body theory. One need only google the phrases “collisional system” and “collisionless system” to find thousands of discussions.

    In a “collisional” system, as previously stated, the mean-free-path of a body between two-body encounters is small compared to the scale of the system, or equivalently the mean time between two-body encounters for a body is short compared to the “crossing time” (for particle-in-a-box type systems) or “orbital period” for “potential-well”-like systems. In such systems, each of the subcomponent bodies executes a “diffusive” motion rather than an “orbital” motion, and given sufficient time, each body fully explores the phase-space available to it given the available energy.

    Conversely, in a “collisionless” system, the mean-free-path of a body between two-body encounters is large compared to the scale of the system, or equivalently the mean time between two-body encounters for a body is long compared to the “crossing time” or “orbital period”, so that each body crosses the system many times, or equivalently makes many orbits, before a two-body encounter perturbs it onto a new orbit. Therefore, by the KAM theorem, the trajectory of each body in a “collisionless” system that is undergoing a “regular” motion (as opposed to a “resonant” or a “chaotic” motion) will remain bounded by a “KAM torus” for a long period of time, before jumping to a new KAM torus after a two-body encounter. For the KAM theorem, see https://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Arnold%E2%80%93Moser_theorem

    Chandrasekhar’s paper was written quite abstractly, and did not assume any particular type of star-system:
    http://adsabs.harvard.edu/abs/1943ApJ....97..255C

  48. P.S.– Chandrasekhar himself makes the following comment on his deliberate choice to use the new term “dynamical friction” rather than “viscosity” to describe the result of his analysis:

    To avoid misunderstandings we shall make some remarks (which are otherwise obvious) concerning the reasons for introducing the new notion of dynamical friction and avoiding the usage of the term “viscosity.” First, the physical ideas underlying the concepts of dynamical friction and viscosity are quite distinct: thus, while the “coefficient of dynamical friction” refers to the systematic deceleration which individual stars experience during their motion, “viscosity,” as commonly understood, refers to the shearing force exerted by one element of gas on another. Second, dynamical friction is an exact notion expressing the systematic decelerating effect of the fluctuating field of force acting on a star in motion, in contrast to viscosity, which, as a concept, is valid only when averaged over times which are long compared to the time of relaxation of the system and over spatial dimensions which are large compared to the mean free paths of the individual molecules. Thus, while the introduction of dynamical friction in stellar dynamics presents no difficulty, the circumstances are very different for a rational introduction of “viscosity” in the subject (cf. Stellar Dynamics, pp. 76-78 and 184).
