The MOND at 40 conference

I’m back from the meeting in St. Andrews, and am mostly recovered from the jet lag and the hiking (it was hot and sunny, we did not pack for that!) and the driving on single-track roads like Mr. Toad. The A835 north from Ullapool provides some spectacular mountain views, but the A837 through Rosehall is more perilous carnival attraction than well-planned means of conveyance.

As expected, the most contentious issue was that of wide binaries. The divide was stark: there were two talks finding nary a hint of MONDian signal, just old Newton, and two talks claiming a clear MONDian signal. Nothing was resolved in the sense of one side convincing the other it was right, but there was progress in terms of [mostly] amicable discussion, with some sensible suggestions for how to proceed. One suggestion was that a neutral party should provide all the groups with several sets of mock data, one Newtonian, one MONDian, and one something else, to see if they all recovered the right answers. That’s a good test in principle, but it is a hassle to do in practice, as it is highly nontrivial to produce realistic mock Gaia data, so no one was leaping at the opportunity to stick their hand in this particular bear trap.
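
To make the suggestion concrete, a toy version of such a blind test might look like the sketch below. Everything here is made up for illustration; realistic mock Gaia data would also need the full error model, selection function, and contamination, which is exactly why nobody volunteered:

```python
import numpy as np

rng = np.random.default_rng(42)

def mock_relative_velocities(n, boost=1.0):
    """Toy mock catalog of sky-projected relative velocities for wide binaries.

    boost=1.0 is Newtonian; boost ~ 1.1-1.2 mimics the few-tens-of-percent
    velocity enhancement expected deep in the MOND regime. All numbers are
    illustrative, not calibrated to any real analysis.
    """
    sep_au = rng.uniform(2000, 20000, n)     # projected separations in AU
    v_newt = 30.0 / np.sqrt(sep_au)          # roughly Keplerian scaling, km/s
    phase = rng.uniform(0.3, 1.0, n)         # orbital phase / projection factor
    noise = rng.normal(0.0, 0.05, n)         # measurement error, km/s
    return sep_au, boost * v_newt * phase + noise

sep_n, dv_newton = mock_relative_velocities(5000, boost=1.00)  # "Newtonian" set
sep_m, dv_mond = mock_relative_velocities(5000, boost=1.15)    # "MONDian" set
sep_x, dv_other = mock_relative_velocities(5000, boost=1.30)   # "something else"
# Hand the unlabeled sets to the competing pipelines and see which
# gravity law each analysis recovers.
```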

Xavier Hernandez made the excellent point that one should check that one’s method recovers Newtonian behavior for close binaries before making any claims to require or exclude such behavior for wide binaries. Neither MOND nor dark matter predicts any deviation from Newtonian behavior where stars orbit each other at accelerations well in excess of a0, of which there are copious examples, so these provide a touchstone on which all should agree. He also convinced me that it is a Good Idea to have radial velocities as well as proper motions. This limits the sample size, but it helps immensely to ensure that sample binaries are indeed bound pairs. Doing this, he finds MOND-like behavior.
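
For concreteness, the sort of sanity cut that radial velocities enable might look like this minimal sketch; the factor-of-two threshold on the circular velocity is a generic rule of thumb I am assuming for illustration, not the specific criterion used in any of these analyses:

```python
import numpy as np

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
AU = 1.496e11      # astronomical unit, m

def plausibly_bound(m1_msun, m2_msun, sep_au, dv_kms, factor=2.0):
    """Crude boundness check for a candidate binary.

    Compares the 3D velocity difference (proper motions plus radial
    velocities) to the circular velocity at the projected separation.
    Bound pairs should have dv of order v_circ; chance alignments
    typically exceed it by a large factor.
    """
    v_circ = np.sqrt(G * (m1_msun + m2_msun) * M_SUN / (sep_au * AU))  # m/s
    return dv_kms * 1e3 < factor * v_circ

# Two solar-mass stars at 7000 AU have v_circ ~ 0.5 km/s:
print(plausibly_bound(1.0, 1.0, 7000, 0.3))  # True: consistent with a bound pair
print(plausibly_bound(1.0, 1.0, 7000, 2.0))  # False: likely a chance alignment
```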

Previously, I linked to a talk by Indranil Banik, who found Newtonian behavior. This led to an exchange with Kyu-Hyun Chae, who has now posted an update to his own analysis in which he finds MONDian behavior. It is a clear signal, and if correct, could be the smoking gun for MOND. It wouldn’t be the first one; that honor probably goes to NGC 1560, and there have been plenty of other smoking guns since then. The trick seems to be finding something that cannot be explained with dark matter, and this could play that role, since dark matter shouldn’t be relevant to binary stars. But dark matter is pretty much the ultimate Rube Goldberg machine of science, so we’ll see what explanation people come up with, should they need to do so.

At present, the facts of the matter are still in dispute, so that’s the first thing to get straight.


Thanks to everyone I met at the conference who told me how useful this blog is. That’s good to know. Communication is inefficient at best, counterproductive at worst, and most often practically nonexistent. So it is good to hear that this does some small good.

53 thoughts on “The MOND at 40 conference”

  1. Am slowly reading Kyu-Hyun Chae’s updated paper on wide binaries, linked by Stacy, which was published just a few days ago on 22 June. I’m only up to page 3, but was intrigued by this sentence in the next to last paragraph on page 2, that refers to a selected sample of wide binaries, where “PM” stands for Proper Motion: “It is found that these accurate PMs reveal an immovable anomaly of gravity in favor of MOND-based modified gravity.” The paper is 37 pages long, so will take me some time to read and comprehend. But with gorgeous Sunday weather, a bike ride is in order first.

    1. Bike rides should always come first when the weather is gorgeous.

      There are a series of figures, starting with Fig. 21, that I appreciate for clearly showing the effect as an offset from the purely Newtonian expectation. Whether it is correct is a separate matter, but I can clearly see the signal.

  2. Anyone could come and read this:

    arXiv:2306.13026 [physics.hist-ph] (submitted 22 Jun 2023)
    Methodological Reflections on the MOND/Dark Matter Debate
    Patrick M. Duerr (Hebrew University and Oxford University), William J. Wolf (Oxford University)

    The paper re-examines the principal methodological questions, arising in the debate over the cosmological standard model’s postulate of Dark Matter vs. rivalling proposals that modify standard (Newtonian and general-relativistic) gravitational theory, the so-called Modified Newtonian Dynamics (MOND) and its subsequent extensions. What to make of such seemingly radical challenges of cosmological orthodoxy? In the first part of our paper, we assess MONDian theories through the lens of key ideas of major 20th century philosophers of science (Popper, Kuhn, Lakatos, and Laudan), thereby rectifying widespread misconceptions and misapplications of these ideas common in the pertinent MOND-related literature. None of these classical methodological frameworks, which render precise and systematise the more intuitive judgements prevalent in the scientific community, yields a favourable verdict on MOND and its successors — contrary to claims in the MOND-related literature by some of these theories’ advocates; the respective theory appraisals are largely damning. Drawing on these insights, the paper’s second part zooms in on the most common complaint about MONDian theories, their ad-hocness. We demonstrate how the recent coherentist model of ad-hocness captures, and fleshes out, the underlying — but too often insufficiently articulated — hunches underlying this critique. MONDian theories indeed come out as severely ad hoc: they do not cohere well with either theoretical or empirical-factual background knowledge. In fact, as our complementary comparison with the cosmological standard model’s Dark Matter postulate shows, with respect to ad-hocness, MONDian theories fare worse than the cosmological standard model.

    Comments: forthcoming in Studies in History and Philosophy of Science

  3. I expected someone to bring this up. I have not read the whole thing, but the abstract alone flies in the face of my lived experience working on both dark matter and MOND. One might write a rebuttal if Merritt hadn’t already written an entire book doing so. The complaint about ad-hocness is especially rich, as if it is not ad hoc to invoke invisible mass from an unobserved dark sector as an auxiliary hypothesis to save FLRW cosmology, when there is zero evidentiary support for such stuff outside the astronomical evidence that I am an expert on and these authors are not. I’d be inclined to dismiss this paper as a philosophical exercise in asserting that up is down, left is right, and black is white, but that seems too generous in this day and age, when really it just looks like a form of trolling: an attempt to prompt a reaction by begging for attention they don’t deserve.

    1. okay
      One argument they use is that dark matter explains everything (galactic rotation curves, gravitational lensing, the CMB, large scale structure) with just plain GR plus unknown dark matter, whereas MOND explains only galactic rotation curves and not the other observations, and that changing GR to fit rotation curves alone, without explaining the other observations, is “ad hoc”.

      1. But that’s not true. Not remotely. It was the first thing I checked in the mid-90s, and I spent a couple of years doing so. I’ve checked many times since then (http://astroweb.case.edu/ssm/mond/LCDMmondtesttable.html). They seem to be repeating a straw-man version of what they, and apparently many people, perceive MOND to be. So we’re not really talking about the same thing, and I can’t do more than I already have to enlighten them; see, e.g., https://tritonstation.com/2018/10/04/the-arrogance-of-ignorance/

        It is an oft-repeated falsehood that “MOND only explains rotation curves.” That’s not right, and I showed it wasn’t right a quarter century ago, and it is less right now than it was then. MOND does explain gravitational lensing (esp. the lensing extension of the RAR, which DM does not explain) and some aspects of large scale structure (empty voids, early galaxy formation) but not all (the power spectrum [some people seem to think that’s all there is to it]). In contrast, dark matter only “explains” these things in being an inference with arbitrary freedom to fit whatever the data do. It does not provide a satisfactory explanation for rotation curves and cannot predict them with the precision MOND does. If I thought it could, I would say so, and stop having this argument with every poser who happens along. I worked very hard to save DM in this regard. I couldn’t. They don’t have to agree with me about that interpretation, but they’re not even engaging with it. They’re just accepting that the DM interpretation is OK without apparently understanding what’s involved.

      2. See, anything can be explained when it’s possible to introduce, in an ad hoc way, anything. Which is what, and how, dark matter proponents do… But this leaves the question of the range of application. Different ranges seemingly require different ad hoc explanations. So…?

    2. Dear Stacy,
      I had 3 years of philosophy during my physics degree. The scientific work in this subject consisted mainly of quoting the 3 chief philosophers. The first year was okay. The 2nd year was acceptable. The 3rd year was really just plain nonsense. Unfortunately, we had the main exam at the end of the 3rd year. I barely got a 4: a 3 in philosophy overall, and a 2 as the overall grade on the diploma report card.
      Two things are worth noting:
      1. Two years later, all of it turned out to be a big nonsense.
      2. There were a lot of students at that time who got a 1 in this subject. How did they do it?

      1. The philosophy of science is important and can be useful, but it can also be overwrought and used to mislead. There seems to be an ethos in the field that everything that’s been said has to be challenged and contradicted, just for the sake of being contrary.

        1. Dr. McGaugh,

          Whether intended or not, the activity of analytic philosophers does see progress of sorts. It is impeded, however, because people demand that their beliefs be truths. I have been investigating the modal logic of knowledge recently because a commenter on this blog wrote that “knowledge is more important than understanding.”

          The classical definition of “knowledge” is “justified true belief.” It is criticized in terms of what are called “Gettier problems.” On this basis, it is difficult to see how people can make any knowledge claims at all.

          A related problem which affects many “verificationist paradigms” that are promoted with respect to the sciences is Fitch’s paradox of knowability. It suggests that the assumption that there exist unknown truths (non-omniscience) to be discovered is severely problematic. The argument collapses the assumption to the conclusion that all truths are known.

          Naturally, philosophers look for solutions through modification. One such modification is to introduce a relevance condition. This is discussed in Section 5.3 of the SEP entry on knowledge,

          https://plato.stanford.edu/entries/knowledge-analysis/#ReleAlte

          In so far as modern science can trace its influence to Galileo, the relevance condition is that of “measurement”:

          https://libquotes.com/galileo-galilei/quote/lbi0u0k

          Presumably, this is why “numbers” and “calculation” become central to physical reasoning.

          In mathematics, there had really been no “ontology of real numbers” until the late 19th century. It arose from efforts to make mathematics “more logical.” Before then, Berkeley maintained that the calculus always had small errors, and reasoning in analysis focused upon the linear component of expansions.

          Eventually, proofs accommodating this situation took the form of epsilon-delta arguments. But, what do such arguments “prove”?

          They prove that no interlocutor can produce a counterexample. They do not prove a “truth” in the sense of objects having properties.

          Because of measurement, science produces effective “stories.” It is a relevance condition on knowledge claims. But when people with beliefs demand that mathematics and science justify their beliefs, it is a demand that mathematics and science be a corpus of truths.

          Bad philosophy does a disservice to us all.

        2. I use Popper and Occam as guidelines.
          If a theory makes no predictions, it is useless.
          A good theory must make predictions and thus be falsifiable. (Popper)
          And it must not be too flexible.
          It must contain as few degrees of freedom as possible. (Occam’s razor)

          1. It was Occam that caused me concern about dark matter in the first place. I cannot describe all the efforts I made to save the idea before learning about MOND. These efforts all violated the rule of parsimony quite brutally. People don’t seem to grasp this, nor just how flexible dark matter models have become. I thought that the models I considered in the ’90s violated parsimony; they were simple compared to modern simulations. The latter still manage to fall into the broad categories I discussed (almost all modern models are “SH” with some having an element of “DD” mixed in under the new name “assembly bias”) and suffer the same failings. The only difference is that I recognized them as failings where most workers do not. Maybe I am wrong to consider all the fine tuning and ad hoc parameter adjustment to be failings (some feedback parameters are black boxes full of additional free parameters), but no one is engaging with that; they’re just pretending that it is OK to have arbitrarily flexible theories with an arbitrary number of degrees of freedom. When I challenge people about this, they usually say something like “galaxies are complicated,” which should be true in LCDM but isn’t in nature: galaxies are so simple that they obey a single effective force law.

            So, yeah, if someone tells you dark matter is simpler than MOND, it only informs you that that person does not understand what is involved in dark matter.

        3. Yes indeed. And some people do it for the exact, and only, reason of being contentious, without explaining – precisely – what their objections are to whatever their contentions refer to. Which, really, is nonsense.

  4. Your post mentions that dark matter shouldn’t be relevant to binaries. Can you elaborate on whether that holds for all DM candidates, including axions?

    1. I hesitate to say it holds for all DM candidates, because there are infinite possibilities. However, yes, it does apply to all I can think of offhand, which is a lot. The basic constraint is that the density of dark matter has to have a certain value locally, which is a number we know pretty well. If you integrate up this dark matter density within the solar system, out to 40 AU (the orbit of Pluto), it adds up to about one asteroid – practically nothing in the scheme of solar system dynamics, especially since it is [presumed to be] spread thin throughout. If we now talk about wider binaries of 1000s of AU, the integrated mass goes up, but it is still a meager amount that will not affect the binary stars. I don’t see why axions would be any different to WIMPs in that respect. If instead the dark matter were 40 solar mass primordial black holes, then you’d either have one or not. If one came by, it would disrupt the binary entirely, not imprint MONDian behavior. So yeah, the statement is pretty general.
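
      For anyone who wants to check the arithmetic, here is a minimal version of that estimate, assuming the standard local dark matter density of about 0.4 GeV/cm^3 (roughly 0.01 solar masses per cubic parsec):

      ```python
      import numpy as np

      AU = 1.496e11       # astronomical unit, m
      M_SUN = 1.989e30    # solar mass, kg

      # 0.4 GeV/cm^3 converted to kg/m^3 (1 GeV/c^2 = 1.783e-27 kg)
      rho_dm = 0.4 * 1.783e-27 / 1e-6

      def dm_mass_within(r_au):
          """Dark matter mass inside a sphere of radius r_au, uniform density."""
          volume = (4.0 / 3.0) * np.pi * (r_au * AU) ** 3
          return rho_dm * volume  # kg

      print(f"{dm_mass_within(40):.1e} kg within 40 AU")  # ~6e17 kg: one modest asteroid
      print(f"{dm_mass_within(5000) / M_SUN:.1e} Msun within 5000 AU")  # ~6e-7 Msun: negligible
      ```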

      1. That axions can manifest as a field (with an interaction that may depend on the gradient of the gravitational potential, and perhaps also on local EM fields) suggests to me a very different behavior from WIMPs.
        Whether axion-like particles must have a minimum local mass density to simulate DM effects is not really clear to me, but maybe that is true, as you suggest.

        1. This was part of my hesitation. For ideas like superfluid dark matter, where MOND-like behavior stems from a bosonic condensate, maybe binaries would be appropriately affected. I have no idea, really.

  5. Another possibility is to select the overlapping subsample (assuming such a thing exists) common to two papers that reach different conclusions, then redo the two analyses. This would reduce the statistical significance of the result, but could help decide whether the difference is due to the analysis or to contaminants. A sketch of such a cross-match follows below.
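
    In practice the cross-match would be straightforward once both groups publish their candidate lists. In this sketch the file names and column names are purely hypothetical stand-ins, assuming both catalogs record the Gaia source IDs of each component:

    ```python
    import pandas as pd

    # Hypothetical catalogs from the two camps; names are placeholders.
    newtonian = pd.read_csv("newtonian_camp_pairs.csv")  # columns: source_id_a, source_id_b, ...
    mondian = pd.read_csv("mondian_camp_pairs.csv")

    # The same physical pair should share both Gaia source IDs.
    overlap = newtonian.merge(mondian, on=["source_id_a", "source_id_b"],
                              suffixes=("_newt", "_mond"))
    print(f"{len(overlap)} pairs common to both samples")
    # Rerunning each pipeline on `overlap` would isolate whether the
    # disagreement comes from the analysis or from sample selection.
    ```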

    1. Sample selection is definitely a big issue. As best I can tell, the approach of the camp that finds a Newtonian result is to take a huge sample of binary candidates (to oversimplify, stars with the same proper motion within the errors that are far enough apart to be in the low acceleration regime) and model the bejeepers out of the whole population and the various relevant effects like triple systems and chance projections. The camp that finds a MONDian signal focuses on selecting a smaller, higher quality sample that is more certain to be composed of binaries of interest. The latter is bound to be a subset of the former, but is so different that the same analysis approach doesn’t really pertain. Normally, I’m all for a high quality sample, but one has to worry if it is skewed in some way. I don’t see how, but I don’t know enough to judge.

  6. Just following this interesting debate; I’m just a layperson, but what about Curry’s approach of a two-phase DM particle, which was adapted and brought forward by S. Hossenfelder? Why isn’t it part of the discussion?

    1. The discussion is about a specific test of Newtonian vs MOND behavior in wide binaries, not about the relative merits of DM models. Since this test most likely excludes DM (as Stacy explained in a comment above), the particular model you’re mentioning probably has nothing to say on the topic.

    2. I’m familiar with a large number of hypothesized dark matter candidates, but I’ve never heard of this one. There are just too many – a problem I pointed out a long time ago: if particle A (WIMPs) doesn’t work, let’s go with particle B (axions). If not B, then C, ad infinitum. I think the community needs to go through a phase where it tries out many new ideas, but we need to bear in mind that at most one of these can be right, and perhaps zero. I worry that the aftermath of the demise of the WIMP is a chaos of competing oddities that superficially all agree that there is dark matter without actually agreeing on what it is or what it needs to be or much of anything else. Cold dark matter already has this problem – most workers in the field will tell you that it is the leading candidate – but if you press each one about what they mean by that, there are a few answers in common followed by a whole bunch of disagreement.

      1. Thank you for your reply. You’re right, there are a lot of approaches; I’m afraid axions won’t work either. That’s why I find Hossenfelder’s work interesting, and it’s sad that it isn’t even discussed. She and her group published a paper about it: arXiv:2303.08560 [astro-ph.GA],
        Tobias Mistele, Stacy McGaugh, Sabine Hossenfelder: Superfluid dark matter in tension with weak gravitational lensing data.
        There’s also a video with her about that on the YouTube channel of the Royal Institution, titled “Is Dark Matter Real? – with Sabine Hossenfelder”, from about a year ago. All the best

        1. Right. I am Stacy McGaugh, a coauthor* of that paper. All we were saying there is that superfluid dark matter does not appear to work to explain weak lensing.

          You said Curry’s approach earlier – could you have meant Justin Khoury? I hope so, as I know well who Khoury is but have no idea what you’re talking about if it is Curry as you wrote. Khoury is one of the proponents of superfluid dark matter, and as I comment elsewhere in these comments, that might be OK with MONDian binary behavior as it is one of the hybrid models designed to yield MOND in the right limit. So yes, we should pay more attention to such ideas.

          *Funny that you should refer to us as “her group”. The order of priority of authorship differs in her field and mine. In mine, coming earlier in the list is better. In hers, being last is the mark of being the senior author. So this works out the best for both of us in each of our respective conventions. Which is all very silly, especially since Tobias did all the work.

          1. So I’m honored to talk to you as one of the authors. I apologize; as I said, I’m a layman who has been interested in this astrophysical stuff since early youth. ‘Justin Khoury’ is surely right; a translation fault, I guess, so I wrote it wrong. I only know about your paper from S. Hossenfelder’s video (she gave me the link in the comments) and two lectures she held, so I had a look at it; that is why I called it ‘her group’. She didn’t claim it as her work, though; my fault again.
            OK, I didn’t know that it’s called “MONDian binary behavior”; I’ve learned that now and understand better. It must be a particle of low mass, right, to switch from fluidity to superfluidity? Are axions still more than fantasy after so many years of nothing? I heard about a new experiment, ALPS II, but I’m sceptical. I understood your paper to say that the model doesn’t work well with the data on weak gravitational lensing, but it’s still part of the debate? Thanks again for your attention and information.

            1. Yes, superfluid dark matter has problems – as all options do – so it is still part of the discussion. It would indeed need to be very low mass. No sign of axions that could serve as dark matter, nor WIMPs, nor anything else so far in the laboratory.

  7. I can only comment as an astronomy-ignorant physicist, but if Pawlowski’s summary of the situation is accurate, I really can’t see any way in which Banik’s methodology is salvageable. If only binaries with massive separations are considered, then all possible MOND-like behavior is just calibrated right the hell out of the analysis by default, and you’ll obviously only see Newtonian behavior by design. To an outsider, it would almost look like one was purposefully trying to sabotage the analysis by specifically choosing the exact subset of data that would falsify MOND. If Banik doesn’t calibrate the data to a regime that’s safely Newtonian in both null and alternative hypotheses, then I don’t trust the analysis.

    Based on every account I’ve heard so far, it seems pretty obvious to me which way the wind is going to blow on this, given sufficient time. I must be missing something big, because I can’t see why there’s even serious debate at this point regarding these competing studies.

    1. I don’t think Banik was purposefully trying to sabotage the analysis, but it is certainly possible to have this effect unintentionally.

      I guess I’ve seen too many astronomical results go south (H0=50 for sure!), so I am reluctant to leap to conclusions. At least one set of analyses has to be wrong, but I don’t want to put my thumb on the scale, or even give the appearance of doing so.

  8. I also attended these talks, and agree with your overall analysis. I was most impressed by Hernandez’s very careful analysis, and nuanced conclusions. But for me, the most exciting talk of the conference was the one by Riccardo Scarpa, whose analysis of the JWST data on galaxies at redshift z>10 was literally jaw-dropping. He claims the data contradict the expansion of the universe, and therefore the Big Bang itself. Do you have any comment on that?

    1. Somewhat tangential, but FWIW Pengfei Li has an interesting paper out on “Distance Duality Test: The Evolution of Radio Source Mimics a Nonexpanding Universe.” From the summary: “…the size and luminosity density of ultracompact radio sources evolve in the way that precisely mimics a nonexpanding universe.”

    1. There is something odd about the angular size-distance relation, which looks pretty Euclidean in a number of tests. One of them is that bright, high-redshift galaxies appear to be very compact, much smaller than their modern day counterparts. If one asserts that these are standard rods, then that breaks cosmology. Presumably they aren’t standard rods, having evolved in some way. “Some” is doing a lot of work there, as the required evolution, to use technical jargon, is pretty dang weird.

      1. Does this oddity include the observation that the Hubble tension grew larger as CMB analyses were based on finer angular resolution (which I believe you have noted elsewhere on this blog – apologies if I am not stating it correctly)?

        1. Maybe, but probably not. The weird thing about the weird thing about the angular diameter distance is that it doesn’t seem to apply to the luminosity distance. The two are intimately related; one can’t be way off without dragging the other along with it. To be fair, strange luminosity distances gave us dark energy. But that’s only one degree of weird compared to ALL OF COSMOLOGY IS WRONG.
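
            The link is Etherington’s distance duality relation, which holds in any metric theory in which photons travel on null geodesics and are conserved:

            ```latex
            d_L(z) = (1+z)^2 \, d_A(z)
            ```

            So an anomaly in the angular diameter distance has to drag the luminosity distance along with it, unless something exotic breaks the duality.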

          1. This wouldn’t be the first time that all of cosmology was wrong. The dominant geocentric model of the universe, with circular orbits and epicycles, was proven wrong back in the day by the likes of Copernicus, Galileo, and Kepler as more data came in.

            The notion of an expanding universe, upon which our current cosmological framework rests, was developed in the mid 1920s based upon only one observation, cosmological redshift, long before any tests, like the Tolman surface brightness test or Li’s distance duality test, were developed to show whether the universe is actually expanding or not. It is only very recently that enough data have been accumulated to actually tease out whether the universe is expanding or not, and the results for the Tolman surface brightness test are very much consistent with a non-expanding universe:

            https://academic.oup.com/mnras/article/477/3/3185/4951333?login=false

        1. Seriously though, if complementarity is actually fundamental to our understanding, then anyone trying to understand the universe by fitting all observations into one complete and compatible description is in for an extreme headache!

  9. Shaun Hotchkiss interviews Jenny Wagner on abandoning the cosmological principle and the FLRW metric.

    1. I don’t normally watch these things (coals to Newcastle), but this is refreshing and interesting.

      The very first attempt to contemplate cosmology in MOND (Felten 1984, ApJ, 286, 3) pointed out that large scale anisotropies were likely to develop in a MOND universe, so that the cosmological principle (homogeneity & isotropy) would cease to hold by late times. By “late” I estimate z < 2, but that is very uncertain.

  10. Having read many times in various articles here on Tritonstation, and elsewhere, that MOND’s acceleration scale a0 ties in mathematically to cosmological parameters, I decided to work through the math just to see it for myself. One of a number of relations involves the speed of light, Hubble’s Constant (H0) and a0 (can’t do subscripts for the zeros). This relation, as mentioned by David Merritt (page 220) in his book “A Philosophical Approach to MOND”, was already noted by Milgrom in his first paper in 1983. The relationship is cH0/2 pi = a0. Quoting from Merritt’s book (bottom of page 220): “Milgrom and others noted an even more striking coincidence: that a0 is essentially identical to cH0/2 pi.”, where “identical” is italicized for emphasis.

    So for this relationship, and others (involving Lambda) mentioned in Merritt’s book, I first checked the dimensionality of the equations and they worked out after some thinking through. Then with a hand calculator I did the math, and sure enough it was astonishingly close, though I’m too lazy this hour in the morning to go through the calculation again to figure out what value of H0 I used. But in equation 8.13 (page 220) H0 is set at 75 km s^-1 Mpc^-1, resulting in a value for a0 of 1.16 x 10^-10 m/s^2. I mean like, wow, that’s amazingly close to the empirically determined best value (I think) for a0 of 1.20 x 10^-10 m/s^2.
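
    For anyone who wants to redo this without a hand calculator, a quick sketch of the same arithmetic, using the H0 = 75 km/s/Mpc of Merritt’s equation 8.13:

    ```python
    import numpy as np

    C = 2.998e8       # speed of light, m/s
    MPC = 3.086e22    # meters per megaparsec

    H0 = 75e3 / MPC             # 75 km/s/Mpc in s^-1
    a0 = C * H0 / (2 * np.pi)   # Milgrom's coincidence: a0 ~ cH0 / 2 pi

    print(f"a0 = {a0:.2e} m/s^2")  # ~1.16e-10, vs the empirical ~1.2e-10
    ```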

    1. cH0/2 pi = a0
      This is a conjecture.
      You will not find a derivation.
      A derivation of this formula would be a death blow for dark matter.
      By the way, the formula could also look like this:
      a0 = cH0/6 = cH0/3!
      or
      a0 = cH0/6.05278…
      But 2pi looks the most beautiful of course…

      1. I just dug out a paper, from my printed-out paper collection, where Milgrom discusses the cosmology connection to MOND. The paper’s title is “The a0 – cosmology connection in MOND”. I’ve only skimmed the paper, but in the abstract it shows a0 ≈ c^2 Lambda^(1/2). Dividing that product by the ubiquitous 2 pi also gets you pretty close to the value of a0 derived from astronomical data. The number I came up with was 1.2534331 x 10^-10 m/s^2. So between that value and the one shown in Merritt’s book for cH0/2 pi, or 1.16 x 10^-10 m/s^2, it pretty much straddles the empirically derived value for a0. I must read the entire paper to get a better grasp of all this. It’s just so interesting, like a detective novel!

  11. Yes, it sure would be a breakthrough if a physical mechanism was discovered to account for the “coincidence” between cosmological parameters and a0, especially using pi, which suggests a geometric explanation.

    1. Yes, it is striking that a0 ~ c*H0 ~ c^2*sqrt(Lambda). Even so, it strikes me as a bit of numerology, so I have always been dubious. Perhaps I feel too burned by buying into the WIMP miracle, in which the relic density of weakly interacting massive particles was “just right” to be the amount of dark matter needed. On examination, that doesn’t really hold up to even a random factor of 2*pi accuracy, but it contributed hugely to all of us hopping on the WIMP bandwagon.
      I think I was the one who first pointed out that 2*pi was very nearly the right fudge factor to make a0 and c*H0 effectively equal. And 2*pi is suggestive of something geometric. But it is still just numerology at this point; there is no derivation, nor any reason to think there will be one, just the observation that 2*pi is the sort of factor that might maybe kinda sorta pop out of a derivation we don’t have.
      One thing about the coincidence with c*H0 is that it implies a0 should evolve with H0, so as the universe expands we’d have a0(z) ~ H(z). However, there is no evidence that a0 evolves – so far it is consistent with being a constant, and yes, there are data that should be sensitive to this out to z > 1. So the potential connection to Lambda seems at present to be more compelling, or at least less unlikely.
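
      To see how much evolution that coincidence would imply, here is a rough sketch assuming a flat LCDM expansion history with illustrative parameters; the point is only the size of the predicted effect, not a fit to any data:

      ```python
      import numpy as np

      H0, OMEGA_M, OMEGA_L = 70.0, 0.3, 0.7  # illustrative flat-LCDM values

      def hubble(z):
          """Hubble rate H(z) in a flat LCDM background, km/s/Mpc."""
          return H0 * np.sqrt(OMEGA_M * (1 + z) ** 3 + OMEGA_L)

      for z in (0.5, 1.0, 2.0):
          print(f"z = {z}: a0 larger by {hubble(z) / H0:.2f}x if a0 ~ c H(z)")
      # z = 1 already implies a ~75% larger a0, which data sensitive out to
      # z > 1 should notice; so far they show no such evolution.
      ```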

  12. A group of astrophysicists have developed a method to measure the cosmological constant using the Milky Way and Andromeda in the Local Group:

    https://arxiv.org/abs/2306.14963

    Apparently there are already deviations from vanilla general relativity at the galaxy group scale when one adds dark energy to gravity.

    1. Yes, the interface between individual galaxies and where you have to start worrying about Lambda is interesting.

      I note that these authors have rediscovered that the MOND acceleration scale is relevant. Using their definition for T_L, a_L = c/T_L = 1.5E-10 m/s/s, which is the oft-noted coincidence with MOND’s a0 = 1.2E-10 m/s/s.
