Continuing from last time, let’s compare recent rotation curve determinations from Gaia DR3:

Fig. 1 from Jiao et al. comparing three different realizations of the Galactic rotation curve from Gaia DR3. The vertical lines* mark the range of the Ou et al. data considered by Chan & Chung Law (2023).

These are different analyses of the same dataset. The Gaia data release is immense, with billions of stars. There are gazillions of ways to parse these data. So it is reasonable to have multiple realizations, and we shouldn’t expect them to necessarily agree perfectly: do we look exclusively at K giants? A stars? Only stars with proper motion and/or parallax data more accurate than some limit? etc. Of course we want to understand any differences, but that’s not going to happen here.

My first observation is that the various analyses are broadly consistent. They all show a steady decline over a large range of radii. Nothing shocking there; it is fairly typical for bright, compact galaxies like the Milky Way to have somewhat declining rotation curves. The issue here, of course, is how much, and what does it mean?

Looking more closely, not all of the data agree with each other, or even with themselves. There are offsets between the three at radii around the sun (we live just outside R = 8 kpc), where you’d naively think they would agree best. They’re very consistent over 13 < R < 17 kpc, then they start to diverge a little. The Ou data have a curious uptick right around R = 17 kpc, which I wouldn’t put much stock in; weird kinks like that sometimes happen in astronomical data. But such a sharp feature can’t be consistent with a continuous mass distribution, and it will come up again for other reasons.

As an astronomer, I’m happy with the level of agreement I see here. It is not perfect, in the sense that there are some points from one data set whose error bars do not overlap with those of other data sets in places. That’s normal in astronomy, and one of the reasons that we can never entirely trust the stated uncertainties. Jiao et al. make a thorough and yet still incomplete assessment of the systematic uncertainties, winding up with larger error bars on the Wang et al. realization of the data.

For example, one – just one – of the issues we have to contend with is the distance to each star in the sample. Distances to individual objects are hard, and subject to systematic uncertainties. The reason to choose A stars or K giants is that you think you know their luminosity, so you can estimate their distance. That works, but the calibrations aren’t necessarily consistent (let alone correct) among the different groups. That by itself could be the source of the modest differences we see between data sets.

Chan & Chung Law use the Ou et al. realization of the data to make some strong claims. One is that the gradient of the rotation curve is -5 km/s/kpc, and this excludes MOND at high confidence. Here is their plot.

You will notice that, as they say, these are the data of Ou et al., identical to the corresponding points in the plot from Jiao et al. above – provided you only look in the range between the lines, 17 < R < 23 kpc. This is where the kink at R = 17 kpc comes in. They appear to have truncated the data right where it needs to be truncated to ignore the point with a noticeably lower velocity, which would surely affect the determination of the slope and reduce its confidence level. They also exclude the point with a really big error bar that nominally is within their radial range. That’s OK, as it has little significance: its large error bar means it contributes little to the constraint. That is not the case for the datum just inside of R = 17 kpc, or the rest of the data at smaller radii for that matter. These have a manifestly shallower slope. Looking at the line boundaries added to Jiao’s plot, it appears that they selected the range of the data with the steepest gradient. This is called cherry-picking.
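The effect of fitting only the steepest stretch is easy to demonstrate with synthetic data. The sketch below uses made-up numbers of my choosing (not the actual Gaia measurements): a gently declining curve with one steeper stretch, fit with a straight line over the full range and over the restricted range only.

```python
import numpy as np

# Synthetic illustration (NOT the real Gaia data): a rotation curve
# that declines gently overall but has one localized steep stretch.
rng = np.random.default_rng(0)
R = np.arange(8.0, 24.0, 1.0)            # radii in kpc
v = 220.0 - 1.5 * (R - 8.0)              # gentle global decline, km/s
steep = (R >= 17) & (R <= 23)
v[steep] -= 3.5 * (R[steep] - 17.0)      # extra-steep stretch at large R
v += rng.normal(0.0, 2.0, size=R.size)   # measurement noise

# Linear-fit slope over the full range vs. only the steep window.
slope_all = np.polyfit(R, v, 1)[0]
slope_cut = np.polyfit(R[steep], v[steep], 1)[0]
print(f"full range: {slope_all:.1f} km/s/kpc, restricted: {slope_cut:.1f} km/s/kpc")
```

The restricted fit recovers the steep local slope while the full-range fit comes out much shallower: the choice of window largely determines the answer.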

It is a strange form of cherry-picking, as there is no physical reason to expect a linear fit to be appropriate. A Keplerian downturn has the velocity declining as the inverse square root of radius (see the dotted line above). These data, over this limited range, may be consistent with a Keplerian downturn, but they certainly do not establish that it is required.
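For scale, note what a Keplerian decline implies locally. Since v(R) = √(GM/R), the gradient is dv/dR = −v/(2R); with illustrative round numbers of my choosing (v ≈ 200 km/s near R ≈ 20 kpc, an assumption, not a fit to the data), that gives −5 km/s/kpc. And over a window as narrow as 17 < R < 23 kpc, a Keplerian curve is practically a straight line anyway:

```python
import numpy as np

# A Keplerian curve has v(R) = sqrt(G*M/R), so its local gradient is
# dv/dR = -v/(2R).  With assumed outer-disk numbers (illustrative only),
# v ~ 200 km/s at R ~ 20 kpc gives a slope of -5 km/s/kpc.
v0, R0 = 200.0, 20.0                # km/s, kpc (assumed values)
slope = -v0 / (2.0 * R0)            # km/s/kpc

# Over 17 < R < 23 kpc, a Keplerian curve barely deviates from a line:
R = np.linspace(17.0, 23.0, 50)
v_kep = v0 * np.sqrt(R0 / R)
coeffs = np.polyfit(R, v_kep, 1)
max_resid = np.max(np.abs(v_kep - np.polyval(coeffs, R)))
print(f"slope = {slope:.1f} km/s/kpc, max line-vs-Kepler residual = {max_resid:.2f} km/s")
```

The maximum deviation from a straight line over that window is about a km/s, smaller than the error bars, which is exactly why these data cannot distinguish a linear decline from a Keplerian one, let alone establish that either is required.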

Contrast the statements of Chan & Chung Law with the more measured statement from the paper where the data analysis is actually performed:

… a low mass for the Galaxy is driven by the functional forms tested, given that it probes beyond our measurements. It is found to be in tension with mass measurements from globular clusters, dwarf satellites, and streams.

Ou et al. (2023)

What this means is that the data do not go far enough out to measure the total mass. The low mass that is inferred from the data is a result of fitting some specific choice of halo form to it. They note that the result disagrees with other data, as I discussed last time.
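Ou et al.’s point about functional forms is easy to illustrate. Two toy halo models can agree perfectly at the last measured radius yet imply wildly different total masses once extrapolated beyond the data. The numbers below are round illustrative values of my choosing, not their fits:

```python
# Two toy halo forms tuned to the same v = 180 km/s at R = 25 kpc
# (assumed, illustrative numbers): Keplerian (all mass already enclosed)
# vs. isothermal (flat rotation curve, M(R) grows linearly with R).
# They agree where the data exist, but diverge when extrapolated.
G = 4.301e-6                         # kpc (km/s)^2 / Msun
v, R_data, R_far = 180.0, 25.0, 200.0
M_kepler = v**2 * R_data / G         # enclosed mass; constant beyond R_data
M_flat = v**2 * R_far / G            # isothermal mass out to R_far
print(f"Keplerian: {M_kepler:.2e} Msun, isothermal at 200 kpc: {M_flat:.2e} Msun")
print(f"ratio of implied masses: {M_flat / M_kepler:.0f}")
```

Same data, same last measured point, but an eightfold difference in implied mass: the answer is set by the functional form assumed beyond the data, which is just what Ou et al. caution about.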

Rather than cherry-pick the data, we should look at all of it. Let’s see, I’ve done that before. We looked at the Wang et al. (2023) data via Jiao et al. previously, and just discussed the Ou et al. data. That leaves the new Zhou et al. data, so let’s look at those:

Milky Way rotation curve with RAR model (blue line from 2018) and the Gaia DR3 data as realized by Zhou et al. (2023: purple triangles). The dashed line shows the number of stars (right axis) informing each datum.

These data were the last of the current crop that I looked at. They look… pretty good in comparison with the pre-existing RAR model. Not exactly the falsification I had been led to expect.

So – the three different realizations of the Gaia DR3 data are largely consistent, yet one is being portrayed as a falsification of MOND while another is in good agreement with its prediction.

This is why you have to take astronomical error bars with a grain of salt. Three different groups are using data from the same source to obtain very nearly the same result. It isn’t quite the same result, as some of the data disagree at the formal limits of their uncertainty. No big deal – that’s what happens in astronomy. The number of stars per bin helps illustrate one reason why: we go from thousands of stars per bin near the sun to tens of stars in wider bins at R > 20 kpc. That’s not necessarily problematic, but it is emblematic of what we’re dealing with: great gobs of data up close, but only scarce scratches of it far away where systematic effects are more pernicious.
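The star counts matter because the purely statistical error on a mean velocity shrinks as 1/√N. A quick back-of-the-envelope, assuming a per-star velocity scatter of ~30 km/s (my number, purely illustrative), shows the roughly tenfold difference between inner and outer bins, before any systematics enter:

```python
import math

# Statistical error on a mean scales as sigma / sqrt(N).  With an
# assumed ~30 km/s per-star scatter (illustrative), thousands of stars
# per bin near the sun vs. tens per bin at R > 20 kpc means roughly a
# tenfold difference in the purely statistical error bar.
sigma = 30.0                           # km/s, assumed per-star scatter
err_inner = sigma / math.sqrt(3000)    # inner bin, ~3000 stars
err_outer = sigma / math.sqrt(30)      # outer bin, ~30 stars
print(f"inner bin: {err_inner:.1f} km/s, outer bin: {err_outer:.1f} km/s")
```

And that tenfold penalty is only the statistical part; the systematic effects that dominate at large radii do not average down with N at all.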

In the meantime, one realization of these data is being portrayed as a death knell for a theory that successfully predicts another realization of the same data. Well, which is it?


*Thanks to Moti Milgrom for pointing out the restricted range of radii considered by Chan & Chung Law and adding the vertical lines to this figure.

7 thoughts on “Recent Developments Concerning the Gravitational Potential of the Milky Way. II. A Closer Look at the Data”

  1. Thank you very much. I have learned a lot. I find especially the presentation of the number of stars per averaging bin very informative. And so I can continue to work in peace on a model for our space….

  2. In the finance world, we call this sort of analysis “chart crime”. They clearly zoomed in on a section of the chart that supported their preconceived notions and/or would generate clicks. Their result is laughable if you look at the overall data set.

    1. When funding, promotion, or even employment depend on following mainstream ideas and the official narrative, this is unavoidable. The system rewards “team players,” not objectivity.

  3. I’m really surprised by how bad the paper is.
    But as Indranil put it… I too really suspect that career interests are better served by publishing a bad result.
    And I’d say this is a problem in academia, which looks first at the number of citations to assess the quality of a work. Spectacularly wrong results (as Indranil put it) attract more citations.

  4. The sociology is such that people are afraid to say nice things about MOND regardless of career status. They’re not wrong, given my own experience.

    There also seem to be some people who drag it just to seek attention. It’s easy to take cheap shots at because most everyone will agree with any negative assessment without bothering to fact-check. Confirmation bias works wonders that way.
