Note: this is a guest post by David Merritt, following on from his paper on the philosophy of science as applied to aspects of modern cosmology.

Stacy kindly invited me to write a guest post, expanding on some of the arguments in my paper. I’ll start out by saying that I certainly don’t think of my paper as a final word on anything. I see it more like an opening argument — and I say this, because it’s my impression that the issues which it raises have not gotten nearly the attention they deserve from the philosophers of science. It is that community that I was hoping to reach, and that fact dictated much about the content and style of the paper. Of course, I’m delighted if astrophysicists find something interesting there too.

My paper is about epistemology, and in particular, whether the standard cosmological model respects Popper’s criterion of falsifiability — which he argued (quite convincingly) is a necessary condition for a theory to be considered scientific. Now, falsifying a theory requires testing it, and testing it means (i) using the theory to make a prediction, then (ii) checking to see if the prediction is correct. In the case of dark matter, the cleanest way I could think of to do this was via so-called  “direct detection”, since the rotation curve of the Milky Way makes a pretty definite prediction about the density of dark matter at the Sun’s location. (Although as I argued, even this is not enough, since the theory says nothing at all about the likelihood that the DM particles will interact with normal matter even if they are present in a detector.)
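
A rough sense of the scale of that prediction (an illustrative back-of-the-envelope sketch, not a calculation from my paper): for a flat rotation curve, a singular isothermal sphere has total density rho(R) = Vc^2 / (4 pi G R^2), and round Milky Way numbers give the order of magnitude that direct-detection experiments assume for the local dark-matter density.

import math

G   = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
kpc = 3.086e19       # metres per kiloparsec
GeV = 1.783e-27      # kilograms per GeV/c^2

Vc = 220e3           # circular speed at the Sun's radius, m/s (assumed round number)
R0 = 8.0 * kpc       # Sun's Galactocentric radius, m (assumed round number)

rho_total = Vc**2 / (4 * math.pi * G * R0**2)   # kg/m^3, dark plus normal matter
print(rho_total / (GeV * 1e6))                  # ~0.5 GeV per cubic centimetre
# After subtracting the baryons, the commonly quoted local dark-matter density
# is ~0.3 GeV/cm^3, i.e. roughly 0.008 solar masses per cubic parsec.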

What about the large-scale evidence for dark matter — things like the power spectrum of density fluctuations, baryon acoustic oscillations, the CMB spectrum etc.? In the spirit of falsification, we can ask what the standard model predicts for these things; and the answer is: it does not make any definite prediction. The reason is that — to predict quantities like these — one needs first to specify the values of a set of additional parameters: things like the mean densities of dark and normal matter; the numbers that determine the spectrum of initial density fluctuations; etc. There are roughly half a dozen such “free parameters”. Cosmologists never even try to use data like these to falsify their theory; their goal is to make the theory work, and they do this by picking the parameter values that optimize the fit between theory and data.
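
For concreteness, these are the usual six adjustable parameters of the base model, with approximate values from recent CMB fits (a sketch for orientation only; the precise numbers are not the point here):

lcdm_base = {
    "omega_b_h2": 0.0224,   # physical density of normal (baryonic) matter
    "omega_c_h2": 0.120,    # physical density of cold dark matter
    "theta_s":    0.0104,   # angular scale of the sound horizon at recombination
    "tau":        0.06,     # optical depth to reionization
    "A_s":        2.1e-9,   # amplitude of the initial density fluctuations
    "n_s":        0.965,    # spectral index of the initial fluctuations
}
# Everything else that gets quoted for the model (H0, Omega_Lambda, sigma_8,
# the age of the universe, ...) is derived once these six are adjusted to fit the data.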

Philosophers of science are quite familiar with this sort of thing, and they have a rule: “You can’t use the data twice.” You can’t use data to adjust the parameters of a theory, and then turn around and claim that those same data support the theory.  But this is exactly what cosmologists do when they argue that the existence of a “concordance model” implies that the standard cosmological model is correct. What “concordance” actually shows is that the standard model can be made consistent: i.e. that one does not require different values for the same parameter. Consistency is good, but by itself it is a very weak argument in favor of a theory’s correctness. Furthermore, as Stacy has emphasized, the supposed “concordance” vanishes when you look at the values of the same parameters as they are determined in other, independent ways. The apparent tension in the Hubble constant is just the latest example of this; another, long-standing example is the very different value for the mean baryon density implied by the observed lithium abundance. There are other examples. True “convergence” in the sense understood by the philosophers — confirmation of the value of a single parameter in multiple, independent experiments — is essentially lacking in cosmology.

Now, even though those half-dozen parameters give cosmologists a great deal of freedom to adjust their model and to fit the data, the freedom is not complete. This is because — when adjusting parameters — they fix certain things: what Imre Lakatos called the “hard core” of a research program: the assumptions that a theorist is absolutely unwilling to abandon, come hell or high water. In our case, the “hard core” includes Einstein’s theory of gravity, but it also includes a number of less-obvious things; for instance, the assumption that the dark matter responds to gravity in the same way as any collisionless fluid of normal matter would respond. (The latter assumption is not made in many alternative theories.) Because of the inflexibility of the “hard core”, there are going to be certain parameter values that are also more-or-less fixed by the data. When a cosmologist says “The third peak in the CMB requires dark matter”, what she is really saying is: “Assuming the fixed hard core, I find that any reasonable fit to the data requires the parameter defining the dark-matter density to be significantly greater than zero.” That is a much weaker statement than “Dark matter must exist”. Statements like “We know that dark matter exists” put me in mind of the 18th century chemists who said things like “Based on my combustion experiments, I conclude that phlogiston exists and that it has a negative mass”. We know now that the behavior the chemists were ascribing to the release of phlogiston was actually due to oxidation. But the “hard core” of their theory (“Combustibles contain an inflammable principle which they release upon burning”) forbade them from considering different models. It took Lavoisier’s arguments to finally convince them of the existence of oxygen.

The fact that the current cosmological model has a fixed “hard core” also implies that — in principle — it can be falsified. But, at the risk of being called a cynic, I have little doubt that if a new, falsifying observation should appear, even a very compelling one, the community will respond as it has so often in the past: via a conventionalist stratagem. Pavel Kroupa has a wonderful graphic, reproduced below, that shows just how often predictions of the standard cosmological model have been falsified — a couple of dozen times, according to the latest count; and these are only the major instances. Historians and philosophers of science have documented that theories that evolve in this way often end up on the scrap heap. To the extent that my paper is of interest to the astronomical community, I hope it gets people thinking about whether the current cosmological model is headed in that direction.

Fig. 14 from Kroupa (2012) quantifying setbacks to the Standard Model of Cosmology (SMoC).

26 thoughts on “Cosmology and Convention (continued)”

  1. A very helpful post. Thank you.
    I recall that Sean Carroll has written that he sees falsifiability as an idea ready for retirement.

    http://www.preposterousuniverse.com/blog/2014/01/14/what-scientific-ideas-are-ready-for-retirement/

    This was discussed here http://rationallyspeaking.blogspot.com.es/2014/01/sean-carroll-edge-and-falsifiability.html

    And in this rather nice response (note the date uploaded to the arXiv)
    https://arxiv.org/abs/1504.00108
    A Farewell to Falsifiability
    Douglas Scott, Ali Frolop, Ali Narimani, Andrei Frolov
    (Submitted on 1 Apr 2015)


    1. “We believe that we should also dispense with other outdated ideas, such as Fidelity, Frugality, Factuality and other “F” words”

      On the 1st of April, this reminds us of Swift, the Dean of St Patrick’s in Dublin: “A modest proposal to prevent our favorite theories from suffering an experimental refutation”.

      Great post, great blog. Thanks!


  2. There seems to be a correlation between those who want to throw Popper and falsifiability under the bus and those whose fancy favors non-falsifiable ideas that Popper would categorize as metaphysics.


    1. I am shocked to hear that Sean Carroll endorses such a view. Even though I was never impressed by his conclusions on DM, I always thought of him as a sincere scientist. I do not see how a sincere scientist could ever turn their back on the principle of falsifiability; without it, there is not much to distinguish science from religion.


      1. Ron, I think MargaritaMc is putting strong emphasis on the publication date, which happens to be April Fool’s Day. So what Sean Carroll wrote should be taken on the light side as humour.


        1. @thenexusparadigm
          You wrote
          “Ron, I think MargaritaMc is putting strong emphasis on the publication date, which happens to be April Fool’s Day. So what Sean Carroll wrote should be taken on the light side as humour.”

          Sorry, but you have totally misunderstood what was in my comment.

          I posted three separate links.

          1. My first link was to a blog post in which Sean Carroll mentioned that he had seriously written that falsifiability is an idea ready for retirement. This piece was written in response to The Edge’s annual question for 2014, and the full article (linked within Dr Carroll’s blog post) can be found here
          https://www.edge.org/response-detail/25322

          2. My second link was to a thoughtful blog post by philosopher of science Massimo Pigliucci, examining the article that Dr Carroll wrote.

          Then,
          3. My third link was something posted to the arXiv by cosmologist Dr Douglas Scott, of the University of British Columbia, as a very pointed April Fool’s Day joke, in which he expresses his disagreement with Dr Carroll’s position about falsifiability in a highly satirical and amusing manner.

          I made a point of emphasising the date that this third article was published in order to alert any readers here who are not native English speakers and who thus may not be familiar with this kind of satire.

          All the articles are worth reading.


    2. “… non-falsifiable ideas that Popper would categorize as metaphysics …” The string landscape might be non-falsifiable, but the concept of the string landscape might generate many falsifiable hypotheses. If dark matter particles are somehow involved in maintaining the structure of the multiverse, then the physical properties of dark matter particles might contradict the known laws of physics. Consider the MOND-chameleon hypothesis: supersymmetry needs to be replaced by MOND-compatible supersymmetry. There exists a quantum theory of gravity in which there are general relativistic pole masses and quantum theoretical running masses, which are unexpectedly non-conventional for MOND-chameleon particles. For galactic dynamics, most of the mass-energy of dark matter particles has the form of MOND-chameleon particles that have variable effective mass depending upon the nearby gravitational acceleration. The empirical successes of MOND can be explained as follows: replace the -1/2 in the standard form of Einstein’s field equations by a term which represents an apparent (but not real) failure of general relativity theory. The apparent failure is caused by ignoring the existence of MOND-chameleon particles. In other words, replace the -1/2 by -1/2 + MOND-chameleon-tracking-function. How might this explain MOND? In the range of validity of MOND, assume that the MOND-chameleon-tracking-function is roughly constant. An easy scaling argument shows that this amounts to boosting the gravitational redshift in such a way that there appears to be a universal acceleration constant, as postulated in MOND.


  3. A well-written post in which the writer demonstrates a profound grasp of the theory of knowledge and its development. In my opinion, the theory of knowledge should be added as a module in all undergraduate and postgraduate courses; this would counter the “shut up and calculate” approach in theoretical research. The LHC has recently released null results from its supersymmetric-particle searches. One might hope that these results will guide researchers towards a correct path in solving the galaxy rotation curve problem.


  4. First, I just discovered this blog today. It is great, one of the best I have ever come across.

    It seems to me that astronomy and physics are only now dealing with issues that faced psychology (and the other various “social sciences”) decades earlier. My background is in biology/medicine, but I found Paul Meehl’s critiques (dating back to the 1960s) generalized quite well to what has (more recently) been going on in that area of research as well. The problems he describes may be some kind of general failure mode of intellectual pursuits. He was also a big Lakatos fan. These papers may help to understand what is going on regarding the lack of respect for epistemology described in this blog:

    Theory-Testing in Psychology and Physics: A Methodological Paradox. Paul E. Meehl. Philosophy of Science Vol. 34, No. 2 (Jun., 1967), pp. 103-115. https://www.jstor.org/stable/186099

    Theoretical Risks and Tabular Asterisks: Sir Karl, Sir Ronald, and the Slow Progress of Soft Psychology. Paul E. Meehl. Journal of Consulting and Clinical Psychology 1978, Vol. 46, 806-834. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.200.7648&rep=rep1&type=pdf

    Appraising and Amending Theories: The Strategy of Lakatosian Defense and Two Principles That Warrant It. Paul E. Meehl. Psychological Inquiry 1990, Vol. 1, No. 2, 108-141. https://pdfs.semanticscholar.org/2a38/1d2b9ae7e7905a907ad42ab3b7e2d3480423.pdf

    The Problem Is Epistemology, Not Statistics: Replace Significance Tests by Confidence Intervals and Quantify Accuracy of Risky Numerical Predictions. Paul E. Meehl. In L. L. Harlow, S. A. Mulaik, & J. H. Steiger (Eds.), What If There Were No Significance Tests? (pp. 393–425) Mahwah, NJ : Erlbaum, 1997. http://meehl.umn.edu/sites/g/files/pua1696/f/169problemisepistemology.pdf

    Also, some videos of lectures:
    http://meehl.umn.edu/recordings/philosophical-psychology-1989

    Another thing: I suspect there are many people without expertise in astronomy/astrophysics/etc. who are dissatisfied with the dark matter hypothesis. One thing that would be great is if you could post a walk-through of how you would analyze a rotation curve (i.e. share the python/matlab/R/whatever code you would use). I have no idea how much effort that may take, but it would make a useful post.


    1. This has all happened before, and it will all happen again. All subjects of intellectual pursuit are subject to this sort of failing, and it crops up repeatedly in the same field after sufficient time has passed for memories to fade. Eternal vigilance is the only solution, and an inadequate one at that.


  5. In my paper, I documented one case in which cosmologists had dealt with an awkward observation by simply ignoring it: the “mass discrepancy-acceleration relation.” Non-scientists may find this behavior surprising, and so I thought it would be useful to reproduce the following statement made by Karl Popper in 1974 (printed in The Philosophy of Karl Popper, ed. P. A. Schilpp, p. 983):

    “[W]e can always immunize a theory against refutation. There are many such evasive immunizing tactics; and if nothing better occurs to us, we can always deny the objectivity — or even the existence — of the refuting observation. (Remember the people who refused to look through Galileo’s telescope.) Those intellectuals who are more interested in being right than in learning something interesting but unexpected are by no means rare exceptions…I once met a very famous physicist who, although I implored him, declined to look at a most simple and interesting experiment of a professor of experimental physics in Vienna. It would not have taken him five minutes, and he did not plead lack of time. The experiment did not fit into his theories, and he pleaded that it might cost him some sleepless nights.”

    Plus ça change…


    1. I found your plot of this (Fig. 2) to be kind of strange given my understanding. I was under the impression that something special happened around 10^-10 m/s^2, but in this plot it looks like a smooth curve that may even continue to the right, below (V/V_Newton)^2 = 1. This leads me to believe that the precise value of “a0” has no particular significance.
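
      (An illustrative sketch of why the transition around a0 is gradual rather than kinked, using the empirical fitting function of McGaugh, Lelli & Schombert 2016 rather than the actual data behind Merritt's Fig. 2:)

      import numpy as np

      a0 = 1.2e-10                               # m/s^2
      g_bar = np.logspace(-13, -8, 6)            # Newtonian (baryonic) accelerations
      ratio = 1.0 / (1.0 - np.exp(-np.sqrt(g_bar / a0)))   # equals (V/V_Newton)^2
      for g, r in zip(g_bar, ratio):
          print(f"g_bar = {g:.1e} m/s^2  ->  (V/V_Newton)^2 = {r:.2f}")
      # The ratio tends to sqrt(a0/g_bar) at low acceleration and to 1 at high
      # acceleration, with a smooth transition spread over roughly two decades
      # around a0, so no sharp kink appears at exactly a0.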


      1. Replying to myself since it looks like a nesting limit was reached…

        “The bend happens at 1E-10 m/s/s. Something special certainly does happen there. It is not an instantaneously sharp turn, if you zoom in enough as we have here. If you zoom out the turnover appears sharper of course. See, e.g., Figures 10 and 11 of https://link.springer.com/article/10.12942/lrr-2012-10”

        I just expected there to be more of a kink… but I suppose you can say it starts deviating at 1-2 x 10^-10. This brought up a few other questions, if you don’t mind. I’m mainly still wondering about how the curve seems to continue all the way until the data runs out on the right (going below 1) in the Merritt (2017) Fig. 2. Looking at Fig. 11 of Famaey and McGaugh (2012):

        1) Is it at all possible to collect data where a is between 10^-9 and 10^-6 m/s^2?
        2) Am I right that the velocities at “low” accelerations are deduced from redshift, while those at “high” (a > 10^-7) accelerations are arrived at some other way?
        3) Is this data available anywhere to download?


    1. I gather that was the “Beyond WIMPs” meeting.
      My impression is that most of the attendees were particle physicists. Is this correct?
      In that case it would not surprise me if they were not particularly open to an alternate view. Since astronomy is not their specialty, they will tend to just accept the orthodoxy that mainstream astronomers provide.
      I’m curious, what type of physicists/astronomers do you find are, on average, more receptive to the possibility of modified gravity solutions to DM? My guess would be gravitational physicists, but that is just a guess.


      1. Astronomy and particle physics are two subjects divided by a common interest in the missing mass problem. All the evidence is astronomical in nature. Most of the ideas for what dark matter might be come from particle physics, where there is a sense that there should be new physics beyond the standard model. The confluence of these issues led to a snowballing group-think in which each community took from the other what it found convenient without really understanding the rest.


  6. To answer these questions:
    1) Is it at all possible to collect data where a is between 10^-9 and 10^-6 m/s^2?
    There are in practice few accessible systems in this range.
    2) Am I right that the velocities at “low” accelerations are deduced from redshift, while those at “high” (a > 10^-7) accelerations are arrived at some other way?
    Yes – typically planetary ephemerides at high acceleration.
    3) Is this data available anywhere to download?
    Yes: http://astroweb.case.edu/ssm/data/
    and http://astroweb.case.edu/SPARC/


    1. Thanks again, looking at the data now. Also, in that case I am sure the choice of H0 must affect the results. From Merritt (2017):

      Determination of H0 using the classical redshift-magnitude test (the ‘Hubble diagram’) has consistently yielded higher values than the ‘concordance’ value; a recent study (Riess et al. 2016) finds
      H0 = 73.03 +/- 1.79 km s^-1 Mpc^-1 compared with the concordance value of 67.3 +/- 0.7 km s^-1 Mpc^-1, a ‘three-sigma discrepancy’.

      Sorry for the possibly dumb questions, but I wonder if a different choice of this value could shift the curve?

      1) Is it at all possible to collect data where a is between 10^-9 and 10^-6 m/s^2?
      There are in practice few accessible systems in this range.

      I think it is interesting that there is a large gap in the data, then the discrepancy shows up almost immediately.


      1. OK, after looking it up a bit more, it seems the Hubble constant value isn’t supposed to have any effect on these redshift calculations. So that was apparently indeed a dumb question.


  7. The acceleration scale is V^2/R. The R depends on the distance scale. So, for those galaxies lacking a direct determination of distance, the value of the Hubble constant affects the acceleration scale. There are enough galaxies with distances measured independently of the Hubble flow that this does not appear to be an issue.
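
    (A minimal sketch of how the distance enters, with made-up illustrative numbers: the rotation velocity comes from the Doppler shift and is distance-independent, but the physical radius is R = D x theta, so the inferred acceleration V^2/R scales as 1/D; and for Hubble-flow distances D = cz/H0.)

    def acceleration(V_kms, theta_arcsec, D_Mpc):
        # Centripetal acceleration V^2/R in m/s^2, for a rotation speed measured
        # spectroscopically at angular radius theta in a galaxy at distance D.
        kpc = 3.086e19                                     # metres per kiloparsec
        R = D_Mpc * 1e3 * kpc * (theta_arcsec / 206265.0)  # physical radius, m
        return (V_kms * 1e3) ** 2 / R

    # Same observation, two adopted distances: the acceleration scales as 1/D,
    # i.e. directly with the adopted Hubble constant for Hubble-flow galaxies.
    print(acceleration(120, 30, 10.0))   # ~3.2e-10 m/s^2
    print(acceleration(120, 30, 11.0))   # ~2.9e-10 m/s^2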

    The rather sudden appearance of the discrepancy is related to the fact that there are no galaxies with surface densities grossly in excess of a0/G. This is the characteristic scale at which galaxies form. It is tempting to infer that this is physical. Galaxies that are too dense may be unstable (a problem going all the way back to Ostriker & Peebles 1973), and may not form in the first place without the boost provided at this scale.
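
    (For scale, my own arithmetic rather than anything stated above: a0/G corresponds to a surface density of order a thousand solar masses per square parsec.)

    a0, G = 1.2e-10, 6.674e-11         # m/s^2 and m^3 kg^-1 s^-2
    Msun, pc = 1.989e30, 3.086e16      # kg and metres
    print(a0 / G / (Msun / pc**2))     # ~9e2 Msun per pc^2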


  8. Ron, people from the GR community are not interested in dark matter, dark energy, or inflation. They are only worried about the initial big bang singularity and, to some extent, the arrow of time. Ironically, particle physicists and cosmologists don’t care about the big bang singularity or the arrow of time (you will never hear these things talked about in prominent cosmology conferences).

