Reading Merritt’s paper on the philosophy of cosmology, I was struck by a particular quote from Lakatos:
A research programme is said to be progressing as long as its theoretical growth anticipates its empirical growth, that is as long as it keeps predicting novel facts with some success (“progressive problemshift”); it is stagnating if its theoretical growth lags behind its empirical growth, that is as long as it gives only post-hoc explanations either of chance discoveries or of facts anticipated by, and discovered in, a rival programme (“degenerating problemshift”) (Lakatos, 1971, pp. 104–105).
The recent history of modern cosmology is rife with post-hoc explanations of unanticipated facts. The cusp-core problem and the missing satellites problem are prominent examples. These are explained after the fact by invoking feedback, a vague catch-all that many people agree solves these problems even though none of them agree on how it actually works.
There are plenty of other problems. To name just a few: satellite planes (unanticipated correlations in phase space), the emptiness of voids, and the early formation of structure (see section 4 of Famaey & McGaugh for a longer list and section 6 of Silk & Mamon for a positive spin on our list). Each problem is dealt with in a piecemeal fashion, often by invoking solutions that contradict each other while buggering the principle of parsimony.
It goes like this. A new observation is made that does not align with the concordance cosmology. Hands are wrung. Debate is had. Serious concern is expressed. A solution is put forward. Sometimes it is reasonable, sometimes it is not. In either case it is rapidly accepted so long as it saves the paradigm and prevents the need for serious thought. (“Oh, feedback does that.”) The observation is no longer considered a problem through familiarity and exhaustion of patience with the debate, regardless of how [un]satisfactory the proffered solution is. The details of the solution are generally forgotten (if ever learned). When the next problem appears the process repeats, with the new solution often contradicting the now-forgotten solution to the previous problem.
This has been going on for so long that many junior scientists now seem to think this is how science is supposed to work. It is all they’ve experienced. And despite our claims to be interested in fundamental issues, most of us are impatient with re-examining issues that were thought to be settled. All it takes is one bold assertion that everything is OK, and the problem is perceived to be solved whether it actually is or not.
That is the process we apply to little problems. The Big Problems remain the post hoc elements of dark matter and dark energy. These are things we made up to explain unanticipated phenomena. That we need to invoke them immediately casts the paradigm into what Lakatos called degenerating problemshift. Once we’re there, it is hard to see how to get out, given our propensity to overindulge in the honey that is the infinity of free parameters in dark matter models.
Note that there is another aspect to what Lakatos said about facts anticipated by, and discovered in, a rival programme. Two examples spring immediately to mind: the Baryonic Tully-Fisher Relation and the Radial Acceleration Relation. These are predictions of MOND that were unanticipated in the conventional dark matter picture. Perhaps we can come up with post hoc explanations for them, but that is exactly what Lakatos would describe as degenerating problemshift. The rival programme beat us to it.
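For readers who haven’t met these relations before, a rough sketch of their commonly quoted forms (symbols as usually defined: $M_b$ the baryonic mass, $v_f$ the flat rotation velocity, $g_{\rm bar}$ and $g_{\rm obs}$ the baryonic and observed centripetal accelerations, and $a_0 \approx 1.2 \times 10^{-10}\,\mathrm{m\,s^{-2}}$ Milgrom’s acceleration constant):

```latex
% Baryonic Tully-Fisher Relation: baryonic mass scales as the
% fourth power of the flat rotation velocity, as MOND predicts.
M_b = \frac{v_f^4}{G\,a_0}

% Radial Acceleration Relation (fitting function of
% McGaugh, Lelli & Schombert 2016): the observed acceleration
% is set entirely by the baryonic one.
g_{\rm obs} = \frac{g_{\rm bar}}{1 - e^{-\sqrt{g_{\rm bar}/a_0}}}
```

In both cases the striking point is the same: the baryons alone suffice to predict the dynamics, with no reference to a dark matter halo.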
In my experience, this is a good description of what is going on. The field of dark matter has stagnated. Experimenters look harder and harder for the same thing, repeating the same experiments in hope of a different result. Theorists turn knobs on elaborate models, gifting themselves new free parameters every time they get stuck.
On the flip side, MOND keeps predicting novel facts with some success, so it remains in the stage of progressive problemshift. Unfortunately, MOND remains incomplete as a theory, and doesn’t address many basic issues in cosmology. This is a different kind of unsatisfactory.
In the meantime, I’m still waiting to hear a satisfactory answer to the question I’ve been posing for over two decades now. Why does MOND get any predictions right? It has had many a priori predictions come true. Why does this happen? It shouldn’t. Ever.
14 thoughts on “Degenerating problemshift: a wedged paradigm in great tightness”
I have my own ideas of why MOND works, but they are still mostly speculation.
However, I will bet that, whatever the actual answer turns out to be, it will seem, in retrospect, to be so relatively simple and obvious that everyone will be slapping their foreheads saying “Why didn’t I think of that?”
That’s where I hope we get to: an elegant solution that is obvious in retrospect.
Google “lex newton milgrom”.
I’d like to make a related point. I wonder how often in the past it has turned out that the correct theoretical solution to a major anomaly was the simplest, most obvious, most intuitive solution — the solution that first came to everyone’s minds? It’s easy to think of cases where this approach did *not* have a happy ending: Aristotle’s theory of motion, phlogiston, the aether, and the planet Vulcan were all “obvious” solutions. In the same way, I’m struck by how absurdly simple the postulates of “dark matter” and “dark energy” are. Dark matter is the kind of idea that any reasonably intelligent undergraduate could have come up with, after being told about flat rotation curves. (Though if he were thoughtful as well as intelligent, he would probably reject the idea as too ad hoc.) Dark energy is not quite as obvious; but an undergraduate who knew a little GR, and also a little about its history (including the effects of a ‘cosmological constant’), would have no difficulty hitting on ‘dark energy’ as a solution to the accelerating universe anomaly. In fact, both of these ideas could be introduced to students in the form of solutions to homework problems! One wonders if they are just *too* simple to be correct.
I was thinking more along the lines of a simple, but profoundly unconventional, new concept. The best example I know of was Einstein’s simple postulates that space and time are not immutable, that instead it was the speed of light that was immutable. In retrospect it now seems simple and obvious, but in no way was it obvious at the time. Changed everything.
My strong hunch is that MOND is pointing toward such a basic, profound, paradigm shift. If so then whoever figures it out will change our understanding of the universe as much as did Einstein.
Perhaps that is what you meant as well.
No, I was pursuing a different thought. But, we could both be right!
“Why does MOND get any predictions right? … It shouldn’t. Ever.”
I cannot tell if you are speaking purely ironically, in the persona of the dominant paradigm, or if there is some trace of earnestness in that question and answer.
*Is* there a physical or logical reason why MOND should not be true? Or is the only reason that it “shouldn’t” be true, is that we already have another theory (lambda CDM) that we are comfortable with?
Yes, all of those. Attitude matters – I am speaking from the perspective I started from. From an objective perspective, there is no physical or logical reason why MOND should not be true.
I had thought you meant that MOND could NOT be true because it violates our most basic assumptions about geometry, at least as applied to gravity. However the data continues to drive us nuts by supporting this “impossible” model.
“Why does MOND get any predictions right?”
“MOND works far too well! In fact, just as planetary systems are Keplerian objects, galaxies are Milgromian objects. Milgrom’s discovery of a0 is likely as epochal as Planck’s discovery of h.”
— Pavel Kroupa, 21 March 2011
MOND makes many valid predictions because Milgrom is the Kepler of contemporary cosmology — the question is who might be the Newton-Einstein of contemporary cosmology.
Negative mass as a solution to the dark matter/dark energy enigma, an interesting idea from an astrophysicist at Oxford:
The arXiv version of the paper in Astronomy and Astrophysics is at https://arxiv.org/abs/1712.07962
Yeah, I was just looking at that. Certainly it would be nice to find a unified solution to the dark matter and dark energy problems. In this particular case, I am skeptical that his eq. 29 will give the right behavior. It is not the same as the known empirical relation between velocity/acceleration and the baryonic mass/distribution (of which he appears to be unaware). This could perhaps be made to match with fine-tuning, but then it’d be fine-tuned. The need for fine-tuning is apparent in his Fig. 2, where any number of things could happen. I’ll have to read it more closely to see if there is some deeper reason nature would pick out the right value, but offhand it looks like the same kind of fudge Einstein made when he introduced the cosmological constant in the first place. That only kept the universe static for a fine-tuned value of Lambda; it looks like you’ll need to do the same fine-tuning here to get a flat rotation curve and not a rising one.