“Winning isn’t everything. It’s the only thing.”
—Red Sanders
This is a wise truth that has often been poorly interpreted. I despise some of the results that this sports quote has had in American culture. It has fostered a culture of bad sportsmanship in some places: an acceptance, even a dictum, that the ends justify the means – up to and including cheating, provided you can get away with it.
Winning every time is an impossible standard. In any competitive event, someone will win a particular game, and someone else will lose. Every participant will be on the losing side some of the time. Learning to lose gracefully despite a great effort is an essential aspect of sportsmanship that must be taught and learned, because it sure as hell isn’t part of human nature.
But there is wisdom here. The quote originates with a football coach. Football is a sport where there is a lot of everything – to even have a chance of winning, you have to do everything right. Not just performance on the field, but strategic choices made before and during the game, and mundane but essential elements like getting the right personnel on the field for each play. What? We’re punting? I thought it was third down!
You can do everything right and still lose. And that’s what I interpret the quote to really mean. You have to do everything to compete. But people will only judge you to be successful if you win.
To give a recent example, the Kansas City Chiefs won this year’s Super Bowl. It was only a few months ago, though it seems much longer in pandemic time. The Chiefs dominated the Super Bowl, but they nearly didn’t make it past the AFC Championship game.
The Tennessee Titans dominated the early part of the AFC Championship game. They had done everything right. They had peaked at the right time as a team in the overly long and brutal NFL season. They had an excellent game plan, just as they had had in handily defeating the highly favored New England Patriots on the way to the Championship game. Their defense admirably contained the high-octane Chiefs offense. It looked like they were going to the Super Bowl.
Then one key injury occurred. The Titans lost the only defender who could match up one on one with tight end Travis Kelce. This had an immediate impact on the game, as the Chiefs quickly realized they could successfully throw to Kelce over and over after not having been able to do so at all. The Titans were obliged to double-cover, which opened up other opportunities. The Chiefs’ offense went from impotent to unstoppable.
I remember this small detail because Kelce is a local boy. He attended the same high school as my daughters, playing on the same field they would (only shortly later) march on with the marching band at halftime. If it weren’t for this happenstance of local interest, I probably wouldn’t have noticed this detail of the game, much less remembered it.
The bigger point is that the Titans did everything right as a team. They lost anyway. All most people will remember is that the Chiefs won the Super Bowl, not that the Titans almost made it there. Hence the quote:
“Winning isn’t everything. It’s the only thing.”
The hallmark of science is predictive power. This is what distinguishes it from other forms of knowledge. The gold standard is a prediction that is made and published in advance of the experiment that tests it. This eliminates the ability to hedge: either we get it right in advance, or we don’t.
The importance of such a prediction depends on how surprising it is. Predicting that the sun will rise tomorrow is not exactly a bold prediction, is it? If instead we have a new idea that changes how we think about how the world works, and makes a prediction that is distinct from current wisdom, then that’s very important. Judging how important a particular prediction may be is inevitably subjective.
It is rare that we actually meet the gold standard of a priori prediction, but it does happen. A prominent example is the prediction of gravitational lensing by General Relativity. Einstein pointed out that his theory predicted twice the light-bending that Newtonian theory did. Eddington organized an expedition to measure this effect during a solar eclipse, and claimed to confirm Einstein’s prediction within a few years of it having been made. This is reputed to have had a strong impact that led to widespread acceptance of the new theory. Some of that was undoubtedly due to Eddington’s cheerleading: it does not suffice merely to make a successful prediction; that it has happened needs to become widely known.
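The factor of two is easy to check numerically. A minimal sketch (my own illustration, not part of the historical record) of the two predicted deflections for a light ray grazing the limb of the sun:

```python
import math

# Physical constants and solar parameters (SI units)
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30   # solar mass, kg
c = 2.998e8        # speed of light, m/s
R_sun = 6.957e8    # solar radius, m (impact parameter for a grazing ray)

RAD_TO_ARCSEC = (180 / math.pi) * 3600

# "Newtonian" deflection of a test particle moving at c: 2GM/(c^2 R)
newtonian = 2 * G * M_sun / (c**2 * R_sun) * RAD_TO_ARCSEC
# General Relativity predicts exactly twice that: 4GM/(c^2 R)
gr = 2 * newtonian

print(f"Newtonian: {newtonian:.2f} arcsec")  # ≈ 0.88 arcsec
print(f"GR:        {gr:.2f} arcsec")         # ≈ 1.75 arcsec
```

The 1.75-arcsecond GR value is what Eddington’s expedition set out to distinguish from the Newtonian half-value.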
It is impossible to anticipate every conceivable experimental result and publish a prediction for it in advance. So there is another situation: does a theory predict what is observed? This has several standards. The highest standard deserves a silver medal. This happens when you work out the prediction of a theory, and you find that it gives exactly what is observed, with very little leeway. If you had had the opportunity to make the prediction in advance, it would have risen to the gold standard.
Einstein provides another example of a silver-standard prediction. A long standing problem in planetary dynamics was the excess precession of the perihelion of Mercury. The orientation of the elliptical orbit of Mercury changes slowly, with the major axis of the ellipse pivoting by 574 arcseconds per century. That’s a tiny rate of angular change, but we’ve been keeping very accurate records of where the planets are for a very long time, so it was well measured. Indeed, it was recognized early that precession would be caused by torques from other planets: it isn’t just Mercury going around the sun; the rest of the solar system matters too. Planetary torques are responsible for most of the effect, but not all. By 1859, Urbain Le Verrier had worked out that the torques from known planets should only amount to 532 arcseconds per century. [I am grossly oversimplifying some fascinating history. Go read up on it!] The point is that there was an excess, unexplained precession of 43 arcseconds per century. This discrepancy was known, known to be serious, and had no satisfactory explanation for many decades before Einstein came on the scene. No way he could go back in time and make a prediction before he was born! But when he worked out the implications of his new theory for this problem, the right answer fell straight out. It explained an ancient and terrible problem without any sort of fiddling: it had to be so.
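The answer that fell out can be reproduced in a few lines. A minimal sketch (my own, using modern values for Mercury’s orbit) of the GR perihelion advance, 6πGM/(c²a(1−e²)) per orbit:

```python
import math

# Solar gravitational parameter and Mercury's orbital elements (SI units)
GM_sun = 1.32712440018e20  # GM of the Sun, m^3/s^2
c = 2.99792458e8           # speed of light, m/s
a = 5.7909e10              # semi-major axis of Mercury's orbit, m
e = 0.2056                 # orbital eccentricity
P_days = 87.969            # orbital period, days

# GR perihelion advance per orbit, in radians
per_orbit = 6 * math.pi * GM_sun / (c**2 * a * (1 - e**2))

# Accumulate over a Julian century's worth of orbits, convert to arcseconds
orbits_per_century = 36525 / P_days
arcsec_per_century = per_orbit * orbits_per_century * (180 / math.pi) * 3600

print(f"{arcsec_per_century:.1f} arcsec per century")  # ≈ 43, the unexplained excess
```

No free parameters anywhere: the same constants that govern every other orbit give exactly the 43 arcseconds per century that had gone unexplained since Le Verrier.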
The data for the precession of the perihelion of Mercury were far superior to the first gravitational lensing measurements made by Eddington and his colleagues. The precession was long known and accurately measured, the post facto prediction clean and unequivocal. So in this case, the silver standard was perhaps better than the gold standard. Hence the question once posed to me by a philosopher of science: why should we care if the prediction came in advance of the observation? If X is a consequence of a theory, and X is observed, what difference does it make which came first?
In principle, none. In practice, it depends. I made the hedge above of “very little leeway.” If there is zero leeway, then silver is just as good as gold. There is no leeway to fudge it, so the order doesn’t matter.
It is rare that there is no leeway to fudge it. Theorists love to explore arcane facets of their ideas. They are exceedingly clever at finding ways to “explain” observations that their theory did not predict, even those that seem impossible for their theory to explain. So the standard by which such a post-facto “prediction” must be judged depends on the flexibility of the theory, and the extent to which one indulges said flexibility. If it is simply a matter of fitting for some small number of unknown parameters that are perhaps unknowable in advance, then I would award that a bronze medal. If instead one must strain to twist the theory to make it work out, then that merits at best an asterisk: “we fit* it!” can quickly become “*we’re fudging it!” That’s why truly a priori prediction is the gold standard. There is no way to go back in time and fudge it.
An important corollary is that if a theory gets its predictions right in advance, then we are obliged to acknowledge the efficacy of that theory. The success of a priori predictions is the strongest possible sign that the successful theory is a step in the right direction. This is how we try to maintain objectivity in science: it is how we know when to suck it up and say “OK, my favorite theory got this wrong, but this other theory I don’t like got its prediction exactly right. I need to re-think this.” This ethos has been part of science for as long as I can remember, and a good deal longer than that. I have heard some argue that this is somehow outdated and that we should give up this ethos. This is stupid. If we give up the principle of objectivity, science would quickly degenerate into a numerological form of religion: my theory is always right! and I can bend the numbers to make it seem so.
Hence the hallmark of science is predictive power. Can a theory be applied to predict real phenomena? It doesn’t matter whether the prediction is made in advance or not – with the giant caveat that “predictions” not be massaged to fit the facts. There is always a temptation to massage one’s favorite theory – and obfuscate the extent to which one is doing so. Consequently, truly a priori prediction must necessarily remain the gold standard in science. The power to make such predictions is fundamental.
Predictive power in science isn’t everything. It’s the only thing.
As I was writing this, I received email to the effect that these issues are also being discussed elsewhere, by Jim Baggott and Sabine Hossenfelder. I have not yet read what they have to say.
53 thoughts on “Predictive Power in Science”
Sabine goes on to say that explanatory power surpasses a theory’s predictive power in importance. I think both are equally important.
Still haven’t read what she has to say. But if it is just explanatory power, I disagree. However, we have to be careful by what we mean with those words. Some people consider it an explanation to have just the right number of angels dancing on the head of a pin. That level of explanation is not satisfactory, so I expect that is not what she meant. To me, the point of physical theory is *understanding*. A theory that promotes a physical understanding of what is going on in nature has a more positive kind of explanatory power. So if that’s what she means (as I expect it is) then I agree with her. It is this kind of physical understanding that enables our ability to predict physical observables.
Thinking on the use of words a little more, I guess I consider “explanatory power” to be part of the “everything” that you gotta do in science, as in a winning a football game. Of course you have to be able to explain things, and do so *satisfactorily*. The latter only happens in conjunction with predictive power. That’s the win.
MOND therefore has a higher level of explanatory power than LCDM in the sense that, using fewer assumptions than LCDM, it can explain and predict a wider range of phenomena associated with galactic dynamics. MOND is doing more, in my opinion, to reveal the fundamental theory of gravitation. LCDM is premised on General Relativity, in which both ultraviolet and infrared catastrophes fog its predictions at extreme scales of gravity. For instance, physical magnitudes are not allowed to assume infinite values, yet the Schwarzschild metric allows r to run to infinity as well as generating a singularity at r=0. MOND is showing us phenomena that appear at infrared scales, associated with a fundamental theory of gravitation that prevents infinities from arising at those scales.
Of course, there are other examples of predictions where the observed well-known phenomenon was completely incompatible with the theories existing at the time. For example, the blackbody radiation and the photoelectric effect. In some cases, it is even possible to identify, a posteriori, that the number of free parameters of the old theory would, in fact, be infinite as in the case of epicycles.
“With four parameters I can fit an elephant, and with five I can make him wiggle his trunk”
— John von Neumann
This has been precisely my concern with dark matter for a quarter century. There is a practical infinity of free parameters. Does the dark matter halo distribution not look like the predicted NFW form? No problem – feedback will move it around. Not like that? OK, like this. Or this, or this, or this. Etc., etc. The entire field of galaxy formation theory has become the proverbial house of cards held together with an ever increasing number of epicycles. We’ve been doing this for so long now that younger scientists seem to think that adding more epicycles is how science is supposed to work, and don’t realize that’s what they’re doing. An epicycle isn’t an epicycle if you can’t see it inside a black box inside a computer!
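The elephant quip is easy to illustrate numerically. A minimal sketch (my own toy example, not any particular simulation): fit polynomials of increasing degree to noisy data, and watch the in-sample residuals shrink regardless of whether the extra parameters correspond to anything real.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Observations": a simple linear trend plus noise. The underlying
# truth has only 2 parameters (slope and intercept).
x = np.linspace(0, 1, 20)
y = 2.0 * x + 1.0 + rng.normal(0, 0.3, size=x.size)

# Fit polynomials of increasing degree. In-sample residuals can only
# go down as parameters are added -- that is the trap: a better "fit"
# is not evidence of a better theory.
for degree in (1, 3, 6, 9):
    coeffs = np.polyfit(x, y, degree)
    rms = np.sqrt(np.mean((np.polyval(coeffs, x) - y) ** 2))
    print(f"{degree + 1:2d} parameters: rms residual = {rms:.3f}")
```

The high-degree fits track the noise, not the trend, which is exactly why fitting after the fact earns at best a bronze medal.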
Sabine focuses on explanatory power in terms of what I could call retrodictions: explaining known data, using the fewest assumptions. She thinks predictive power is unnecessary and potentially dangerous as a measure of a theory’s success. I disagree: predictive power is the key thing for any theory beyond intellectual curiosity about retrodictions.
Using the fewest assumptions is important. That’s necessarily built into genuine predictions, since you’re pretty much obliged to make simple assumptions a priori. It can be more of a challenge to keep it simple with retrodiction – I can think of examples where people think they’re making few assumptions when doing this because they’ve forgotten the previous generation of assumptions. As the story evolves, each generation of fudges gets baked in, and what was once absurd becomes the accepted norm. One example is feedback efficiency in galaxy formation simulations. A decade ago, it would be embarrassing to suggest it could be anywhere near 100%. That’s now the new normal.
A new paper on the failed feedback problem in simulations is actually pretty interesting. The simulations are factors of 1.5 to 4 off on various mass to dark matter ratios, but the ratios are pretty consistent and unbiased. https://arxiv.org/abs/2005.01724
Yes, this is an important result. It is related to the claims to explain the radial acceleration relation in the context of said simulations (https://arxiv.org/abs/1610.07663). This only “works” after you magically double the stellar mass – i.e., allow an offset of a consistent ratio. If we allow MOND the same latitude, doubling the baryonic mass “solves” the problem it suffers in clusters.
Great points. I liken it to using legacy software packages to create “new” software. All the bugs and weird architectural quirks are built in and hidden.
Obviously, you’re writing a prelude to your new preprint https://arxiv.org/abs/2004.14402 (which is a good comprehensive pre-print on the subject that I started to blog about until my computer crashed and I got temporarily discouraged), and I’m right there with you.
Then again, I don’t really understand the football stuff in your post, although I would add that if you can’t post-dict that the Kansas City Chiefs are mostly a Missouri football team, you are too dumb to make responsible decisions about anything of consequence relating to the United States of America.
To paraphrase cognitive psychologist Pascal Boyer, “a good theory is one that gives you information for free” (via Razib Khan, who states: “The cognitive psychologist Pascal Boyer introduced me to the phrase ‘theory is information for free.’ It’s a succinct way of saying that if you have a theoretical framework you can deduce and extrapolate a lot about the world without having to know everything. And, you can take new information and fit it quickly into your model and generate more propositions (you may not need to know everything, but you do need to know something).”). See also “A good theory explains a lot but postulates little.” – Richard Dawkins.
I actually quoted both of them in the context of another recent theoretical advance: the first post-diction of a proton’s parton distribution function this week, from ab initio calculations using only Standard Model equations and the measured values of fundamental Standard Model constants as inputs, with results consistent with all current experimental evidence. https://arxiv.org/abs/2005.02102 The folks who invented the Standard Model in the late 1970s knew that it was possible in principle to do that. But for the last four decades we’ve determined PDFs largely using the brute force method of counting which particles get produced in certain collisions (involving something on the order of a billion data points) and plotting the results. Now the same result has been obtained with data points and formulas you could write in their entirety on a blackboard with chalk (although as I understand it that is a mathematician thing, and physicists prefer whiteboards).
And, while I am at it, I’ll also call attention again to the work of Alexandre Deur, whose primary specialty is QCD but who has done some really interesting work on gravity focusing on the weak field implications of the self-interactions of the gravitational field. See http://dispatchesfromturtleisland.blogspot.com/p/deurs-work-on-gravity-and-related.html He basically replicates MOND (in theory without needing an independent a0 constant or cosmological constant), provides a novel explanation for dark energy, made a novel and corroborated prediction about dark matter in elliptical galaxies, and in his most recent paper makes similar statements about spiral galaxy thickness, namely: “A prediction of the model is that there should be a strong correlation between the inferred galactic dark mass and the galactic disk thickness. We use two independent sets of data to verify this.” https://arxiv.org/abs/2004.05905 (admittedly not published pre-measurement). You should really read his stuff. He is an outsider to astrophysics even though he is a professional physicist, so he doesn’t get much citation or notice, but the work really deserves attention from people like you and Sabine.
“If we give up the principle of objectivity, science would quickly degenerate into a numerological form of religion: my theory is always right! and I can bend the numbers to make it seem so….
It doesn’t matter whether the prediction is made in advance or not – with the giant caveat that “predictions” not be massaged to fit the facts.”
Seriously, you don’t see that first statement as a good description of the situation in theoretical physics as it stands today? It’s not a matter of “if”, science has indeed, on the evidence, degenerated into a mathematicist religious fantasy. Setting aside for the moment the incoherent QT/PP situation, let’s just stick to cosmology. The big bang model is an absurd, unscientific compendium of peculiar beliefs that bear no relation to observed reality, none. You have to believe in the big bang model – there is no empirical evidence for any of its structural elements.
You have to believe in Dark Matter and Dark Energy. You have to believe in expanding and curving spacetime. You have to believe in the Inflation event and the inflaton field that drove it. Yet, there is no empirical evidence for any of these fundamental components of the standard model.
You have to believe in the big bang event itself, at the same time you have to accept the fact that the model cannot account for its original condition. According to the model, this inexplicable original condition spawned, from a volume smaller than a gnat’s ass, a gigantic cosmic flatulence event. And you have to believe, that from that miraculous flatulence event sprang the entirety of the vast cosmos that we observe today.
The big bang model is fatuous nonsense and it is a disgrace, not to science, but to the “scientific community” that pretends it is science, and proselytizes this absurd tale with the fervid, irrational, self-assurance of true believers everywhere.
As to the matter of predictive power, the situation is straightforward. Ptolemaic cosmology demonstrates unequivocally that a successful predictive model need not constitute an accurate physical representation of the system being modeled. Therefore, any claim that a model’s predictive power constitutes, solely on that basis, a scientific validation of that model’s description of physical reality, is unwarranted.
Explanatory power is far more fundamental to science. Sabine has that right. Unfortunately her definition of explanatory power lacks, shall we say, explanatory power:
“Explanatory power measures how much data you can fit from which number of assumptions. The fewer assumptions you make and the more data you fit, the higher the explanatory power, and the better the theory.”
There is nothing wrong with this on its face, except that it is more bloodless, mathy-philosophy than a rigorously scientific analysis. What’s missing is any mention of the empirical basis of all science. It is not the number of a model’s assumptions that are important, but the number of them that can be empirically resolved that matters. In this you have things backwards:
“Of course you have to be able to explain things, and do so *satisfactorily*. The latter only happens in conjunction with predictive power. That’s the win.”
On the contrary both Ptolemy and Kepler’s methods have predictive power but only the solar-centric model of Kepler resolves empirically – for the win. No empiricism, no science. In the model-centric culture of modern theoretical physics, however, empiricism is an afterthought.
Empirical results are welcome when they conform to a favored theory, but dismissable when they do not. Century-old assumptions are immune from reconsideration despite their lack of empirical support. The standard models can be said to “work” but they are chock-a-block collections of empirically unresolvable assumptions and assertions. Both standard models are Ptolemaic in nature. That is why there is a “crisis in physics”.
I’m curious why you say “You have to believe in expanding and curving spacetime. You have to believe in the Inflation event and the inflaton field that drove it. Yet, there is no empirical evidence for any of these fundamental components of the standard model.” Especially the part with the empirical evidence.
Reading your comment left me with mixed impressions. On the one hand, the way you phrased the part with empirical evidence made me think about flat-earthers or Young-Earth creationists, as they frequently assert that for something to count as evidence you have to be able to see it with your own eyes.
On the other hand, your comment (not only this one, but many of your other comments) is very articulate, with pertinent points. I strongly believe that you’re not a flat-earther or a creationist.
So I’m curious about your point of view with respect to empirical evidence. I may add that Dr. McGaugh has also expressed many times a view similar to “Century-old assumptions are immune from reconsideration despite their lack of empirical support” – i.e., repeat it often enough and everybody starts to forget why it was initially introduced and starts to believe it is real. It is basically the same thing as you say.
Also, I’m puzzled: what’s wrong with the model-centric culture? I’m not sure it would be possible to articulate a theory without an underlying model – even the examples you gave (the Ptolemaic and Keplerian methods) have an underlying model, so I don’t see how you could separate the theory from the said model.
Yes, the Ptolemaic model was trying to have ideal circular motions for the planets, but observations (again – empirical) showed that this doesn’t work in reality, resulting in the need to add the epicycles. But if you add the epicycles you’re effectively creating an empirical model, so I don’t understand why you say that only the model of Kepler resolves empirically.
“I’m curious why you say ‘You have to believe in expanding and curving spacetime. You have to believe in the Inflation event and the inflaton field that drove it. Yet, there is no empirical evidence for any of these fundamental components of the standard model.’ Especially the part with the empirical evidence.”
From the Wikipedia entry: “Empirical evidence is information acquired by observation or experimentation…” That is what I mean by empirical evidence, with the understanding that observation is broadly inclusive of direct-detection by means of any of our mechanically extended human senses, such as telescopes & etc. With that definition in mind, the statements of mine you quote are inarguably factual.
So when you ask why I say those things, I can only answer that it is because they are true. And I am prompted to ask, do you think otherwise?
I specifically quoted that part because you assert in that paragraph that there is no empirical evidence for the expansion, for instance. So I have to ask what does the correlation between distance and redshift means for you – does it count as empirical evidence? If yes, then what is the problem with it in the context of expansion?
What about the CMD radiation and its homogeneity?
It seems to me that you’re dismissing them just like you said – “results are welcome when they conform to a favored theory, but dismissable when they do not.”
It feels almost like your argument has an additional, but implicit, sentence – “Were you there?”, typical for young earth creationists. That’s why I said that I was left with mixed impressions.
Just a small correction to the post – it is CMB, not CMD radiation….
The observed, cosmological redshift-distance relationship is an empirical fact. The claim that the cause of that relationship is some form of recessional velocity is an assumption that lies at the base of the standard model. Therefore, to say that the redshift-distance correlation constitutes empirical evidence of the recessional velocity interpretation, is to make a scientifically invalid, circular argument. You cannot justify an assumption by citing the assumption.
As to the CMB, its history is clear and unambiguous. In the years immediately prior to its discovery in 1965, well-known cosmologists were making predictions based on the big bang model that ranged over an order of magnitude. At the same time, scientists using thermodynamic considerations only were making predictions that were in reasonable agreement with the actual observation. See: https://en.wikipedia.org/wiki/Cosmic_microwave_background#Timeline_of_prediction,_discovery_and_interpretation
So the alleged successful “prediction” of the CMR was 1) not unique to the big bang model and 2) was imprecise, to say the least. To put it another way, the existence of the cosmic microwave radiation can be, and has been, accounted for without invoking a myth-like, inexplicable event in a distant, unobservable past. For the record, the CMR is only Background radiation in the BB model.
It is worth keeping in mind here the distinction between an empirical observation and empirical evidence. The redshift-distance relation is an empirical observation and it constitutes, therefore, empirical evidence of a redshift-distance relation. The empirical redshift-distance relation does not constitute empirical evidence of the big bang model’s recessional-velocity interpretation; it provides only inferential evidence of that interpretation. Inferential evidence is not as scientifically robust as empirical evidence.
Lastly, I am not interested in your attempts to insert straw man arguments, about creationism or flat-earthers, via a “creative reinterpretation” of the things I have written. I try to write plainly and distinctly for a reason – so I can dismiss that kind of nonsense out of hand. I mean what I write and I write what I mean, nothing more. Any hidden agenda you perceive exists only in your imagination.
Budrap, strongly agree with you here. I’ve actually found Arp’s writings pretty interesting, especially his book Seeing Red, in highlighting some of the ongoing weaknesses of BB cosmology. I’d also be quite amused if his heretical alternative cosmology becomes more widely considered in light of the now dozens of major problems with BB cosmology, including in particular the Hubble tension and the more recently prominent sigma-eight tension. https://www.scientificamerican.com/article/how-heavy-is-the-universe-conflicting-answers-hint-at-new-physics/
One of the really interesting things about Arp’s galaxy creation model, quasars as nascent galaxies birthed from AGNs, is that there is quite a bit of empirical evidence supporting it, whereas, no similar body of evidence exists supporting the standard accretion model of galaxy formation.
The SA article describes well the nature of “new physics”. It’s basically whatever theoretical patch theorists can come up with to compensate for the latest failure of their standard model. New physics isn’t new and it isn’t physics, just the same old reality-challenged, mathematicist, bullshit in a new bottle.
I’ll begin by repeating what I already said – your comment (not only this one, but many of your other comments) is very articulate, with pertinent points. I strongly believe that you’re not a flat-earther or a creationist.
However, I understand that you’re not satisfied with the current state of physics, and I can see that you have a very strong bias that filters what you believe and what you say. I can be OK with that; everybody has a bias, some stronger, some weaker. I too have a bias and I’m aware of it – because of that I didn’t also quote the first part of that paragraph, where you talk about dark stuff.
But for the part with the correlation between redshift and distance, I’ll have to disagree that the cause of the correlation is a built-in assumption for the BB model on which the standard model is built.
The first observations showed that nebulae seemed to be redshifted, so recessing from us, but at that time, the correlation was still not detected.
Then Georges Lemaitre, trying to explain this observed redshift, proposed in 1927 an expanding model based on Einstein’s theory (without the cosmological constant) which yielded this correlation, confirmed by Hubble a couple of years later (now known as Hubble’s law). So that was an honest-to-god prediction (sort of a pun, related to Lemaitre…) which was later confirmed through observation.
So no, I cannot simply discard this and say that it is not empirical evidence for the big-bang.
For the CMB – I’m sure the wiki list is not exhaustive, but looking at the thermal predictions (excluding Gamow which used the expanding model) I get 5-6K, 3.18K, 2.8K, 0.75K, <20K and 2.3K. For the predictions using the BB model, I get 50K (Gamow – using 3 billion years as the age of the Universe), 5K, 28K, 7K, 6K and 40K (these are only the predictions from the list, I specifically excluded the measurements)
Given the ranges, and without excluding outliers, I get about the same number of orders of magnitude for both classes of predictions – I'm looking at 20K / 0.75K = 26.66 (so more than one order) for the thermal predictions and 50K / 5K = 10 (so exactly one order of magnitude). If I'm excluding outliers, then, again, the situation is actually similar – for the thermal predictions it's simple: I just exclude 0.75K and 20K and get 6K / 2.3K = 2.6.
For the BB predictions, the 50K value is clearly an outlier (after all, Gamow used only 3 billion years for the age) and, if you research a bit, the 28K is also an outlier, as it was based on a measurement of the Hubble constant that could not be replicated. So that leaves 5K, 7K, 6K and 40K. With this we get 40K / 5K = 8 (not by much, but still less than an order of magnitude), but 40K looks quite out of place in that list. I'll just note this but leave it in, as I don't want to be overzealous with the outliers in order to get a very tight range.
I'll concede that the predictions for the BB are higher than what is measured (by around 2x), but these depend on other model parameters (like the expansion rate), which, at that time, were not yet well constrained.
But this is not all about the CMB – note that I also mentioned homogeneity in my post. I'm not talking about the anisotropy, but really about the homogeneity (i.e. the fact that wherever you look, the temperature is essentially the same, to about the third decimal).
This is a result that cannot be derived based on thermodynamic considerations.
Your history is a bit skewed here. The FLRW equations were first derived in 1922 by the Russian mathematician Alexander Friedmann. The FLRW metric (as it came to be known) was not a part of GR. It was a geometrical consideration that was assumed applicable to the entirety of the “universe”, thus introducing a “universal” frame for which the GR formalisms were solved, producing the FLRW equations. There were three solutions, one each for an expanding, a contracting and a static (but unstable) “universe”. It should also be mentioned that assuming a “universal” frame is antithetical to Relativity Theory.
Then when Hubble determined the redshift-distance correlation there were the all-purpose expanding-contracting-static FLRW solutions capable of covering any contingency. All you needed was this:
“The first observations showed that nebulae seemed to be redshifted, i.e. receding from us…
That is simply the assertion of a recessional velocity assumption. That, by your account of Lemaitre’s version of the FLRW expanding solution, was the assumption of his model also. Hubble’s determination of a redshift-distance relation did not establish causation, merely correlation. Hubble himself never fully accepted the recessional velocity interpretation. It was always an assumed interpretation of the observed redshift.
I appreciate you taking the time to quantify and validate my point, that the pre-detection thermodynamic predictions were significantly more accurate than those of the BB theorists. In fact the range of thermodynamic predictions comfortably included the observation; the range of the BB predictions, uncomfortably, excluded the observation. The only point of the exercise is to undermine any claim that the existence of the CMB is strong and conclusive evidence for the BB model. It is not.
“…I can see that you have a very strong bias that filters what you believe and what you say. I can be OK with that, every body has a bias, some stronger, some weaker. I too have a bias…
I quite agree with you, I have a very strong bias. My bias with regard to scientific models is that for a model to be considered scientific, it should be based solidly on empirically verifiable statements about the nature of the aspect of physical reality it is attempting to model. On that basis, I think you know enough about the two standard models to understand the nature of my dissatisfaction, even if you don’t necessarily agree with my point of view. I’d be interested to hear what you think your bias is.
“This is a result that cannot be derived based on thermodynamic considerations.
That is an arguable proposition at best, that would depend a great deal on the underlying assumptions to which you applied the thermodynamic considerations.
Budrap, you beat me to the punch in clarifying that Hubble never actually accepted Hubble’s Law (rather ironically). Arp makes a pretty strong case, IMHO, that recessional redshift is far from the whole story.
Wow. What a refreshing discussion. Explanatory power is of course the important thing, but it can’t be measured, in the way that predictions can be measured. Hence the obsession of the bean-counters with predictions. A prediction is a specific measurable at a fixed point in time, but explanatory power is a long-term investment that provides its rewards over decades and centuries. I appreciate the cosmological issues that readers of this blog are interested in explaining, but to me the absolutely fundamental problem is to explain the mass ratios of electron, proton and neutron. However that may be, there is one obviously false assumption in particle physics that people refuse to even talk about – that is the assumption that the laboratory frame of reference is an inertial frame. It is not. How can we even think about unifying quantum mechanics and general relativity (or any other theory of gravity) if we can’t even agree on the definition of an inertial frame?
I agree with your idea that the mass ratios of the atomic particles constitute a fundamental issue that needs scientific explication. But I don’t think it is the only one. Also in need of scientific explication are the physical cause of the observed gravitational effect, and the nature of an energy-to-matter process that balances and complements the, at least somewhat understood, matter-to-energy process.
As to inertia, I think it best thought of as a limiting case that is, at most, only approximated in physical reality.
Yes, I agree there are many other fundamental problems that require explanation. Anything that involves mass in any way, for example. I also agree that inertia is only approximated in physical reality. That is why I think it is a fundamental mistake for particle physics to be based on a concept of inertia that is not only obviously inconsistent and therefore incorrect, but also *cannot be corrected*. And again, energy-to-matter is a crucial problem that has occupied my attention for a long time. I can’t say I have any idea how to solve this problem, but I feel a necessary ingredient is to understand how to quantise the energy of the gravitational field. Once it is quantised, it can potentially be captured and turned into matter.
@budrap (interesting that I don’t have a reply button to your last comment in the thread)
Friedmann is irrelevant for the discussion we’re having. It is true that Friedmann was the first to derive the solution for the expanding Universe, but it was Lemaître who proposed what he later called the “primeval atom” – i.e. the first idea / model for the BB.
And let’s not confuse the model’s assumptions – he assumed that the Universe is expanding (after all, all galaxies seem to recede), but this single assumption does not constrain how distant galaxies should move away from us. They might be receding at a slower pace, they might be receding at a constant speed, or they might even not be receding at all at very great distances. At the time when the observations showing the nebulae redshifts were made, there was no identified correlation between redshift and distance, so all those possibilities were open. Only when you also add GR to the model can the correlation (as later observed by Hubble) be derived – and he did so, before Hubble.
To me, that counts as a prediction confirmed later by empirical observations.
As for Hubble – again, it’s irrelevant whether or not he accepted the interpretation of the redshifts. That was his opinion, to which he was fully entitled.
As for the bias – yes, any scientific model should be based solidly on verifiable statements, but I’m not going to throw the baby out with the bathwater that easily.
If you must discount some verifiable observations that appear, to you or to anyone else, not to conform to the model, you should give a thorough justification of why those observations are not relevant. And if you cannot provide this solid justification, you’ll need to start questioning, first, your assumptions / understanding of the model (and here is a very big bias filter) – maybe those observations really fit in but you don’t see how.
And if this still doesn’t leave room for the observations, then you’ll need to question the model. Don’t throw it out right away, but look at what is salvageable, if anything, in the light of what can be empirically observed or tested (that is, don’t just introduce exotic, non-baryonic dark matter and call it a day).
And when / where the model doesn’t provide sufficient explanatory power, use “as if” instead of definitive statements.
By default, the system places a limit on the number of nested replies that are allowed. Hitting that limit is a sign that this is the wrong forum for this conversation.
Observation taken, and sorry for the disruption!
Apass, I posted a copy of your last comment with my response here: https://thisislanduniverse.com/blog/ – feel free to continue there if you wish. Also linked back here for reference to the entire thread. Regards.
I can understand the importance of explanatory power when choosing between predictive theories. I can even imagine accepting what appears to be a less predictive theory if it provides more explanatory power that could enable more powerful predictions. I’m still uncomfortable with deprecating predictive power. Sometimes the ability to make predictions is at the heart of the matter. For example, when developing pharmaceuticals, the whole point is to be able to predict that, on a statistical basis, they will do more good than harm. Following developments during the current pandemic we’ve seen the spectrum of theories running from anecdote to retrospective analysis to double blind experiment. Then, one gets into the details of creating a good double blind experiment, and that gets one into choosing the endpoints and criticizing others’ choice of endpoints and absorbing the horror of a study that changes its endpoints. It’s like football. Everyone has to agree on where the goalposts are and how to tell if the ball passes between them and where the goal line is and to tell if the ball has been carried over that line properly.
Certainly an essential item is a fixed goal line – it is not fair to move the goal posts! That’s why it is important to establish a prior in science: what do we expect? Maybe that prediction is wrong, and should be adjusted as we learn more; that happens. But that is not what is going on with the dark matter paradigm. There the goal posts have been in continuous motion for my entire career. On the dark matter detection front, the expected region for WIMPs has moved and moved and moved again – perpetually to lower cross-section as they fail to show up where expected. This is a glaring example of moving the goal posts. We’re never wrong! We just never get there!
Stacy, what is your take on https://www.mdpi.com/2075-4434/8/2/47 by Hofmeister and Criss?
The fact is that we have a problem. Whether you call it the dark matter problem or the acceleration discrepancy is a choice of words; the facts that motivate these words are the same. The debate is a matter of interpretation, not of the facts.
Sorry, I did not want to anger you; my comment was too terse. I value your work a lot and I will cite it. My impression is that Hofmeister and Criss are as dedicated and truth-seeking as you are. And that is all that counts. This dedication makes you an authority, an authority to which people listen. And you deserve this special status.
Hofmeister and Criss talk a lot about secondary spin axes in galaxies. They claim that rotation curves should take into account such additional spin axes, because even when observed from along the main rotation axis, rotation curves are not the same in all directions. My question should have been whether secondary spin axes are an important issue in galaxies.
Not really. Yes, there can be non-circular motions. In some cases these are important. In most cases they are not. What annoys me is their implication that this hasn’t been considered, or can somehow fix things. There is a whole community of people who have been through this, over and over and over. No, of course we don’t know every detail of every galaxy. But we do understand the amplitude of this effect; there is no way that it makes the whole problem go away – or even makes a dent in it.
Although I do not work in the field, scientific papers should all follow basically similar recipes, so I had a look at the paper.
If I were one of the authors, I would have chosen a different image to illustrate the secondary spin axis, if I believed it to be a crucial element. The image they present in the paper is hardly self-evident, as they propose that the spin axes are mostly aligned with our line of sight. I would have used an image in which the secondary spin axis was basically perpendicular to the line of sight, and argued against a wobbling effect, which is expected when you spin a non-ideal disk.
My impression is that they argue poorly for the existence of secondary spin axes in galaxies. They note there is a theoretical result for a “self-gravitating fluid body of uniform density”, which galaxies are not. Then they make a strong assertion that “Solutions should also exist for motions involving rotation at different rates about each of the short and moderate length axes […], and for variable density” without providing any citation, only an argument by analogy.
Again, if I were one of the authors and secondary spin axes were deemed important, I would have either cited something (if available), argued more thoroughly (with some math behind it), or even provided a (partial) proof. As a reviewer, I would certainly have asked for something along those lines.
Also, one remark from the introduction caught my eye: “Specifically, the dynamics of galaxies are satisfactorily explained by Newton’s law of gravitation and no massive NBDM haloes…”
If that is the case, that would imply an even bigger problem on extragalactic scales, where dark matter is still necessary. If they are correct, DM should not act on galactic scales but must still be present to account for the observed extragalactic phenomena, making it even stranger than it currently is.
Advice needed! (Stacy or someone else in the community.)
My proposed paper was flatly rejected by MNRAS based on a rather short and sketchy review by one anonymous referee.
The referee implicitly acknowledges that I obtained the RAR and the acceleration scale from five postulates about the gravitational interaction (too many postulates, in the view of the referee). But so what!? The referee does not believe in MOND.
My proposed paper consists of five sections, of which the last three were completely ignored by the referee.
In section 4 I describe how the proposed relations can account for the mass discrepancy in galaxy clusters.
How best to proceed? A different journal? Which one? Thanks
Here is a link to the referee’s report
Not sure what to suggest. This happens a fair amount – sometimes for the right reasons, sometimes wrong. Usually people try another journal.
Although in a different field, I once had an issue with a reviewer, and after several rounds of updating the paper and resubmitting it for re-review, when the reviewer started to contradict himself in the objections raised, I asked the journal’s editor for arbitration.
The editor decided to involve another reviewer in the process.
Regarding a different journal, it depends on how much time / material you have because, usually, journals have policies stating that they do not accept papers previously submitted to another venue (even if not accepted there). Typically, they have you sign a form ceding the copyright to them once you submit the paper for the first time, so they already have the rights to it, even if they don’t publish it.
Because of this, in order to submit it to other journals you’d have to overhaul it.
That is not the standard practice in astronomy. Perhaps it should be. One shouldn’t submit the same paper to different journals simultaneously, of course, but we’ve had a long history of results rejected by one journal turn out to be important later, so usually journals are more tolerant of reviewing a manuscript that has been rejected elsewhere.
Such policies are in place to keep journals safe from plagiarism accusations – you’re not allowed to reproduce (extensive) text and figures from your own previous articles because this is still considered plagiarism, and journals that initially did not accept your work may reserve the right to re-evaluate it.
Although they are not the ones who plagiarized, their reputation (read: impact factor) gets affected by such accusations.
But, like I said, if you overhaul the paper then there is no problem resubmitting the same results to another journal – the only issue is the lost time (first trying to conform to the objections raised by the reviewers, then re-creating the same article in a different form… and of course waiting again for the review process, which can take more than half a year).
@Frank – you’re welcome!
Thanks Apass, very helpful comment. I decided to overhaul the paper and resubmit it to MNRAS. My disappointment with the reviewer is that he/she did not engage at all with the main section of the paper. My main ‘sin’ is that I am on the wrong side of the DM / MOND divide.
Okay, thanks, will have another go somewhere else.
Yes – self-plagiarism is a strict no-no. But it isn’t self-plagiarism if what you wrote hasn’t been published yet. Lots of fields have a hierarchy of journals that the same manuscript journeys through until it finds a home.
I see that David Merritt has a book on MOND published by Cambridge University Press: https://www.cambridge.org/core/books/philosophical-approach-to-mond/9E770E2F021E79EE639C9A750143C589
Have you read it and, if so, do you have any comments on it? I have just read the first chapter, which is available to read free and it looks very interesting; his comments on Popper and Lakatos fit well with my view of the philosophy of science.
Yes. It is well worth the read – a philosophical page-turner. I mean to post more about it here, but there is so much to say, I find it hard to even get started!
The first chapter of Merritt’s book is indeed interesting, but as is common in philosophical discussions of scientific matters, what is notably missing is a definition of what, precisely, is meant by the term “science”. In all honesty, I’m not sure there even exists a broadly accepted definition of science within the scientific community. So, I’m curious Stacy, what do you consider a proper definition?
I’m not going to attempt a definition here, beyond agreeing with you that the practice of science has grown into so many fields that there probably can be no broadly applicable definition. Those who read Merritt’s book will get a very good idea of what it is, what it should be, and where it can go wrong.