Imagine, if you are able, that General Relativity (GR) is correct yet incomplete. Just as GR contains Newtonian gravity in the appropriate limit, imagine that GR itself is a limit of some still more general theory that we don’t yet know about. Let’s call it the Underlying Theory (UT) for short. This is essentially the working hypothesis of quantum gravity, but here I want to consider a more general case in which the effects of UT are not limited to the tiny netherworld of the Planck scale. Perhaps UT has observable consequences on very large scales, or on a scale that is not length-based at all. What would that look like, given that we only know GR?
For starters, it might mean that the conventional Friedmann-Robertson-Walker (FRW) cosmology derived from GR is only a first approximation to the cosmology of the unknown deeper theory UT. In the first observational tests, FRW will look great, as the two are practically indistinguishable. As the data improve, though, awkward problems might begin to crop up. What and where, we don’t know, so our first inclination will not be to infer the existence of UT, but rather to patch up FRW with auxiliary hypotheses. Since the working presumption here is that GR is a correct limit, FRW will continue to be a good approximation, and early departures will seem modest: they would not be interpreted as signs of UT.
What do we expect for cosmology anyway? A theory is only as good as its stated predictions. After Hubble established in the 1920s that galaxies external to the Milky Way existed and that the universe was expanding, it became clear that this was entirely natural in GR. Indeed, what was not natural was a static universe, the desire for which had led Einstein to introduce the cosmological constant (his “greatest blunder”).
A wide variety of geometries and expansion histories are possible with FRW. But there is one obvious case that stands out, that of Einstein-de Sitter (EdS, 1932). EdS has a matter density Ωm exactly equal to unity, balancing on the divide between a universe that expands forever (Ωm < 1) and one that eventually recollapses (Ωm > 1). The particular case Ωm = 1 is the only natural scale in the theory. It is also the only FRW model with a flat geometry, in the sense that initially parallel beams of light remain parallel indefinitely. These properties make it special in a way that obsessed cosmologists for many decades. (In retrospect, this obsession has the same flavor as the obsession the Ancients had with heavenly motions being perfect circles*.) A natural cosmology would therefore be one in which Ωm = 1 in normal matter (baryons).
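For reference, the dividing line described above is just the critical density of the Friedmann equation (standard textbook material, stated here to make the special status of Ωm = 1 explicit):

H^2 = (8πG/3)ρ − kc^2/a^2,   with Ωm ≡ ρ/ρcrit and ρcrit = 3H^2/(8πG).

In matter-only models, Ωm = 1 means ρ = ρcrit and k = 0 (flat geometry, the EdS case); Ωm < 1 means k < 0 (open, expands forever); Ωm > 1 means k > 0 (closed, eventually recollapses).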
By the 1970s, it was clear that there was no way you could have Ωm = 1 in baryons. There just wasn’t enough normal matter, either observed directly, or allowed by Big Bang Nucleosynthesis. Despite the appeal of Ωm = 1, it looked like we lived in an open universe with Ωm < 1.
This did not sit well with many theorists, who obsessed over the flatness problem. The mass density parameter evolves if it is not identically equal to one, so it was really strange that we should live anywhere close to Ωm = 1, even Ωm = 0.1, if the universe was going to spend eternity asymptoting to Ωm → 0. It was a compelling argument, enough to make most of us accept (in the early 1980s) the Inflationary model of the early universe, as Inflation gives a natural mechanism to drive Ωm → 1. The bulk of this mass could not be normal matter, but by then flat rotation curves had been discovered, along with a ton of other evidence that a lot of matter was dark. A third element that came in around the same time was another compelling idea, supersymmetry, which gave a natural mechanism by which the unseen mass could be non-baryonic. The confluence of these revelations gave us the standard cold dark matter (SCDM) cosmological model. It was EdS with Ωm = 1 mostly in dark matter. We didn’t know what the dark matter was, but we had a good idea (WIMPs), and it just seemed like a matter of tracking them down.
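For reference, the standard way the flatness argument gets quantified (a textbook relation, not spelled out above): the Friedmann equation gives Ω − 1 = kc^2/(a^2 H^2), and since aH falls as the universe expands, |Ω − 1| grows roughly in proportion to the scale factor a during matter domination (and as a^2 during radiation domination). Running that backwards, finding Ω anywhere near 0.1–1 today requires the early universe to have been flat to something like one part in 10^14 at the epoch of nucleosynthesis. Inflation sidesteps that fine-tuning by driving Ω toward 1 exponentially.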
SCDM was absolutely Known for about a decade, pushing two depending on how you count. We were very reluctant to give it up. But over the course of the 1990s, it became clear [again] that Ωm < 1. What was different was a willingness, even a desperation, to accept and rehabilitate Einstein’s cosmological constant. This seemed to solve all cosmological problems, providing a viable concordance cosmology that satisfied all then-available data, salvaged Inflation and a flat geometry (Ωm + ΩΛ = 1, albeit at the expense of the coincidence problem, which is worse in LCDM than it is in open models), and made predictions that came true for the accelerated expansion rate and the location of the first peak of the acoustic power spectrum. This was a major revelation that led to Nobel prizes and still resonates today in the form of papers trying to suss out the nature of this so-called dark energy.
What if the issue is even more fundamental? Taking a long view, subsuming many essential details, we’ve gone from a natural cosmology (EdS) to a less natural one (an open universe with a low density in baryons) to SCDM (EdS with lots of non-baryonic dark matter) to LCDM. Maybe these are just successive approximations we’ve been obliged to make in order for FLRW** to mimic UT? How would we know?
One clue might be if the concordance region closed. Here is a comparison of a compilation of constraints assembled by students in my graduate cosmology course in 2002 (plus 2003 WMAP) with 2018 Planck parameters:

The shaded regions were excluded by the sum of the data available in 2003. The question I wondered then was whether the small remaining white space was indeed the correct answer, or merely the least improbable region left before the whole picture was ruled out. Had we painted ourselves into a corner?
If we take these results and the more recent Planck fits at face value, yes: nothing is left, the window has closed. However, other things change over time as well. For example, I’d grant a higher upper limit to Ωm than is illustrated above. The rotation curve line represents an upper limit that no longer pertains if dark matter halos are greatly modified by feedback. We were trying to avoid invoking that deus ex machina then, but there’s no helping it now.
Still, you can see in this diagram what we now call the Hubble tension. To solve that within the conventional FLRW framework, we have to come up with some new free parameter. There are lots of ideas that invoke new physics.
Maybe the new physics is UT? Maybe we have to keep tweaking FLRW because cosmology has reached a precision such that FLRW is no longer completely adequate as an approximation to UT? But if we are willing to add new parameters via “new physics” made up to address each new problem (dark matter, dark energy, something new and extra for the Hubble tension) so we can keep tweaking it indefinitely, how would we ever recognize that all we’re doing is approximating UT? If only there were different data that suggested new physics in an independent way.
Attitude matters. If we think both LCDM and the existence of dark matter are proven beyond a reasonable doubt, as clearly many physicists do, then any problem that arises is just a bit of trivia to sort out. Despite the current attention being given to the Hubble tension, I’d wager that most of the people not writing papers about it are presuming that the problem will go away: traditional measures of the Hubble constant will converge towards the Planck value. That might happen (or appear to happen through the magic of confirmation bias), and I would expect that myself if I hadn’t worked on H0 directly. It’s a lot easier to dismiss such things when you haven’t been involved enough to know how hard they are to dismiss***.
That last sentence pretty much sums up the community’s attitude towards MOND. That led me to pose the question of the year earlier. I have not heard any answers, just excuses not to have to answer. Still, these issues are presumably not unrelated. That MOND has so many predictions – even in cosmology – come true is itself an indication of UT. From that perspective, it is not surprising that we have to keep tweaking FLRW. Indeed, from this perspective, parameters like ΩCDM are chimeras lacking in physical meaning. They’re just whatever they need to be to fit whatever subset of the data is under consideration. That independent observations pretty much point to the same value is far more compelling evidence in favor of LCDM than the accuracy of a fit to any single piece of information (like the CMB) where ΩCDM can be tuned to fit pretty much any plausible power spectrum. But is the stuff real? I make no apologies for holding science to a higher standard than those who consider a fit to the CMB data to be a detection.
It has taken a long time for cosmology to get this far. One should take a comparably long view of these developments, but we generally do not. Dark matter was already received wisdom when I was new to the field, unquestionably so. Dark energy was new in the ’90s but has long since been established as received wisdom. So if we now have to tweak it a little to fix this seemingly tiny tension in the Hubble constant, that seems incremental, not threatening to the pre-existing received wisdom. From the longer view, it looks like just another derailment in an excruciatingly slow-moving train wreck.
So I ask again: what would falsify FLRW cosmology? How do we know when to think outside this box, and not just garnish its edges?

*The obsession with circular motion continued through Copernicus, who placed the sun at the center of motion rather than the earth, but continued to employ epicycles. It wasn’t until over a half century later that Kepler finally broke with this particular obsession. In retrospect, we recognize circular motion as a very special case of the many possibilities available with elliptical orbits, just as EdS is only one possible cosmology with a flat geometry once we admit the possibility of a cosmological constant.
**FLRW = Friedmann-Lemaître-Robertson-Walker. I intentionally excluded Lemaître from the early historical discussion because he (and the cosmological constant) were mostly excluded from considerations at that time. Mostly.
Someone with a longer memory than my own is Jim Peebles. I happened to bump into him while walking across campus during a visit to Princeton for a meeting in early 2019. (He was finally awarded a Nobel prize later that year; it should have been in association with the original discovery of the CMB). On that occasion, he (unprompted) noted an analogy between the negative attitude towards the cosmological constant that was prevalent in the community pre-1990s and that towards MOND now. NOT that he was in any way endorsing MOND; he was just noting that the sociology had the same texture, and could conceivably change on a similar timescale.
***Note that I am not dismissing the Planck results or any other data; I am suggesting the opposite: the data have become so good that it is impossible to continue to approximate UT with tweaks to FLRW (hence “new physics”). I’m additionally pointing out that important new physics has been staring us in the face for a long time.
The Snowmass 2021 paper on cosmological tensions and anomalies states the following at the end of section VII.H:
“Given the rich variety of the above observations, and the differences in underlying physics, it is hard to imagine that the CMB dipole direction is not a special direction in the Universe. The status quo of simply assuming that it is kinematic in origin, especially since it is based on little or no observational evidence, may be untenable. If this claim is substantiated, not only do the existing cosmological tensions in H0 and S8 need revision, but so too does virtually all of cosmology. In short, great progress has been made through the cosmological principle, but it is possible that data has reached a requisite precision that the cosmological principle has already become obsolete.”
https://arxiv.org/abs/2203.06142
That almost sounds like progress!
If the story of Copernicus and Kepler teaches us anything, it is the importance of better observational data. Without Tycho’s measurements Kepler would never have been able to prove that the orbit of Mars was an ellipse. It was not only knowing that the best result he could achieve otherwise had discrepancies of up to 8 arc minutes, but also that Tycho’s observations had far smaller errors.
As it was described in the popular science books I was reading, 30+ years ago, it seemed that if Omega=1, the assumption was the expansion was in inverse proportion to gravity.
My simple minded thought was, well okay, space, measured in terms of matter, collapses into galaxies, in inverse proportion that space, measured in terms of light, expands between them.
So why would this notion of the universe expanding even be necessary? Wouldn’t some form of convection cycle be a logical solution?
Basically it would be Einstein’s original function for a cosmological constant, to balance the effect of gravity and keep the universe from collapsing. Think in terms of the ball on a rubber sheet analogy, where the sheet would be over water, so that it is pushed up in inverse proportion to the degree all the balls would be pushing it down, around those balls.
Back in the 90’s, in the old NYTimes Mysteries of the Universe section of the discussion boards they had at the time, I’d raised this issue and someone, name long forgotten, mentioned he had majored in cosmology at University of Chicago and had done his masters thesis on a version of the same point. To which his adviser suggested he pursue another career, if he thought to follow up on that idea.
Quick question, are you the one who supports tired light, or was it budrap? I may have misremembered who supports what.
I think it was budrap, he responded to my comment about the nonexistence of a relativistic tired light model with a claim that relativistic gravitational redshift could entirely explain cosmological redshift.
“My simple minded thought was, well okay, space, measured in terms of matter, collapses into galaxies, in inverse proportion that space, measured in terms of light, expands between them.”
Have you seen this paper by Alexandre Deur? It seems to be very similar to what you are describing:
https://arxiv.org/abs/2301.10861
I guess the question would be, why does gravity self-interact?
My sense of gravity is that waves synchronize as a path of least resistance. Entropy for information would be too condense. So while waves do radiate energy out, their organizational tendency is to combine and thus contract. One big wave is more efficient than many small waves. At the very least, they do interact.
So while the energy harmonizes, as it radiates across space, the waves synchronize. Eventually coalescing to the point of black holes. Nodes and networks, organisms and ecosystems, particles and fields.
In my occasional efforts to find papers suggesting it as a basic characteristic of wave behavior and not one that has to be induced, I did come across this rather interesting paper, that while it doesn’t directly connect with my own idea, it does seem similar to what is proposed here, in a different context. That de Broglie pilot waves are actually a macroscopic effect;
https://www.mdpi.com/2311-5521/5/4/226
Basically a function of the interaction.
In the philosophy and sociology of science, my original prediction was that while mainstream cosmologists might stop believing in the cosmological principle in the form of the FLRW metric because it is inconsistent with observation, the big bang in the form of the metric expansion of the universe would continue to be believed, because non-FLRW models of an expanding universe already exist.
However, it seems that many cosmologists use the FLRW metric to justify the metric expansion of the universe in the first place, which means that, contrary to my original hypothesis about the behaviour of cosmologists, if the FLRW metric ever were to fall apart, their belief in the metric expansion of the universe might fall apart entirely as well for many of them, with nothing else to justify it at all. Thus, there is a chance that the cosmological principle being falsified will lead to some mainstream cosmologists to conclude that they need to find an alternative to the big bang. If this occurs, this will lead to the reopening of the expanding universe vs static universe debate in the mainstream scientific community, only in a non-FLRW context in the modern era, rather than in the FLRW context of the 1940s and 1950s.
The failure of the cosmological principle and the reactions of the scientific community will determine which one of the above hypotheses will hold: whether mainstream cosmology will hold fast to the metric expansion of the universe in developing a non-FLRW model, whether they will come to accept the big bang again (necessarily non-FLRW) after a protracted debate with non-big bang alternatives, or whether the big bang gets tossed out along with FLRW in favor of an alternative explanation.
Any hypothesis that shares the naive, century-old assumption that the vast Cosmos we observe is a unified, coherent, simultaneous entity that can be modeled with a gravitational equation derived in the context of the solar system, is probably on a down-bound train to the dustbin of history.
There is also this recent talk by Subir Sarkar questioning the validity of the cosmological principle and the cosmological constant:
The cosmologist Licia Verde says that even modest modifications to the FLRW Lambda CDM model have a noticeable impact on basic cosmological facts such as the age of the universe:
The changes in scientific knowledge that would result from a non-FLRW cosmological model are likely to dwarf anything cosmologists currently believe to be true about the universe.
Licia Verde also mentions throughout the presentation a difference between model dependent measurements and model independent measurements. The Hubble tension, for example, is a tension between the model independent SH0ES measurements of the Hubble-Lemaitre parameter using Cepheid variables, and the FLRW Lambda CDM model dependent Planck measurements of the Hubble-Lemaitre parameter using the cosmic microwave background. This makes me suspicious of the Planck measurements, because it is quite conceivable that if Planck were redone with a different cosmological model, the measurement of the Hubble-Lemaitre parameter would end up being different.
How much of the observational evidence in cosmology is based on model dependent measurements, especially measurements dependent upon the FLRW Lambda CDM model?
There is a tremendous amount of presumption bias in the interpretation of data relevant to cosmology, in that LCDM is almost always assumed and FLRW practically always. It takes considerable work to back out what we actually know empirically and disentangle it from this interpretive overlay. Many practicing cosmologists are not just unwilling to do this (it is a hassle), but often seem incapable of understanding the distinction.
“but often seem incapable of understanding the distinction.”
That is strange. It’s really very strange.
But at the end of the day, it’s your chance to be better than others.
Around 36:40 in the above video, somebody asked Licia Verde about her views on the possibility that the FRW metric needs to be dispensed with and replaced with something else in order to resolve the Hubble tension. Might be worth a listen.
The first few words captured my imagination right away. I have not yet read the whole post. I am currently stretching my mind to include the idea of a colored counter-space, I don’t know if anyone’s ever heard of it, introduced some time ago by some crazy guys who seem to have an understanding of this crazy MOND Theory, where light and gravity are inversely proportional to the proposition of me as some sort of preposition in an alternate space. Amazingly enough I have run into this same inverted cone diagram from more than one angle. Looks like an hourglass with labels of inside and outside and time upside and down. I’m probably not smart enough to understand it completely but it sure has my attention. As far as I can tell it has MOND written all over it–an alternative geometry of space. Or maybe I should go to my corner and be the dunce.
The Stanford Encyclopedia of Philosophy has an article about cosmology during the 1930s:
https://plato.stanford.edu/entries/cosmology-30s/
In particular, it describes two different philosophical approaches to cosmology: the inductive-empiricist approach and the hypothetico-deductivist approach. To quote the article:
“At bottom, there were just two opposing positions in the debate, each of which comprised a two-point stance. On one side were those scientists who had their roots mostly in the experimental side of natural science. To them, there was one and only one legitimate method for science. Theory construction, they believed, involved two closely-linked steps. First, one began from the empirical observations, that is, from measurements, manipulations, experiments, whose results were evident to the human senses; this is classic empiricist epistemology. Observational results would then suggest possible hypotheses to examine via further empirical testing. When enough data concerning the hypothesis had been gathered, logical generalization could be carried out, thereby producing a theory; this is classic inductivist logic.
Opposing these inductive-empiricist scientists were those whose roots were mostly in the theoretical side of natural science, most especially mathematical physics. To them, there was another, more logically sound, method to construct theories. First, hypotheses could be generated in any fashion, although most believed that imagining hypotheses which were based upon very general, very reasonable concepts—that the Universe’s physical processes had simple mathematical descriptions, for example—was the best place to begin; this is classic rationalist epistemology. Once the hypothesis had been generated, strict analytical reasoning could be used to make predictions about observations; this is classic deductivist logic. Scientists who held this view came to be called hypothetico-deductivists; their views about both hypothesis generation and deductive predictions were each strongly opposed by the inductive-empiricists.”
Initially, the inductive-empiricists dominated the field of cosmology, advocated by figures like Arthur Eddington. The main advocate of the alternative hypothetico-deductivist approach was Edward Milne, and over the 1930s, he convinced the majority of cosmologists to shift from their inductive-empiricist view to the hypothetico-deductivist viewpoint. This led to the acceptance of the FLRW metric in their model of cosmology, as detailed in section 5:
“The new year marked a sudden change. In short order, McCrea, Walker and Robertson succumbed to Milne’s methodological recommendations: first, to carry out an operationalist paring of non-observational concepts, then, secondly, to embed the resulting minimalist concept set in an axiomatic hypothetical-deductive structure. Thus was the famous Robertson-Walker spacetime metric born.”
However, with advances in astrophysical and cosmological observational evidence showing the universe to not be FLRW, it would seem that if modern cosmology were to move forwards and advance, it would have to repudiate the axiom of the cosmological principle and more generally Milne’s viewpoint of science of building scientific models from abstract axioms, and return to Eddington’s viewpoint of science of building scientific models from observational evidence.
A strong argument, I think, for teaching physicists more philosophy. The problem is, if anything, even worse in particle physics, where the hypothetical-deductive structure has dominated for at least half a century, and almost every deduction has been robustly falsified by experiment. Inductive-empiricists (in which category I include myself) are either ignored, or vilified.
Stop me if you think this is too far off topic, but the one empirical observation of particle physics that I believe has not been adequately taken account of in the development of the standard model is the discovery of the muon. The fact that there are three generations of fermions is clearly one of the most fundamental facts about the universe. In particle physics, this fact is added on as an optional extra. In astronomy and cosmology, it is ignored completely. I cannot believe that this is a reasonable strategy for trying to understand gravity, or anything else in the universe.
Why there are three generations of fermions is a great puzzle.
Astronomy requires an incredibly vast range of physics; stars alone involve thermodynamics, hydrostatics, gravity, radiation transfer through the quantum mess of the atomic structure of every ionic species of every element in the periodic table, and of course nuclear physics to make them shine. One never really needs much deeper than nuclear physics though; the fundamentalist questions that obsess particle physics are, as a practical matter, moot. The question of the three generations never rises to observational relevance.
One does worry about it in early universe cosmology at the epoch of baryogenesis, which we still don’t understand. Why is there a matter-antimatter asymmetry? We’ve been stumped by that one for a long time.
The number of neutrino families comes up in big bang nucleosynthesis, which very much wants there to be 3. Doesn’t say why though, just constrains that it can’t be more.
The reason why I claim that the three generations are important for gravity is that the mass eigenstates of the neutrinos are different from the flavour eigenstates. It follows that there must be an analogous statement for electrons, which I take to be that the inertial mass eigenstates are different from the gravitational mass eigenstates. I have collected some experimental evidence in favour of this hypothesis, and therefore believe it is worth taking seriously. In particular, there is circumstantial evidence that the relationship between the inertial and gravitational eigenstates has changed over historical time, and can therefore also change over galactic distances. Moreover, all three inertial mass eigenstates are involved in the creation of the gravitational mass eigenstates.
Is this purely a gut guess, or less?!
Otherwise, I would like to ask the following questions:
If there is an eigenstate of the mass, what is the eigenvalue equation?
And what is the operator of the mass?
In the Schrödinger equation, there are operators for the location, the energy and the momentum. But none for the mass.
What are you talking about here?
I don’t think it is particularly useful to ask “why” there are three generations of fermions, but the fact that there are three is undeniable, and the fact that this fact is fundamental to the structure of the universe is also undeniable. The fact that astronomers and cosmologists think they can build a model of the universe, and in particular of gravity, without taking this fundamental fact into account, never ceases to amaze me.
Couldn’t agree more! I believe we need a take on the Standard Model in which the three generations of matter are a natural consequence of some sort of 3-ary symmetry in fermions. Personally I try first to fiddle around with order-3 variants of involutions in mathematics. It’s a fine hobby.
Quite so. The representation theory of Z_3 is already very interesting from a particle physics point of view. The binary tetrahedral group, of order 24, describes 12 fermions and 12 bosons, with most of the standard model symmetries, so the subtle differences from the standard model are particularly interesting.
The reason why the three generations of fermions are important for astronomy and cosmology is that neutrinos oscillate between the three generations as they travel across the Cosmos. Understanding how and why they do this is crucial to building a realistic quantum theory of gravity, and will be crucial to understanding why GR is an incomplete theory of gravity.
Robert,
I meant to post my thoughts on the 3 generation mystery of particle physics under your mention of three generations of fermions, but somehow it popped up under your “Matter/antimatter asymmetry is……” comment. If you happen to have Donald Perkins “Introduction to particle physics”, there’s a photo of a neutral lambda decay that sure looks like it has missing momentum. It’s on page 115 of the 3rd edition, but it might be on a different page in more recent editions.
Matter/antimatter asymmetry is interesting because in particle physics this “symmetry” is implemented as complex conjugation on the Dirac spinors, which is supposed to be equivalent to negation of time (and charge and parity). But this is only true if the symmetries of spacetime and spin are continuous, and both are described by the same Lorentz group. If spacetime itself is required to be quantised, then these arguments no longer apply. I have looked at various possible ways to quantise spacetime, and in all of them the link between spin and spacetime is subtly different from the standard Dirac picture that dates from the 1920s. The model I am most interested in at present does not have a time-reversal symmetry at all, and therefore does not have a matter/antimatter symmetry. It still has Dirac spinors, and complex conjugates of Dirac spinors, so it still has anti-matter, but matter and anti-matter relate to (quantised) spacetime in different ways.
Back in the mid 90’s I was very interested in why there were three generations in the Standard Model and hit upon a simple idea to explain this triplication, which concomitantly imposes a firm limit of just three generations. But a fallout from this idea was the expectation that the two higher generation quark flavors should always be preserved by transfer to the lightest particles carrying those flavors – the muon neutrino/tau neutrino, and their respective anti-particles. But that runs up against a, more or less, forbidden decay mode in the Standard Model called Flavor Changing Neutral Currents or FCNC’s. These are suppressed by the GIM mechanism. Nonetheless searches for such decays are ongoing, according to the Wikipedia entry on FCNC’s.
To be clear I’m not a physicist, so I don’t want to make it sound that I know a lot about this field. I wouldn’t be surprised if someone much smarter and far more knowledgeable than I am in this subject will chime in on this. The math gets rather hairy and I really don’t have a good grasp of it. However, back then, between 94 and about 97, I submitted a paper on this idea to various venues; popular science type magazines, and professional journals, all of which declined to publish. That was OK as it really wasn’t that professional, and at that time I hadn’t worked out a sensible decay pathway that leads to the preservation of 2nd and 3rd generation flavors in the form of those same generation neutrinos.
In that 90’s paper I pointed to a bubble chamber photo of a lambda hyperon decay into a proton and negative pion that seemed to evince missing momentum (this is on page 115 of Donald Perkins “Introduction to High Energy Physics, 3rd edition). In fact, in that book, out of 10 hadron decays (bubble chamber photos) in which supposedly one unit of strangeness is not conserved, on visual inspection 9 suggested possible missing momentum. Using a high magnification headset that I used in my engineering technician job, I carefully measured the incoming/outgoing angles of the particles, and curvature radii of the charged outgoing particles in that lambda hyperon decay. When I worked out the math it indicated missing transverse momentum. It’s been a long time, but possibly my procedure was wrong, since all these bubble chamber photos were undoubtedly thoroughly examined back in the 50’s and 60’s when bubble chambers were in use.
In any case in more recent years I worked out a scheme whereby the higher generation quark flavors might be conserved in the form of a neutrino of the same flavor or generation. But it requires a 2 stage decay rather than a single stage. A lambda hyperon has an up, down, strange, or uds quark composition. In the Standard Model it decays directly to a proton and negative pion, neither of which carry any strangeness. In the proposed (hypothetical) decay pathway the strange quark of the lambda hyperon would transmute to a charmed (c) quark via the emission of a W- boson, which then decays to a negative pion. But with the strange quark now a charmed quark the neutral lambda hyperon transforms to a positively charged, charmed lambda. This has twice the mass of a lambda hyperon, but invoking Heisenberg’s uncertainty principle it seems plausible that it could exist for an extremely brief time, so would not be noticed as a track on the bubble chamber photo. This situation also applies to the W and Z bosons, which are detected indirectly by their decay products.
For the 2nd stage, the c quark in the charmed lambda, would transmute to an up (u) quark via the emission of a Z boson. That results in the udu quark composition of a proton. The Z boson would decay to an electron anti-neutrino and a muon neutrino, the latter carrying away the flavor of the original s quark. Of course this is very speculative, and a 2 stage process would have lower probability according to the rules, so maybe it’s not viable.
Interesting. I am not an expert in these things, so cannot make an informed comment. But my impression is that the field is dominated by empirical or mathematical rules (selection rules, the GIM mechanism, or whatever) to explain what is seen, at the expense of fundamental physical principles. A return to fundamental physical principles is long overdue, but is excruciatingly difficult to achieve in practice. Tracking the missing momentum carefully is clearly an important part of this. The W/Z mass anomaly is one piece of evidence that the momentum has not been tracked carefully enough.
Somehow I didn’t pay attention to your mention of the W/Z mass anomaly, my brain about as frozen as our snow-covered New England landscape, with temps dipping to -15 C. But I finally looked it up and discovered that last spring Fermilab’s CDF experiment had produced the tightest experimental value of the W boson’s mass. It was 0.3% higher than the Standard Model predicts, a 7 sigma deviation. Perhaps it’s a clue to the “UT”, even though such high energy processes are on the opposite end of the energy spectrum from where cosmological phenomena deviate from Newton and Einstein. But I also noted that professional physicists like Sabine Hossenfelder and Tommaso Dorigo harbor skepticism that it’s a sign of new physics, presumably assuming that the anomaly will go away with more measurements from other facilities.
Yes, it is hard to predict at this stage what the final resolution of this anomaly will be. Interestingly, Tommaso Dorigo suggested that the anomaly was more likely to indicate a difference between CERN and Fermilab than new physics. I am not sure if he was serious about this, or just pouring cold water on the suggestions of new physics. But I feel his suggestion should be taken more seriously than he himself appears to take it! If the momentum of neutrinos in CERN and Fermilab differs for subtle reasons to do with the gravitational field, the rotation of the Earth, or anything else that the particle physicists are not taking into account, then there may be some real physical evidence here of something important like quantum gravity.
Well, that’s the heart and soul of the “crisis in physics” alright. That debate has to be reopened in light of the utter failure of the deductive method. You can’t do science that way. It doesn’t work; it hasn’t worked. It is the antithesis of science and has produced a set of physically inane standard models that bear scant resemblance to physical reality.
Correcting the situation will present a huge problem in that the deductive method has been taught as the proper way to do science for at least four decades. It’s a mess.
It does seem a bottom up, versus top down dichotomy.
So the real problem is the essential Western presumption of reality being monistic, when it functions dualistically.
Then each side finds ways to burrow into their respective rabbit holes, especially the top down side.
While the bottom up side keeps questioning and therefore undermining their own models and can’t build a sufficiently authoritative structure. No high priests.
Feedback loops, all the way down. And up.
Revisiting the SEP article, the authors write:
“For nearly twelve years, the new cosmology appeared to be going nowhere. Then Hubble at California’s Mt. Palomar made public his astonishing observations of a cosmic Doppler shift, a shift toward the red in the color of light coming from the most distant star systems.
2.2 Hubble’s Expanding Universe
Most cosmologists—with the interesting exception of Hubble himself—came to the immediate conclusion that the red shift could only mean that the universe was expanding.”
The authors of the article did not say what Hubble’s views about cosmological redshift were. Does anybody have references for Hubble’s views?
https://arxiv.org/abs/0806.4481
Regarding the observation of too much structure in the early universe, I am reminded of Alice watching Bob fall into a black hole. Their clocks don’t tick at the same rate. Doesn’t Bob appear to have his structure intact, when he really should be spaghetti?
Mike Boylan-Kolchin recently gave a talk about two tensions in the FLRW Lambda CDM model, the discovery of very massive galaxies at high redshift by JWST, and the Hubble tension. The former tension presents a problem for the CDM portion of the model, while the latter tension is likely to present a problem to the FLRW part of the model.
LikeLike
Mike Boylan-Kolchin talks about the FLRW Early Dark Energy model in his presentation, and there are a number of slides where he details exactly how basic “facts” about the universe, such as the age of the universe and the age of the various astronomical objects in space would change drastically when moving from the FLRW Lambda CDM to the FLRW Early Dark Energy model. For example, a difference of 0.7-1 billion years between Lambda CDM and Early Dark Energy is already out of range of the estimated error bars of the age of the universe for Lambda CDM. This highlights that concepts like the “age of the universe” are model dependent and should be taken with a grain of salt.
Distance measures are another example of model dependent functions:
https://en.wikipedia.org/wiki/Distance_measure
The current distance measures used rely on the FLRW metric and on the cosmological constant.
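To make the model dependence concrete, here is a minimal sketch (my own illustration, not from the thread) of a standard FLRW distance measure. It assumes a flat universe containing only matter and a cosmological constant; H0 and Omega_m are whatever a model fit says they are, so the same redshift maps to different distances under different parameter choices.

from math import sqrt

c = 299792.458  # speed of light in km/s

def comoving_distance(z, H0=70.0, Omega_m=0.3, steps=10000):
    """Comoving distance in Mpc, assuming a flat FLRW model with matter + Lambda."""
    Omega_L = 1.0 - Omega_m  # flatness is assumed, not measured
    dz = z / steps
    total = 0.0
    for i in range(steps):
        zi = (i + 0.5) * dz
        E = sqrt(Omega_m * (1.0 + zi)**3 + Omega_L)  # H(z)/H0 for this model
        total += dz / E
    return (c / H0) * total

# The same object at z = 1 lands at noticeably different distances:
print(comoving_distance(1.0, H0=67.4, Omega_m=0.315))  # Planck-like parameters
print(comoving_distance(1.0, H0=73.0, Omega_m=0.3))    # a SH0ES-like H0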
Regarding the age of the universe, it would be interesting to see what the original FLRW SCDM model says about the age of the universe with current data. I do not remember off the top of my head but I think it is different from the age of the universe in the FLRW Lambda CDM model.
SCDM with a mass density parameter of unity has an age of 2/3 of a Hubble time (1/H0), so the age of such a universe would be 9 Gyr for H0=73. That’s clearly less than the age of the oldest known stars (13 Gyr, give or take). So such a model is excluded, and this age problem was one of the drivers to rehabilitate Lambda in the ’90s.
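As a quick sanity check of those numbers (my own arithmetic, assuming only H0 = 73 km/s/Mpc and standard unit conversions):

km_per_Mpc = 3.0857e19          # kilometres in one megaparsec
s_per_Gyr = 3.156e16            # seconds in a gigayear
H0 = 73.0 / km_per_Mpc          # Hubble constant in 1/s
hubble_time = 1.0 / H0 / s_per_Gyr
print(hubble_time)              # ~13.4 Gyr, the Hubble time for H0 = 73
print(2.0 / 3.0 * hubble_time)  # ~8.9 Gyr, the age of an Einstein-de Sitter universe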
Note that there are lots of stars of great age ~13 Gyr then nothing older (within the errors). So it looks like cosmic dawn was a sudden onset, and it would be weird if the universe formed many Gyr before that but did nothing before suddenly deciding to do everything.
Indeed, Mike B-K raised exactly this point as an objection to Sanders’s prediction that galaxies would form at z=10. This is a real-world example of taking the background model too seriously, and being unwilling or unable to think outside the box. The only thing that matters to Sanders’s prediction is the sudden onset of structure formation after decoupling; z=10 is just how long it takes to assemble an L* mass galaxy. Doesn’t matter what the background expansion is doing once the collapse starts.
My first thought was: “like the Cambrian explosion.”
Let there be light! / multicellular life.
To circle back around, would you say he used more inductive or deductive reasoning to arrive at that conclusion?
I should have been clearer in my statements – the conclusions above are largely my own, not Mike’s – such as the statement about the Hubble tension affecting FLRW metric and taking model dependent calculations of the age of the universe with a grain of salt.
My belief that solving the Hubble tension would require getting rid of the FLRW metric is largely due to the various other observational evidence, such as the CMB dipole, the KBC void, the peculiar motion towards the Shapley supercluster, et cetera, which indicate that the universe is not homogeneous and isotropic and that the FLRW metric is already falsified at very large scales. And having a non-FLRW metric is mentioned as a solution to the Hubble tension in the Snowmass 2021 paper on cosmological tensions and anomalies in one of the comments above.
My conclusion that one should take model dependent calculations with a grain of salt is more philosophical, and comes from the fact that if one switches between different models and gets widely different answers, that really means that the calculation doesn’t reflect reality/truth but rather a model’s subjective opinion. Mike bringing up the difference in the calculated age of the universe between two cosmological models (Lambda CDM and Early Dark Energy) indicates that the usual statement of the age of the universe (13.7 billion years old) as fact throughout the cosmological literature is suspect, because it is based upon a particular model (Lambda CDM) rather than measured through observations.
Mike Boylan-Kolchin later mentions a model independent way of measuring a lower bound for the age of the universe, by measuring the age of globular clusters. This is good because then one is no longer bound by the FLRW Lambda CDM model in calculating the age of the universe.
Currently, there are globular clusters whose ages are estimated to be about 13.4 billion years old, close to the age of the universe according to the FLRW Lambda CDM model, but those ages have large error bars of plus or minus 1.9 billion years, which means that they are also possibly consistent with the FLRW Early Dark Energy model or other new physics in the early universe.
I predict that as more observations are made and the error bars are reduced, the ages of these globular clusters would stay about the same, and slowly but surely the early universe solutions to the Hubble tension like the Early Dark Energy model would be found inconsistent with the new observational evidence.
I should note that Mike Boylan-Kolchin’s “model independent” method of measuring a lower bound of the age of the universe is really only independent of cosmological models. It is still very dependent on models of the internal dynamics and estimated lifetime of stars – if those ever need to be revised due to new observations, the resulting estimates of the lower bound of the age of universe would shift.
“… Mike Boylan-Kolchin’s “model independent” method of measuring a lower bound of the age of the universe is really only independent of cosmological models.”
That is not a model independent method at all. If you claim to be measuring the age of the universe then you are assuming a certain type of cosmological model. The FLRW model is not just the FLRW metric with its assumption of homogeneity and isotropy. The model also assumes a universal metric can be meaningfully applied to the Cosmos as a whole, and that metric can be the basis of a solution to the General Relativity equation (despite the fact that GR does not work well on scales and complexities much beyond the solar system). Call the Universal Metric + GR approach the FLRW framework. You and MBK are retaining that FLRW framework and just swapping out the FLRW metric. The models you get are all just variations on a theme – the Cosmos is a Universe.
Mathematically you can do that but the resulting models have no physical meaning or significance – they are metaphysical models unrelated to the Cosmos. Why unrelated? Because any such model has a “universal age”, which means it has a “universal now”, and as far as we know it is a law of physics that the maximum speed of light in the Cosmos is about 3×10^8 m/s and nothing can go faster than that. That Maximum Light Speed Law means there is no physical meaning to the concept of a “universal now.”
Imposing a “universal metric” on the Cosmos violates MLSL. And that’s because the MLSL limits the information that you can have about “now” to any 3-dimensional observer’s immediate locale. That is a hard physical limit, not a technological constraint.
Now is always and only a local condition. In physical reality there is no such thing as a “universal now” that applies to everywhere and everywhen, and therefore there is no meaning to the idea that the Cosmos has a “universal age.”
Any modeling methodology that imposes a universal frame on the Cosmos therefore violates a known law of physics. The FLRW framework is such a methodology. All FLRW Universes are imaginary mathematical constructs unrelated to physical reality.
https://commons.wikimedia.org/wiki/File:World_line.svg#/media/File:World_line.svg
In the context of this discussion, only the Past Light Cone is relevant and the model is misleading in the way it is labeled. Observers aren’t involved with Future Light Cones unless they are also omnidirectional emitters of light. Observers only observe the Past Light Cone which is an aggregate of all the omnidirectionally sourced electromagnetic radiation that can reach the observer. An observer only receives a time slice of any remote emitter’s Future Light Cone.
Thank you for this clear explanation of why the concept of the “age of the universe” is meaningless. In colloquial language I would say that there is not a concept of “now”, but only a concept of “here and now”. It is simply meaningless to try to separate the “now” from the “here”.
Thanks, Robert. Good point about the expression “here and now”. Although familiar with the usage, I never noticed its clear and concise relevance to the topic of a universal now.
I think there are larger issues relating to time and our experience of it, that physics doesn’t address.
As these mobile organisms, we have/are this sentient interface between our body and its situation, that functions as a sequence of perceptions, in order to navigate, so our experience is of the now going past to future. Though the evident reality is that change is turning future to past. Tomorrow becomes yesterday, because the earth turns.
There is no literal “dimension” of time, because the past is consumed by the present, to inform and drive it. Causality and conservation of energy. Cause becomes effect.
Energy is “conserved,” because it manifests this presence, creating time, as well as temperature, pressure, color and sound. Frequencies and amplitudes, rates and degrees.
So the energy goes past to future, because the patterns generated coalesce and dissolve, future to past. Energy drives the wave, the fluctuations rise and fall.
The information is not conserved, because its constant changing is time.
As these sentient organisms, our consciousness also goes past to future, while the perceptions, emotions and thoughts giving it form and structure go future to past. Though it is the digestive system processing the energy, the nervous system sorting the information and the circulation system as feedback between the two.
It’s also worth noting that galaxies are energy radiating out, as structure coalesces in.
Our little blue dot exists in the feedback between the two.
Your proposed name “FLRW framework” for your concept does not make any sense. For your proposed framework is more general than either the FLRW metric/cosmological principle or the metric expansion of the universe/big bang, while the term FLRW explicitly refers to the metric of an expanding, homogeneous, and isotropic universe. Perhaps the term “universal metric framework” is a better name, since the name doesn’t imply the cosmological principle or an expanding universe, but merely the mere existence of a universal metric.
I disagree with your identification of the big bang with the mere concept of the metric expansion of the universe. The term “big bang” was coined by Fred Hoyle to distinguish that model from the “steady state” model. Both models assumed the metric expansion of the universe and the cosmological principle; they only disagreed on whether the perfect cosmological principle holds. You can’t retroactively redefine “big bang” to also include models which do not accept the cosmological principle.
I’m not the one who has redefined the term “big bang” from the original meaning; that was society writ large who has redefined “big bang” away from Fred Hoyle’s original meaning of a particular model to the more general concept of the “big bang” as the putative beginning of the universe and the metric expansion of the universe. If you don’t believe me just read Pavel Kroupa’s article on our current concordance model being falsified: to quote, “The CMB suggests that matter was nearly evenly distributed in the universe, about 400,000 years after the Big Bang.”
https://iai.tv/articles/our-model-of-the-universe-has-been-falsified-auid-2393
The use of the “Big Bang” clearly refers to a specified time in the history of the universe, specifically the time where the universe began from a singularity and then expanded to its current state now – indicating its relation to the metric expansion of the universe, and not the cosmological principle. And in fact, nowhere in the article does Pavel Kroupa state that the big bang theory is falsified, only that the current standard model of cosmology, FLRW Lambda CDM, is falsified.
Throughout the Snowmass 2021 paper, they mentioned getting rid of the FLRW metric as a solution to the Hubble and S8 tensions – but they do not say that getting rid of the FLRW metric would imply the end of the big bang theory, which implies that merely getting rid of the FLRW metric is not enough to abandon the big bang theory (otherwise the hundred or so authors would have explicitly said so):
https://arxiv.org/abs/2203.06142
In addition, when people like Eric Lerner say that the Big Bang theory is wrong, they are explicitly referring to the concept of the expansion of the universe itself being wrong, and not the cosmological principle/FLRW metric:
https://iai.tv/articles/the-big-bang-didnt-happen-auid-2215
Just because a term is being used in a different way doesn’t mean its original definition stopped being used. Yes, the Big Bang also refers to the singularity at the beginning of the universe. No, the Big Bang theory hasn’t been redefined away from the original class of models defined by Fred Hoyle which accept the metric expansion explanation for cosmological redshift and the cosmological principle but reject the perfect cosmological principle. The Pavel Kroupa article is thus useless in this discussion because he never talks about the “Big Bang theory”, only the “Big Bang” as a singularity back in time. Similarly, the Snowmass paper is useless in this discussion because the uses of the “big bang” refer to either “Big Bang singularity” or “Big Bang Nucleosynthesis”, rather than a general Big Bang theory.
Most uses of the term “Big Bang theory” regard the Big Bang theory as the same as Fred Hoyle’s original definition of the class of models consisting of metric expansion of the universe + cosmological principle. This is because, up until the past few years, with tensions popping up like the Hubble tension and the misalignment of the CMB dipole with the dipole detected in quasars and type 1a supernovae, which hint at the failure of the cosmological principle, cosmologists had no reason to consider cosmological models which do not satisfy the cosmological principle, because the known evidence for much of recent history was consistent with a homogeneous and isotropic universe.
And even now, I still have not seen any use of the “Big Bang theory” to refer to anything but the class of models which have the metric expansion of the universe and the cosmological principle, except for you. I find it very likely that if the cosmological principle is overturned, that the mainstream media, scientific community, and society writ large would say that the Big Bang theory itself is overturned, because they cannot imagine a Big Bang theory without the cosmological principle – and it is society as a collective which decides on the definition of terms.
“I find it very likely that if the cosmological principle is overturned, that the mainstream media, scientific community, and society writ large would say that the Big Bang theory itself is overturned, because they cannot imagine a Big Bang theory without the cosmological principle – and it is society as a collective which decides on the definition of terms.”
I think we will have to just agree to disagree, because even if cosmologists give up on FLRW, I find it unlikely that they will give up on the metric expansion of the universe and the implied singularity when the expansion is extrapolated back in time. And thus, cosmologists need a name for this broader class of theories, and its opponents like Eric Lerner still need a name for this class of expanding universe theories to attack. The Big Bang theory still works as well as any other name, and since it is already in use, it would make sense to simply continue using the “Big Bang theory” name for the non-FLRW models.
“If you claim to be measuring the age of the universe then you are assuming a certain type of cosmological model.”
From what I could tell that is not what Mike is doing. He is simply giving a lower bound to the age of the universe using non-cosmological methods. Mike’s method is similar to saying that the age of the universe is greater than 108 years old because the theory of general relativity has been around for 108 years.
Of course, the real issue, as you hinted at, is that any attempt to measure the age of the universe runs afoul of special relativity and general relativity, because there is no universal age of the universe which applies to every point in spacetime at once. But the use of non-relativistic dynamics in a relativistic regime still occurs in a lot of astrophysics and cosmology. Modeling galaxy dynamics for example still relies on Newtonian gravity rather than general relativity, which is most likely one of the reasons why astrophysicists have to either assume dark matter in their models or modify Newtonian gravity to get MOND, in order to get theory to fit the rotation curves.
If “here” is precisely defined, then “now” can be equally precisely defined. For the start of a race in the Olympics, “now” can be defined to within a millionth of a second, easily good enough in practice. For the Earth as a whole, “now” struggles to reach 1/10 second accuracy, which is a serious problem for GPS, and explains why GR is required in order to translate between different definitions of “now”. For the Solar System, you need to think in terms of hours. Anomalies like the Pioneer anomaly can easily occur on that timescale. For the Milky Way, you can only define “now” to within about 100,000 years – reasonable in comparison to the age of around 13 billion years, but long enough for MOND to arise from serious misunderstandings about what “now” means. For the universe as a whole, there is no concept of “now”, and no possibility of ever building a realistic theory on top of such a concept.
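The timescales quoted above are essentially light-crossing times. A rough back-of-envelope check (my own numbers, using round values for the sizes involved):

c = 3.0e8                        # speed of light in m/s
ly = 9.461e15                    # metres in one light-year
print(1.274e7 / c)               # Earth's diameter: ~0.04 s of ambiguity in "now"
print(2 * 4.5e12 / c / 3600.0)   # diameter of Neptune's orbit: ~8 hours
print(1.0e5 * ly / c / 3.156e7)  # Milky Way diameter (~100,000 ly): ~1e5 years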
It may be worth mentioning explicitly that an inverse-square law, in which distances are not well-defined because of the finite speed of propagation of the force, gives rise to a “fictitious” force proportional to d/dr (1/r^2) or 1/r, exactly as Milgrom proposed 40 years ago. It’s not difficult, it’s only rocket science.
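For context on why a 1/r force is the relevant comparison (standard MOND phenomenology, not the commenter's derivation): in Milgrom's deep-MOND limit the effective acceleration is g = sqrt(gN·a0) = sqrt(GM·a0)/r. Setting v^2/r = g then gives v^4 = GM·a0, i.e. a rotation velocity independent of radius, which is the flat rotation curve and baryonic Tully-Fisher phenomenology.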
LikeLike
It does seem there is a map versus territory issue with time. What is this physical presence we refer to as now, versus the process of our perception? Our minds function a bit like a movie camera, distilling a sequence of perceptions, in order to navigate our situation.
If we think in terms of the energy/light simply moving about, the notion of now as anything more than the light itself becomes problematic. For light, there is no time, no past or future. Cause becomes effect, because it is really about the energy and being energy, the fact it is dynamically changing.
So the idea of measuring its rate of change can only be applied to the relationships and configurations that emerge.
Consider that while we conceive of time as the sequence of events and the measured rate of change, what we generally use as a measure is some regular cycle, correlating it with other cycles. It really isn’t possible to use some linear dynamic as a measure of time, other than the speed of light, because of the variables involved. Even “tired light” becomes a problem, using light.
If we were to ask tree scientists their concept of time, they would describe it as cycles of expansion and consolidation, with the past as some seed, the rest fertilizer, and the future as the sunlight to photosynthesize. It is the fact that fauna are mobile organisms, having to focus our energies in a particular direction, that makes not only linear time important, but also organizing and synchronizing as one flow, one metabolic rate. Then for societies to function, there has to be some synchronization of activities.
Yet not everything marches to the beat of the same drummer. It’s a spectrum of frequencies, not one universal frequency.
LikeLike
Indeed, nothingness turns into somethingness as an epistemic limitation.
The historic problem is then “potential infinity.”
Because Eberl has recently published a paper, the FOM mailing list has been replete with discussions on the subject for the last month and a half:
https://cs.nyu.edu/pipermail/fom/2023-January/thread.html
https://cs.nyu.edu/pipermail/fom/2023-February/thread.html
[Among these posts are references to Aristotle and Thoralf Skolem which, interestingly enough, misrepresent the views one can find in their respective published works. This is, apparently, much like the situation in astrophysics.]
In any case, the problem speaks to the issue of revisiting inertia.
Inertia is an instance of a principle called the unity of opposites. The opposites correspond to “at rest” and “in motion.” The implicit introduction of an “infinity” lies with the “foreverness” of either state in the absence of a force introducing a change of state.
With respect to potential infinity, one has:
How does the finitist demonstrate “a last natural number”?
What are the explicit presuppositions involved with assuming reachability (stronger than strict finitism) without presupposing the existence of all natural numbers as a completed totality?
The “foreverness” aspect of inertia corresponds to an assumption of infinity as a totality.
Einstein claimed that general relativity unites gravitation and inertia without geometrizing physics. So, this is the universality to which you object. The implicit “foreverness” of inertia now condenses the cosmos into a unity.
Others engaged in geometrizing. And, a metric tensor is not a metric in the normal sense of a measurement. I suppose that this lies at the heart of another of your objections. The solutions to differential equations are functions. So, every concrete statement ultimately rests on a specific solution, and there are no criteria for deciding upon a correct solution.
Measurements can only exclude solutions.
Let me ask, then. Is it the fault of mathematicians that physicists sought to make the laws of physics independent of coordinate systems?
And, if you insist on respecting the epistemic limitations you are asserting, how do you account for the efficacy of the infinitesimal calculus with respect to causal predictions as the gold standard for assessing scientific theories?
You seem to hold mathematics accountable for the ways in which non-mathematicians use mathematics.
LikeLike
@all
My post mentioning potential infinity had been addressed to budrap.
LikeLike
@mls
The concepts of foreverness and infinity, no matter their philosophical, metaphysical, mathematical, or theological relevance, have no scientific meaning for the simple reason that physical reality does not exhibit those properties. They are not observables.
“Einstein claimed that general relativity unites gravitation and inertia without geometrizing physics. So, this is the universality to which you object. The implicit “foreverness” of inertia now condenses the cosmos into a unity.”
My remarks were not an objection, they constituted a statement concerning the nature of physical reality, a known law, that forecloses the possibility that the vast Cosmos we observe is a unified, coherent, simultaneous entity. If you wish to object to that point on scientific grounds, please do.
However while your rumination that, “The “foreverness” aspect of inertia corresponds to an assumption of infinity as a totality“, may hold some philosophical or mathematical significance for you, I don’t perceive it to have any physical or scientific meaning at all.
LikeLike
@budrap
Please refrain from rhetorical avoidance unless you have a clear and unambiguous definition of science that is defensible against opposing opinions.
You and Dr. Wilson just engaged in a discussion of the importance of words like “here” and “now” in so far as their interpretation undermines aspects of theory which you find objectionable. In logic, such words are described as “indexicals.”
These are carefully studied:
https://plato.stanford.edu/entries/indexicals/
Relative to the general theory, Kaplan’s work on a logic for demonstratives is, perhaps, the most significant, because the linguistic use of demonstratives is most closely linked to ostensive definition (pointing at a bird to inform a child on how to use the expression ‘bird’).
That indexicals “break” abstract deductive reasoning is not in dispute. The attempt by logicists to define mathematics so as to exclude monistic philosophies is the source of the expression, “Mathematics is extensional.” Claiming that mathematics is extensional leads to the (modern) dichotomy between extension and intension. And, relative to this historiography, the logic of indexicals and demonstratives is “not mathematical” (or, becomes empirical/inductive).
Of course, one has to be taken in by the rhetoric making knowledge claims concerning what mathematics is or is not to give credence to the assertion “Mathematics is extensional.”
I was not so gullible. And after many years I would discover that Poincaré had an irreconcilable position with respect to both Russellian logicism and Hilbertian formalism.
Do you seriously think I would give credence to your personal, subjective meaning for the expression ‘science’?
I am sitting with Mandel’s book, “The Statistical Analysis of Experimental Data.” In the summary of Section 2.7, he writes:
“The methods of statistical analysis are closely related to those of inductive inference, i.e. to the drawing of inferences from the particular to the general.”
Do you see the last part of the statement? What use do you think you make of “the general” once you have discerned it?
What you do is reason with it deductively.
Mandel then goes on:
“A basic difference between deductive and inductive inference is that in deductive reasoning, conclusions drawn from correct information are always valid, even if this information is incomplete; whereas in inductive reasoning even correct information may lead to incorrect conclusions, namely when the available information, although correct, is incomplete.”
I can assure you that when anyone tries to respect a statement like this, either scientists or philosophers engage in rhetoric about postmodernist deconstruction or anti-realism. What I take offense at in your remarks is the constant abuse of mathematicians and mathematics when, in fact, mathematicians have been working hard to sort out “logic.” People are forcibly fed Goedel’s platonism in introductory logic courses and think they know what is going on in the research on logic by mathematicians and analytical philosophers.
The introductory chapter in Mandel begins with a “broad definition” of measurement from N.R. Campbell. A measurement is:
“the assignment of numerals to represent properties”
Mandel then observes that “a definition of such degree of generality is seldom useful for practical purposes.”
In order to describe measurement in terms yielding a practical use, Mandel begins to employ elements of idealized mathematics without regard to the kind of intimate details mathematicians must consider to account for reliable application.
After introducing a visual illustration (prone to error?), he speaks about measurement as a relationship between a property P and a measurement M,
M = f(P)
He then makes an interesting observation,
“In many cases P cannot be defined in any way other than as the result of the measuring process itself; for this particular process, the relationship between measurement and property then become the identity,
M=P
and the study of any new process, M’, for the determination of P is then essentially the study of the relationship of two measuring processes, M and M’.”
On how many physics blogs does one have someone regurgitating nonsense about non-circularity? 100%
In the next chapter, Mandel says the following about such relations:
“Such an equation can be considered as a mathematical model of a certain class of physical phenomena. It is not a function of a model to establish causal relationships, …”
The ellipsis refers to specifics of the particular relation of the example.
Am I to reconcile this with your remarks to conclude that you are advocating for a definition of science that has abandoned causality?
When last I commented on this blog, I spoke of meaningfulness. When Mandel turns specifically to the mathematical framework of statistics, he writes:
“The language and symbolism of statistics are necessarily of a mathematical nature, and its concepts are best expressed in the language of mathematics. On the other hand, because statistics is applied mathematics, its concepts must also be meaningful in terms of the field to which it is applied.”
Dr. McGaugh’s problem lies with the questionable application of statistics to reconcile physical theories with apparent incompatibility. Both theories rely upon idealized mathematical relations. Both theories relate to mappings involving tetrahedra.
If we, in fact, live in a material, substantival 4-dimensional universe, then a sphere has no inside or outside. This is visualized with a sphere eversion,
The halfway model for this is called a Morin surface, and, the Wikipedia link explains its relationship to tetrahedra,
https://en.m.wikipedia.org/wiki/Morin_surface
Meanwhile, 4-dimensional tesseracts have a 3-dimensional projection to tetrahedra. One tesseract vertex becomes a point at infinity connected by four edges to the visualizable tetrahedron. Unfortunately, the Wikipedia article fails to show the point at infinity,
https://en.m.wikipedia.org/wiki/File:Tesseract_tetrahedron_shadow_matrices.svg
I can assure you that the issues do not involve being an empirical/inductive type or a hypothetical/deductive type.
In modern logic, use of the correspondence theory of truth is recognized whenever the expression,
“stands for”
connects an uninterpreted symbol to a “meaningful” expression. By contrast, Aristotle’s inductive reasoning speaks of linguistic categories as if they
“make a stand”
This is reflected in his distinction between demonstrative (pedagogical, epistemological) reasoning and dialectical (comparative, rhetorical) reasoning.
And, it is his demonstrative reasoning which is bound to potential infinity.
So, please answer for yourself affirmatively. I long ago lost patience with rhetorical avoidance. Most of the people who respect Dr. McGaugh and Dr. Hossenfelder do so because of intellectual honesty.
Try to follow their example.
LikeLike
@mls
I’m interested in science generally and theoretical physics and cosmology in particular. That’s why I come to this site. You have responded to a suggestion I made, that the Cosmos is not a Universe, with two comments of logorrheic excess that, in the context of the suggestion made, consist of a cornucopia of incoherent philosophical bloviation. The topic is a scientific one, the nature of the Cosmos. Try to stay on topic.
Since you don’t apparently have a working definition of science let me suggest this:
Science is the open-ended investigation into the nature of physical reality employing the complementary probes of empiricism and logic (broadly understood to include mathematics).
That is an axiomatic definition in my personal philosophy. I don’t care whether you like it or not. I’m not interested in subjecting it to some tedious hermeneutic examination. If you don’t understand it as written, I don’t care. Try to stay on topic.
LikeLike
So the “general,” from which is deductively reasoned, is always and unquestionably valid?
Along with whatever authority figures align with it?
Why did the Catholic Church ever fall quite so far out of favor?
LikeLike
Perhaps UT has observable consequences on very large scales, or a scale that is not length-based at all. What would that look like, given that we only know GR?
Does scalar-tensor gravity work?
” Refracted Gravity, a novel classical theory of gravity introduced in 2016, where the modification of the law of gravity is instead regulated by a density scale. “arXiv:2301.07115
The acceleration scale is derived from the density.
LikeLike
What about a scale in which the parameters are in inverse proportion? That it emerges from the feedback loops between the poles of the spectrum?
Then we don’t have to pin it down, but relate it to context. Nodes are as much an effect of the network, as the network is of the nodes, etc.
LikeLike
I think UT must have a better definition of mass than we currently have. Mass as currently understood is a scalar, abstracted from the concept of weight, which is a vector. But we need the whole vector in order to understand gravity properly. We wouldn’t be talking about MOND if the scalar mass concept was adequate for a complete theory of gravity. My conjecture is that UT contains a vector concept of mass, that can be quantised (unlike the scalar concept). Projecting from the vector to a scalar then makes sense on a “local” scale, and gives rise to the current standard theories. But it does not and cannot ever make sense on the scale of the entire universe.
LikeLiked by 1 person
There was a paper published earlier this month (Preferential Growth Channel for Supermassive Black Holes in Elliptical Galaxies, Farrah et al -https://iopscience.iop.org/article/10.3847/1538-4357/acac2e), whereby the authors stated: “We conclude that either there is a physical mechanism that preferentially grows SMBHs in elliptical galaxies at z ≲ 2, or that selection and measurement biases are both underestimated, and depend on redshift.”
Vice spun it as observational evidence of a source of dark energy: “black holes were getting more massive in relative lockstep with the expansion of the universe”; going on to describe that black hole interiors should contain vacuum energy: “Traditional singularity-containing black holes would have a coupling strength of 0, while vacuum-energy black holes would have a coupling strength of 3. Ultimately, the team found the coupling strength to be around 3.11, and they ruled out the possibility of zero coupling at 99.98% confidence.”
They capped off by concluding: “This is the first observational paper where we’re not adding anything new to the Universe as a source for dark energy: black holes in Einstein’s theory of gravity are the dark energy.”
Hoping one or some of the minds tuned into this forum can expand on or refute this idea, or comment on how it does/doesn’t fit within the context of this post on gravity/GR theory.
Sorry if I violated any rules here (via citing other authors) or if I’ve taken things quite off topic, feel free to let me know so I don’t do it again.
LikeLike
These words… make little sense to me.
LikeLiked by 1 person
Stacy, I was hoping that you or someone like you would read that paper, because I tried. Now, I’m a layperson trying to read a technical paper with lots of math in it, but I didn’t get very far because “these words… make little sense to me.” I’m talking in a conceptual sense, trying to interpret, for example, what they meant by “coupling”. I got the sense right off that they didn’t realize that correlation (“coupling”?) is not causation, and that they made leaps of logic (vacuum energy exists, so there is vacuum energy in black holes, and someone way back when postulated that there is such a thing as vacuum black holes…), and away they went modifying models using the concept of vacuum black holes. I couldn’t figure out whether they meant that all black holes are vacuum black holes, or that there are vacuum black holes out there (in cosmic voids??) vacuuming up vacuum energy. In any case, I couldn’t see where this could lead to a new source of energy to funnel through these magical vacuum black holes (unless it was my “cosmic vacuum energy vacuum cleaner” idea, with vacuum energy magically multiplied by the vacuum-energy black holes to become dark energy). I got the sense of mathematicians and math-oriented modellers fooling around with equations in models that they didn’t understand, without realizing that if you change one thing in a model, you had better re-examine its impact on the entire model before you make any further changes.
There is a NOVA article explaining their ideas, supposedly. I quote:
“What does it mean for black hole growth to be linked to the expansion of the universe? Certain physical quantities must be conserved as black holes gain mass, and as a result, the growth of black holes produces pressure that drives the acceleration of the universe’s expansion. In other words, black holes that grow as the universe expands are a source of dark energy, a long sought-after component of our models of the cosmos.”
This does not improve any on my interpretation of what they are saying. Their paper started out acknowledging that current models do not account fully for the growth of black holes (there are too few and they are too small). So where does the energy come from to funnel through these alternate black holes, in such vast quantities that it amounts to 75% of all the energy/matter in the universe?
I am blown away because this seems to be getting press, and this was accepted and published in a journal. But I repeat my disclaimer that I am only a layperson trying to make sense of this…
Nova article: Black Holes as the Source of Dark Energy
LikeLike
In my first comment, I pointed out that it seemed that if Omega=1, then the gravitational contraction of space into galaxies would seem to be in inverse proportion to the rate of expansion between galaxies, so an overall expansion of the universe would be redundant.
When it first occurred to me, I was still taking spacetime seriously as a physical fact, so it seemed that somehow the space falling into blackholes was bubbling back up between galaxies.
Though the person I referenced, having majored in astronomy and coming to a similar conclusion, pointed out that was not necessary, that it is inherent in the relationship of mass to energy, light to matter. He didn’t go much into details, but one expands, while the other contracts.
As for black holes, it does seem the constituent energy of anything actually falling into them is shot out the poles as quasars.
Here is an article, pointing out that ocean eddies are mathematically equivalent to black holes;
https://scitechdaily.com/ocean-eddies-mathematically-equivalent-black-holes/
LikeLike
First off, I am Roj who finally got around to making a WordPress account…
brodix, your first 3 paragraphs describe a steady-state universe – that is definitely not consensus, and seeing I was around when it was still being debated (at least by some) back in the 1950s, it is definitely very old school. And your conception of black holes is wrong, and definitely very dated – I was around also when quasars were discovered, and that might have been a theory when, on discovery, all they were were very bright blobs somewhere out in the universe (I phrase it that way because at first they didn’t even know whether these objects were inside our galaxy or outside). Your conception of black holes is wrong by modern consensus. Nothing “falls out” of black holes, by definition. What is being shot out is what isn’t going to fall into them – it’s stuff outside the event horizon circling around that gets caught by the magnetic fields of the (spinning) black hole and gets magnetically accelerated, so instead of falling into the black hole the magnetic acceleration shoots them out the poles.
LikeLike
As you have followed the topic for a while, though new here, here is a point I’ve raised previously but that hasn’t been resolved:
When it was first discovered that cosmic redshift increases in proportion to distance in all directions, this created a problem with using the classic Doppler shift to explain it, because it would mean we are at the exact center of the universe.
The one known way for it to be an optical effect was dismissed as well, because the light was not otherwise distorted, so there was no evident medium to “tire” it.
The conclusion then became that space itself must be expanding, based on the premise of spacetime.
Now to me, this seems a black box fudge factor. Consider that the conceptual basis of GR, and thus spacetime, is that in a moving frame both time and distance dilate equally, so the speed of light is always constant.
Now with the classic Doppler effect, when the train moves away down the tracks, lowering the tone of the whistle, it doesn’t stretch the tracks, it only increases the distance along them. The tracks are the denominator, the distance is the numerator.
Yet in this theory, the specific reason given for why the light is redshifted, is because it is taking longer to cross the expanding space.
So two metrics are being derived from the same light. One based on the speed and one based on the spectrum. Rather than in GR, where there is always only one metric, the speed.
Einstein said, “Space is what you measure with a ruler.” So what is the ruler in this situation? Is it the speed, or is it the spectrum? Which is the denominator and which is the numerator?
If the spectrum were the denominator, the speed would be the numerator, the variable, so it would be a “tired light” theory, but, according to this theory, “space” is expanding, as measured against the speed of light, meaning lightspeed remains the ruler, the denominator, making the redshifted spectrum the numerator.
Remember the train doesn’t stretch the tracks and in this theory, according to its own premises, the speed of light is the tracks. The metric against which this expansion is measured.
As I’ve often been told, go back and read the textbooks, but my experience in doing so, is the black box situation. Put in the problem, pull out the answer you want and don’t worry about it. Maybe you can show me the actual logic.
LikeLike
brodix, thank you for your reply. I am not going to engage with your points because I think we are going down different paths right now. I am exploring consensus cosmology, trying to dig into all the little cracks and holes (as much as a layperson can), and it seems that you are a proponent of alternate theories (tired light?). I don’t really want to venture into alternate theories now, as it will just add more confusion to my getting-fully-stuffed brain! I am seeing theories being proposed now, like in that recent paper proposing that black holes cause cosmic expansion, that make no sense, as they don’t seem to answer the actual question but seem to be trying to apply a magic band-aid as a solution. I think that Dr Hossenfelder seems to be saying the same thing in her video, but then she throws in negative-pressure energy as a solution, and it seems she is doing the same thing – applying a magic solution – neither approach seems to give a source for the energy in the ideas they are proposing. Anyway, I am trying to work through some of that stuff, and hoping that a (mostly) consensus cosmologist like Stacy joins in with an explanation.
Actually, to respond to one point:
“When it was first discovered that cosmic redshift increases proportional to distance in all directions, this created a problem with using classic doppler shift to explain, because it would mean we are at the exact center of the universe.”
I have been there with that thinking, but I just read somewhere recently (can’t remember where or in what context – sign of a brain overload?) that some scientist was saying they think that the universe is WAY larger than we suspect – hundreds of billions of light-years in diameter. So in that case we wouldn’t have to be in the centre of the universe to get the exact same redshift from all directions.
LikeLike
Here is a link to a paper I’ve posted here previously, as to a possible explanation for redshift as a function of distance;
Click to access 2008CChristov_WaveMotion_45_154_EvolutionWavePackets.pdf
The abstract and intro;
“Abstract
The present paper deals with the effect of dissipation on the propagation of wave packets governed by a wave equation of Jeffrey type. We show that all packets undergo a shift of the central frequency (the mode with maximal amplitude) towards the lower frequencies (‘‘redshift’’ in theory of light or ‘‘baseshift’’ in acoustics). Packets with Gaussian apodization function do not change their shape and remain Gaussian but undergo redshift and spread. The possible applications of the results are discussed.
© 2007 Elsevier B.V. All rights reserved.
Keywords: Jeffrey’s equation; Dissipation; Wave packets; Gaussian apodization function; Redshift
1. Introduction
The propagation of waves in linear dissipative systems is well studied but most of the investigations are concerned with the propagation of a single-frequency wave. On the other hand, in any of the practical situations, one is faced actually with a wave packet, albeit with a very narrow spread around the central frequency. This means that one should take a special care to separate the effects of dispersion and dissipation on the propagation of the wave packet from the similar effects on a single frequency signal.
The effect of dissipation of the propagation of wave packets seems important because their constitution can change during the evolution and these changes can be used to evaluate the dissipation.
Especially elegant is the theory of propagation of packets with Gaussian apodization function.”
What this would imply is that we are sampling a wave front, not observing individual photons that managed to travel billions of lightyears, meaning the quantization of light is more a function of the threshold required to register on our macroscopic devices.
Here is a paper arguing for such a “loading” theory of light;
Click to access Reiter_challenge2.pdf
LikeLike
I like Dr. Hossenfelder’s take on it:
LikeLike
Roj here, but now with a WordPress account. Didn’t she say what I said, but as an expert would say it?
And, she is obviously a nicer person than me – she seems to be holding back and being very nice in her explanation. She seems to want to say what Stacy said…
LikeLike
https://www.theguardian.com/science/2023/feb/22/universe-breakers-james-webb-telescope-detects-six-ancient-galaxies
“The James Webb space telescope has detected what appear to be six massive ancient galaxies, which astronomers are calling “universe breakers” because their existence could upend current theories of cosmology.
The objects date to a time when the universe was just 3% of its current age and are far larger than was presumed possible for galaxies so early after the big bang. If confirmed, the findings would call into question scientists’ understanding of how the earliest galaxies formed.
“These objects are way more massive than anyone expected,” said Joel Leja, an assistant professor of astronomy and astrophysics at Penn State University and a study co-author. “We expected only to find tiny, young, baby galaxies at this point in time, but we’ve discovered galaxies as mature as our own in what was previously understood to be the dawn of the universe.””
The light at the end of the tunnel is a freight train.
LikeLike
It could also be a passenger train that is arriving before leaving its origin.
The limits of causality being exposed again?
All aboard!!
LikeLike
I’m afraid that the light is showing a redshift, so it’s tired light, so the train is actually a good deal farther off than it appears.
LikeLike
Here is an interesting paper;
Click to access 2008CChristov_WaveMotion_45_154_EvolutionWavePackets.pdf
The link has migrated since I first came across it and while it still downloads for me, no guarantees. So here is the abstract and intro;
Abstract
The present paper deals with the effect of dissipation on the propagation of wave packets governed by a wave equation of Jeffrey type. We show that all packets undergo a shift of the central frequency (the mode with maximal amplitude) towards the lower frequencies (‘‘redshift’’ in theory of light or ‘‘baseshift’’ in acoustics). Packets with Gaussian apodization function do not change their shape and remain Gaussian but undergo redshift and spread. The possible applications of the results are discussed.
© 2007 Elsevier B.V. All rights reserved.
Keywords: Jeffrey’s equation; Dissipation; Wave packets; Gaussian apodization function; Redshift
1. Introduction
The propagation of waves in linear dissipative systems is well studied but most of the investigations are concerned with the propagation of a single-frequency wave. On the other hand, in any of the practical situations, one is faced actually with a wave packet, albeit with a very narrow spread around the central frequency. This means that one should take a special care to separate the effects of dispersion and dissipation on the propagation of the wave packet from the similar effects on a single frequency signal.
The effect of dissipation of the propagation of wave packets seems important because their constitution can change during the evolution and these changes can be used to evaluate the dissipation.
Especially elegant is the theory of propagation of packets with Gaussian apodization function.
The question this raises is whether the quantization of light is fundamental to the light itself, with photons traveling billions of years, or whether the macroscopic devices we use to measure it require a threshold – a “loading theory” of light – and we are sampling a wave front.
The fact is that while true science is bottom up and no models are sacrosanct, politics is top down and the core totem cannot be debated. Given that the premise of quantization as fundamental goes back to atomization, as objects are most readily open to clear definition, it does go to the heart of the Western paradigm.
Consider the Eastern and Native American model of time is as the past in front and the future behind, because the past and what is in front are known, while the future and what is behind are unknown. Which accords with the fact we see events after they occur, then the energy transitions to other occurrences.
Meanwhile the Western paradigm is the future is in front and the past behind, because we see ourselves as singular entities, moving through our context/space. So it follows that we would conflate duration and distance.
Yet ideal gas laws correlate volume with temperature and pressure, which are as fundamental to our emotions and bodily functions, as sequence is to thought, but we don’t assume them to be extensions of space.
LikeLike
“the relationship between the macroscopic devices we use to measure require a threshold, a “loading theory” of light, and we are sampling a wave front.”
The concept of an Expanding Spherical Wavefront emitted by an omnidirectionally radiating source used to be part of the pedagogy. In that context it goes without saying that in observing a cosmologically remote galaxy we are absorbing (detecting) a minute portion of vast, successive ESWs as defined by their radial distance from the source. The redshift can be attributed to the aggregate absorption by intervening matter producing a net loss to the total energy of the ESW.
A crude model of this energy loss can be generated by applying the GR equation for gravitational redshift to an ESW at successive cosmologically significant radial distances. At each iteration the mass term needs to be recalculated for the enclosed volume using a reasonable mass-density estimate. The result is a redshift correlated with distance.
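A rough numerical sketch of the iteration just described, offered only to show what the computation would look like rather than to endorse the model; the mean density used is an assumed round number.

```python
import numpy as np

G = 6.674e-11     # m^3 kg^-1 s^-2
c = 2.998e8       # m/s
rho = 2.5e-27     # kg/m^3, assumed mean mass density of the enclosed volume
Mpc = 3.086e22    # metres per megaparsec

for d_Mpc in (100, 500, 1000, 3000):
    r = d_Mpc * Mpc
    M_enc = (4.0 / 3.0) * np.pi * r**3 * rho                      # mass enclosed within radius r
    z = 1.0 / np.sqrt(1.0 - 2.0 * G * M_enc / (r * c**2)) - 1.0   # GR gravitational redshift
    print(f"d = {d_Mpc:5d} Mpc   z = {z:.4f}")
```

Since the enclosed mass grows as r^3, the term 2GM/(rc^2) grows as r^2, so this toy calculation does yield a redshift that increases with distance; whether it can reproduce the observed redshift–distance relation quantitatively is a separate question.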
Like the Doppler redshift, gravitational redshift is an observed effect. So there are two known causes of redshift. For 90+ years a Doppler effect has been the assumed cause of the cosmological redshift. The resulting cosmological model is, it must be said, reality-challenged. None of the standard model’s distinguishing features are observable (detectable) in any direct way which is a remarkable thing to say about a supposedly scientific model.
The GR-ESW possibility has never been given any serious consideration. With the Doppler interpretation of the redshift comes a Recessional Velocity and the not inconsiderable (undetectable) baggage of an Expanding Universe model that the RV necessitates.
The GR-ESW interpretation does not require the standard model’s baggage since it does not invoke an EU. The only things that expand omnidirectionally in the GR-ESW model, as in the observed Cosmos, are spherical wavefronts of light emitted from omnidirectionally radiant sources.
A model using GR-ESW to account for the cosmological redshift does not require a singularity, inflation event, big bang event, expanding spacetime, dark matter or dark energy, all of which are only required by the standard model’s invocation of an EU. That Expanding Universe is a direct consequence of the Doppler-RV interpretation of the cosmological redshift.
Drop the Doppler-RV interpretation and the undetectable baggage of the standard model dissipates like geocentric epicycles in the light of a heliocentric model. Absent all that undetectable baggage, cosmology might revert to being a science again.
LikeLiked by 1 person
Fully agree. Safe to say, BBT is not to be subject to falsification. Jesus saves.
When the assumption is that the quantization of the energy is fundamental, the whole wave aspect becomes statistical. So the ESW model would be heresy.
I suspect that to really get beyond BBT, QM has to be cut down to size first.
LikeLike
Astronomers have discovered what appear to be massive galaxies dating back to within 600 million years of the Big Bang, suggesting the early universe may have had a stellar fast-track that produced these “monsters.”
While the new James Webb Space Telescope has spotted even older galaxies, dating to within a mere 300 million years of the beginning of the universe, it’s the size and maturity of these six apparent mega-galaxies that stun scientists. They reported their findings Wednesday.
Lead researcher Ivo Labbe of Australia’s Swinburne University of Technology and his team expected to find little baby galaxies this close to the dawn of the universe — not these whoppers.
Labbe said he and his team didn’t think the results were real at first — that there couldn’t be galaxies as mature as the Milky Way so early in time — and they still need to be confirmed. The objects appeared so big and bright that some members of the team thought they had made a mistake.
Thoughts?
LikeLike
This is what we explicitly predicted: https://tritonstation.com/2022/01/03/what-jwst-will-see/
LikeLike
Look at the bright side – through your scientific career, you have had a front row seat with a grand view of the evolution of mythology.
LikeLiked by 1 person
I have been thinking about dark energy a lot lately (probably from reading stuff in Triton Station, which is my only consistent go-to on matters astronomical). But that often leads to other articles (I’m retired!), which lead to other articles, etc…
Anyway, while still having dark energy in the back of my mind, I read an article which mentioned and then defined redshift. And unusually (I haven’t often seen this mentioned in layfolks’ articles on redshift), they mentioned in a casual way that light waves lose energy through redshift when traversing the cosmos (and I knew that, but why is this not often mentioned?). And more importantly, where does that energy go??? The amounts of energy “lost” must be, well, astronomical! What have I been missing here?
I looked around and found an Ethan Siegel explanation (sorry, I apologize :)) which concludes: Is Energy Conserved When Photons Redshift In Our Expanding Universe?
“So yes, it’s actually true: as the Universe expands, photons lose energy. But that doesn’t mean energy isn’t conserved; it means that the energy goes into the Universe’s expansion itself, in the form of work.”
So isn’t he saying that redshift energy is a (the) source of dark energy? It seems obvious, the scale of redshift energy input seems like it may be large enough. In fact he seems to imply it “Yes, photons lose energy, but that energy doesn’t disappear forever; the amount of energy loss (or gain, for that matter) adds up to exactly what it should in the expanding (or contracting) Universe.”
Yet I have never seen that mentioned as the source of dark energy – yet it seems to be the obvious first try at an explanation (surely way more obvious than that recent dark energy article). You have two things happening, equal (?) and opposite in nature, yet no one has thought of connecting them for 50 years (“Dark energy is one of the great mysteries of the universe!”)?? Ethan does connect them, but doesn’t get around to even saying “and this might be the source of that mysterious dark energy”.
What gives? Isn’t it blindingly obvious what the likely source of dark energy is?
LikeLike
Just from the top of my head, hopefully some professional can correct me if I’m wrong: when the photon is emitted, the universe is young. Let’s say the photon’s frequency is 3e8 Hz, to give it a wavelength of 1m. As the photon propagates, spacetime grows, and when it arrives to us let’s say spacetime has grown x10 (a totally made-up number). Now the photon’s wavelength is 10m (it is quite stretched!), and therefore its frequency is 10 times lower as well. And so is the “energy”, because we equate energy to frequency.
But when we do that, we assume spacetime doesn’t change! So what happened to that energy is only that it was concentrated over 1m, so to speak, and now it’s spread over 10m. But as you can see, nothing was lost.
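A tiny worked example of that bookkeeping, using the 1 m photon and the assumed factor-of-ten stretch from the comment above:

```python
h = 6.626e-34    # Planck constant, J s
c = 2.998e8      # speed of light, m/s

lam_emit = 1.0               # emitted wavelength, m (from the comment's example)
lam_obs = 10.0 * lam_emit    # observed wavelength after the assumed x10 stretch

E_emit = h * c / lam_emit    # photon energy at emission
E_obs = h * c / lam_obs      # photon energy at observation

print(f"f_emit = {c / lam_emit:.3e} Hz   E_emit = {E_emit:.3e} J")
print(f"f_obs  = {c / lam_obs:.3e} Hz   E_obs  = {E_obs:.3e} J")
print(f"fraction of photon energy 'lost' to the stretch: {1 - E_obs / E_emit:.0%}")
```

The emitted frequency of a 1 m photon is c/λ, about 3e8 Hz, and the observed photon carries one tenth of the emitted energy.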
LikeLike
Can I correct you if you’re wrong? 🙂 (I am Roj, but with a WordPress account). The way I think about it is, you have to remember that space hasn’t changed or expanded, at least inside galaxies, clusters, or superclusters. They are gravitationally-bound objects, stuck together until the end of time. If space was expanding around us in the galaxy, the galaxy would have flown apart long ago. It’s a concept I just recently got my head around (I hope correctly) – what is expanding is everything outside these gravitationally-bound objects, which, admittedly, is somewhat hard to visualize. But, for example, our galaxy and Andromeda are bound together by gravity for all of future time (and destined to merge in 4 billion years or so).
So what you have to imagine (thought experiment?) is 2 photons arriving at a detector in non-expanded space. The only difference is how far they travelled through expanding (not bound by gravity) space. The one that has travelled a shorter distance has more energy at the detector, the other less. There is no change in reference frame in galaxies and other space bound by gravity.
Not that it necessarily matters, but that is also not what Ethan Siegel is saying. He says work is being done, he just doesn’t say WHAT work is being done, which still leaves the question of where all that energy being lost by redshift is actually going. Assuming that Ethan explains the consensus model really well to us layfolks, I am left with the impression that this is basically a Great Mystery of the Universe, but that no one actually says so. It seems to be, basically, an ignored/overlooked topic (see no evil hear no evil?) So the question still remains for me, why has no one connected this disappearing energy to dark energy?
LikeLike
I need to correct an aspect of my explanation – just found out that superclusters are gravitationally bound. But the tentative wording used to describe this implied to me that they are very complex, and parts of the supercluster may be (or are likely to be?) gravitationally bound.
LikeLike
A big assembly of quantum objects gives rise to the emergence of the classical reality in which we observe everything around us. Complex assemblies of quantum/classical objects can’t be described by quantum mechanics, or even by a more general theory that has quantum mechanics as a particular case, because complex assemblies of objects present new irreducible properties that can’t be explained or predicted by theoretical constructs; only direct observations/experiments can discover them (objectivity).
It would be truly ridiculous to try to use quantum mechanics to explain the complex behaviors of living beings, or of social structures that are emergent properties of complex assemblies of quantum/classical objects.
By the same token, it is really not surprising that complex assemblies of stars, such as galaxies or galaxy clusters, exhibit emergent (system) behaviors that may (or may not) be irreducible to the behavior of simple systems.
But if there is a constant in the observable Universe, it is that complexity introduces unpredictability (randomness) when seen from the perspective of simple systems.
It is almost redundant to repeat P. Anderson’s wise words: More is different.
Imagine, if you can, a world where complexity gives rise to new irreducible properties, introducing a hierarchical structure at all levels, where no theoretical structure will ever be able to describe it completely because irreducible properties can only be discovered by observations/experiments, rendering theoreticians’ dogmatism intrinsically flawed. That world would be indistinguishable from our reality.
LikeLike
P.W. Anderson “More is Different.”
Anderson describes how the conglomerate’s new properties emerge from many of the same objects. One does not see these new properties on the individual objects.
1. But he also provides no proof that this is impossible.
2. Negative proofs are to be seen with a certain caution. For example, one can prove that there is no general solution formula for a 5th-degree equation (a theorem of Galois theory + …).
But at the end of the day, one can find all 5 roots of an equation of 5th degree (Newton’s method).
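A quick illustration of that point, using numpy’s polynomial root-finder (a numerical method in the same spirit as the Newton’s method mentioned) on an arbitrary quintic:

```python
import numpy as np

# Coefficients of x^5 - 3x^3 + x - 1 (an arbitrary quintic), highest power first.
coeffs = [1, 0, -3, 0, 1, -1]

roots = np.roots(coeffs)   # no general radical formula exists, yet numerics finds all five roots
for root in roots:
    print(root, "  |residual| =", abs(np.polyval(coeffs, root)))
```

The residuals show that all five roots are recovered to numerical precision, even though Galois theory forbids a closed-form radical solution for the general quintic.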
LikeLike
Chaitin already showed, by extending Gödel’s incompleteness theorems, that complexity is a source of incompleteness/irreducibility in the context of Gödel’s theorems, and moreover that complexity is pervasive, not a rare occurrence.
And it’s not hard to see a very strong correlation between Anderson’s claim and what we see all around us when applying theories (that hold in simple systems) to complex scenarios.
A system with a large number of discrete components tends to exhibit new irreducible-like properties and behaviors; hence the hierarchical nature of reality that can’t be denied.
These hierarchical structures are seen at all levels, strongly suggesting that Chaitin’s heuristic principle is at play one way or another: the results/predictions of a theory can’t be more complex than the theory itself.
Complexity appears to be a boundary for the explanatory/predictive power of any theory; this applies to Quantum Mechanics and “General” Relativity.
With this in mind a unified theory containing Quantum Mechanics and General Relativity seems to be meaningless because they have a limited range of applicability.
LikeLike
For those interested, there is another article on the philosophy of cosmology in the Stanford Encyclopedia of Philosophy:
https://plato.stanford.edu/entries/cosmology/
The article is slightly out of date, having been last updated in 2017, before the Hubble tension, the S8 tension, the JWST high redshift galaxies, et cetera came to the fore of cosmology, but nevertheless there are still a number of relevant issues the authors brought up about the current practice in cosmology, in particular the issue of focusing entirely upon FLRW or nearly FLRW models at the expense of alternatives in section 1.2 as well as the cosmological principle in section 2.3.
LikeLike
Now for something (not) completely different. Some new results for rich clusters in MOND: Virial theorem in clusters of galaxies with MOND, M López-Corredoira, J E Betancort-Rijo, R Scarpa, Ž Chrobáková, https://doi.org/10.1093/mnras/stac3117
Best wishes,
Maurice
LikeLike
Wasn’t this published 3 months ago?
LikeLike
Stacy said:
“Imagine if you are able that General Relativity (GR) is correct yet incomplete. Just as GR contains Newtonian gravity in the appropriate limit, imagine that GR itself is a limit of some still more general theory that we don’t yet know about.”
Isn’t that exactly what we might expect? In that Newton was working with a much more confined view of the universe, and his work provided an explanation/mathematical expression of how things work in the world he knew. His concepts also extended beyond that world, but in a limited way. Einstein came along and extended Newton’s work within the framework of the universe that was known at the time, and it worked very well, and again his concepts also extended beyond the current concepts of the universe at that time. But wasn’t Einstein working when “the known universe” was mostly the Milky Way? (Galaxies weren’t “discovered” until 1924 – Andromeda) When “nebulae” were just fuzzy objects in our telescopes, and we didn’t know if they were in the Milky Way or beyond? (at least in his youth and when he was developing the General and Special Theories of Relativity: 1905-1916)
So his work was focused in the universe he knew, which was all in the “deep gravity well”. And it worked very well – it explained things precisely and accurately even deeper in the well – the orbit of Mercury, and then light bending around the sun.
But did he ever consider the opposite end of things – a long way out of the deep gravity wells he was contemplating – out where there was little curvature of space/time? It seems unlikely, because we knew so little about that part of the universe, or even its existence, at the time. And Einstein definitely didn’t know that his theories/mathematical expressions didn’t work out in that direction (low gravity, low curvature of space/time), because no one knew that stars moved too quickly out at the edge of our galaxy until much, much later.
My point is that his concept of the universe, based on the knowledge of the time, was much more limited than ours, and we shouldn’t be surprised that his theories don’t apply perfectly in low space/time curvature areas. But I get the feeling that far too many scientists take his work as “the Word of God”! – “it passes every test we’ve thrown at it”, to lazy-quote Mr Siegel (no, it hasn’t passed tests in low gravity at all – it doesn’t explain galaxy rotation curves at all). In other words, dark matter folks have fixed in their minds that Einstein’s theories of gravity are fixed, immutable and perfect (like some religious fundamentalists see the bible), and that in essence explains why they are so resistant to even considering modifying gravity – they leap to other solutions because modifying gravity is not “religiously possible” for them.
They don’t remember how well Newton’s equations worked until we expanded our knowledge of the universe. And I don’t think they realize/contemplate how limited conceptions/knowledge of the universe was in Einstein’s time. In other words I don’t think it is resistance to modified gravity so much as a religious devotion to Einstein’s theories.
Could all this help in talking to dark matter folks, if you knew where their resistance is really coming from? It may be too late at this point, but it explains to me why all the comparisons of the pros/cons of dark matter vs modified gravity go nowhere. They don’t care; Einstein got EVERYTHING right, it can’t possibly be modified gravity, go away and don’t bother me with your (impossible) theories, data, graphs, and statistics!
LikeLike
It does seem to be a basic social dynamic that the more people involved, the more it’s about maintaining the stability (read: inertia) of the system, where fresh insight becomes more a threat to the status and careers of those involved than a clue about directions to consider. Education is about building on what our elders taught us.
The physics of politics and the politics of physics.
Occasionally though, a serious enough reality check intervenes.
LikeLiked by 1 person
That’s not what I am saying. I’ve read that, and think what you describe is a part of what’s wrong, but it is not the driving force in this particular instance. These folk don’t put themselves back in time and imagine Einstein’s world and his concept of the universe, which should lead to the realization that he likely didn’t even think about how his theory applied in low gravity – that didn’t really exist as something contemplated as a major part of the universe back then. And there was, of course, no testing of his theory of gravity on galaxy rotation curves, because that problem had not been discovered.
These folk don’t “put themselves in Einstein’s shoes” and realize that, while Newton was right within his conception of the universe but incorrect/incomplete beyond it, the same could easily apply to Einstein and his theories. In fact, it would be most logical to apply to Einstein what happened to Newton’s theories when our view of the universe expanded. The first obvious thing to do in that case would be to think “hey, this is similar to what happened to Newton’s theories; maybe we should look at modifying Einstein’s theories to fit our new concepts of the universe”.
But these folks didn’t “put themselves in someone else’s shoes”, or consider past history, but rashly concluded that Einstein couldn’t possibly be wrong. They didn’t think that he could possibly have missed something because, in Einstein’s time, areas of low gravitational potential were not a thing that anyone thought about very much when thinking about the universe. Even if folks back then did think about this, they didn’t know about any potential problems that would result from extending Einstein’s theories to areas of low gravitation.
Note: “these folk” = a lot of dark matter proponents, all those that don’t give modified gravity due consideration – if only “these folk” had considered history and “put themselves in someone else’s shoes” (i.e., someone in the early 1900s). Instead they ran off to develop what I call “magic band-aids” or “magic patches” to existing theories. Just develop a new theory that produces something wildly new which magically solves the problem, and “Bob’s your uncle”! These are especially good patches if there is likely no way to actually disprove them (as per Stacy’s comments about dark matter).
And I think I see way too many “magical patches” out there. Don’t get me going on existing theories (except for dark matter, of course) that I think are just magical patches… (Did I mention cosmic inflation? Or what Dr. Hossenfelder mentioned about dark energy just being negative pressure energy?) While I do not deny the value of these magical patches, I think we should see them for exactly what they are.
LikeLike
Keep in mind that the same dynamic played out with epicycles, which, as a model of our view of the cosmos, really were brilliant math. Then they had to invent a physical basis for it, with the crystalline spheres, as GR does with spacetime, and then could just keep adding more patches/cycles.
That’s why I think we need to dig into the psychological processes at work, which do go very deep into natural processes.
In evolution it is called punctuated equilibrium.
LikeLike
In my opinion, you are painting it a bit too black and white.
In the beginning, when missing mass problems started to appear, the first (and, given the performance of the technology, I’d say the logical) hypothesis was not that of dark matter as we currently understand the term, but of unobserved matter. They believed that the telescopes were missing large fractions of normal baryonic matter that was not emitting light (in the form of diffuse cold neutral gas, or emission outside the scope’s spectral band). They did not question the theory because they still had very good reasons not to trust their measurements 100%.
Only later did we see the shift towards the concept of dark matter, but I’d say this came from a generation for which it was already known and normal that there is a “missing mass problem”. Nobody formulated the problem as an acceleration problem, or in the context of very low spacetime curvature.
I find that, in general, when trying to solve a problem, humans are already biased in their search direction by the actual enunciation of the problem.
Pose it as a missing mass problem and you get dark matter. Pose it as an acceleration problem in low spacetime curvature… then maybe you get a GR modification.
The issue is that once researchers entered this path and built their careers on it, it became a vicious circle that self-sustains.
Old researchers are basically done – they cannot change. The middle generation still hopes to catch the rabbit at their career’s height – they basically already see themselves grabbing the rabbit by its ears. And the fresh ones – this is all they hear and see; this is their future world, so they’d better understand all the ins and outs really fast in order to build their careers.
In which of these generations do you see the one who puts himself in Einstein’s shoes and thinks – Hold on! GR was not built with very low spacetime curvature in mind!?
And that person to be the one to have a significant impact on the field such as to break the vicious circle?
For my part, I’m glad that I can see that there are a few voices (like Dr. McGaugh, Dr. Kroupa, and Indranil Banik as an example of the fresh generation, just to give some names) that seem to be able to make some dents in the circle. But for now, I don’t see those dents being big enough to break the loop.
LikeLike
Safe to say, the loop is spun too tight to unravel itself. So what outside event could crack it?
Wait until they find the next round of slightly further, mature galaxies and try arguing it only takes them a hundred million years to form…
Given the degree to which theoretical physics has already gone post-empirical, when the open parody becomes too obvious, possibly the funders and universities might start to realize there is some reputational risk.
Wormholes in computers seem to be a current thing.
LikeLike
No, I don’t think I am – I am just not including all that you stated, which I don’t disagree with. My point is about another way to get around the fixation of dark matter scientists on dark matter only (though it’s not likely to happen at this stage): present them with another perspective. I am not denying any of the above (which I assume everyone reading this blog already sees). This is in addition to all that – also show them that Einstein and General Relativity are not infallible even though “they have passed all tests thrown at them” (well, no, GR hasn’t passed all tests, because you folk ignore galaxy rotation curves as a possible test) – or, as Stacy would say, you folk just move the goalposts, or to be more precise, you just deny that the thing that looks like a goalpost is an actual goalpost.
brodix said “That’s why I think we need to dig into the psychological processes at work”
This is my attempt to dig into another possible process, to pry minds like Ethan Siegel loose from the “infallibility of General Relativity”. I don’t deny anything you or brodix said – I assume it – this is an additional thought which you and brodix perceive as a denial.
Reminds me of a guy at work long ago – we would have tremendous arguments about anything, until we realized we always came to agreement after a couple of hours. We realized that our thinking patterns were different, that we approached things from a different perspective, but we actually agreed on the core issues – it just took us a while to get there.
I don’t see that I’m disagreeing, so much as expanding on the thoughts as I see fit.
If my style seems a bit blunt, that’s the company I generally keep.
Well, no theory is ever infallible, but we can consider new theories replacing old theories as closer approximations to the truth. We expect that at some time in the future there will be a quantum theory of gravitation, because it would be strange if gravity were the only one of the fundamental forces not to be quantised. GR replaced Newtonian gravity because, at the time of its creation, the advance of the perihelion of Mercury's orbit could not be fully explained by Newtonian gravitation (and we had the equivalent of dark matter in the search for the planet Vulcan inside Mercury's orbit).
It is also essential that any replacement for GR produce very similar effects to GR where the curvature of space-time is high. In the past, theories of the fundamental forces have usually broken down at high energies, not at low energies. Having a theory break down at low energies (or in this case low accelerations) means that you cannot use perturbation techniques, one of the principal tools in the theoretical physicist's toolbox.
I recently had to look up GR to re-familiarize myself (I am a retired forester), and one of the first things it said was “gravity is not a force”! I understand why quantum physicists would want to make it one to fit into their theories, but they may just be looking for the planet Vulcan all over again. It seems to me it would be equally valuable to consider alternatives, and explain why gravity is not a force as such and therefore can’t be quantised. Exploring that alternative could actually lead to quantised gravity – ie sometimes the best approach to discovery is through the back door. And keeping an open mind to alternatives.
I am really reacting here to the closed-minded approach of some dark matter folks, which Stacy has described as akin to the thinking of religious fundamentalists, an environment he apparently grew up in.
Actually (loosely speaking), the low energy of higher hierarchical structures is what makes emergent properties possible: new properties that appear to be independent ("decoupled") from the properties of the lower-level, higher-energy elementary components.
See “Decoupling emergence and reduction in physics”.
https://www.researchgate.net/publication/280676040_Decoupling_emergence_and_reduction_in_physics
History is important?!
On this point, Stacy has a huge advantage.
(You’ll find that somewhere in this blog).
Oh, I agree. I was just re-phrasing/expanding some of the things Stacy has said, or trying to, with more of an explanation of "why" some dark matter folks seem to act like religious fundamentalists (which he has said, but not as wordily as me!).
Couldn't you also falsify FLRW cosmology if you showed that the projection of a curved UT universe onto a continuous flat FLRW universe required an amount of missing mass/energy in proportions matching those of dark energy and dark matter?
@budrap
Thank you for your statement concerning science.
We agree upon the fact that science requires an agnostic disposition.
You then go on to make trivializing statements concerning the uses of mathematics and logic. My entire point is that you do not know of what you speak.
And, for the record, it was you – not I – who introduced the philosophy of "here" and "now" into this thread. That is verifiable.
@mls
You then go on to make trivializing statements concerning the uses of mathematics and logic. My entire point is that you do not know of what you speak.
I don’t answer vague and unspecific complaints. If there is something I said that you object to, cite it and we can discuss the matter. Hand-wavy whining doesn’t count. As to your point, I can only tell you, based on the way you keep injecting diversionary philosophical digressions into a scientific discussion, that you do not appear to know the subject matter (cosmology) at all well since you refuse to discuss it.
And, for the record, it was you – not I – who introduced the philosophy of "here" and "now" into this thread. That is verifiable.
Are you not paying attention here? For the record, I introduced the term “now” in the context of a discussion of the nature of the Cosmos and Dr. Wilson added the clarifying “here” resulting in a more precise phrase “here and now”. Neither of us brought up the philosophy of those terms since their common meanings were sufficient to the task. Therefore both of your statements are false – and that is verifiable.
If you can’t address the subject matter under discussion why are you commenting? Try to pay attention and stay on topic.
Hear, hear!
@brodix
Comparison with the Catholic church?
My very first post to this thread, out of empathy for the situation in which Dr. McGaugh finds himself, had been Grothendieck’s criticism of scientific realism at the link,
https://sniadecki.wordpress.com/2021/10/24/grothendieck-church/
In conjunction with that, consider the following passage discussing reductionism,
“Unlike the physicalist reductionism that is the orthodoxy of today, the thesis of reductionism advocated by Carnap and Neurath did not require that all sciences reduce to physics.”
This is from the entry on reductionism,
https://iep.utm.edu/red-ism/
Notice the use of the word “orthodoxy” from the quoted passage.
The contempt you see within the science community for mathematics is doctrinal. When people point to blackboards of equations as justification for their knowledge claims, the only people who can question what they are being shown are people with a knowledge of mathematics.
@brodix
not thread, but first post to this site
mls,
Is the speed of light the metric against which this cosmic expansion is being measured?
As Einstein said, space is what you measure with a ruler, and it does seem this theory still uses the speed of light as the ruler.
I have a great deal of respect for Math. Less so, for equation salad.
mls said “The contempt you see within the science community for mathematics is doctrinal. When people point to blackboards of equations as justification for their knowledge claims, the only people who can question what they are being shown are people with a knowledge of mathematics.”
An example from my own background: mathematicians develop equations which they never bother to test to see how they behave, even theoretically (without data), much less with actual data.
There is an equation called the Chapman-Richards. It was developed "from and using biological growth principles" to explain sigmoid-shaped growth patterns – the normal pattern of growth for things like trees. (Interesting fact: did you know that trees represent something like 70% of all the biological mass on planet Earth? Persisting from year to year, wrapping new growth around a dead core, and living a long time really help here – sorry for the forestry aside. :))
So these guys developed this 3-parameter equation “based on biological principles” (so it has to be good!). It was to replace an existing sigmoid-shaped equation, the Weibull, but that was developed for engineers, so it has to be bad!
How does this equation behave when you try to use it with data, say tree growth data? It was pointed out that 2 of the parameters showed no independence at all. Fit your (nonlinear) equation, get the results, then use the results for those 2 fitted parameters as your new starting point – but just for fun, multiply one by say 10^5 and divide the other by 10^5. Wow! Statistically speaking, you get just as good a fit! Can you say incredible flexibility? Can you say an infinite number of low minima in nonlinear fitting? Can you say useless equation, because 2 of the parameters are so highly cross-correlated that they are meaningless if you expect to find any biological meaning in them?
Ok, but surely the 3rd parameter, the asymptote of the equation – that one had meaning, right? No one had actually pointed that out, but as I was new to nonlinear sigmoid equations, I tested the Chapman-Richards against the Weibull and an equation of my own devising. I right-censored some of the data I had. If that is an unfamiliar term, what it means is this: I had growth data for 150-year-old jack pine trees (probably as old as they got in that area before being windthrown, rotting out at the base, or just plain dying of old age), which I fitted; then I dropped the last 10 years of data (leaving 140 years of data) and fitted that; and so on, all the way down to 90 years, just past the inflection point of the data. How did the 3rd parameter of this wonderful equation behave? The statistically predicted asymptote dropped right down with the right-censored data – i.e., it always predicted an asymptote just beyond whatever data you fed it. And how did that horrible old Weibull behave? Reasonably well, with no consistent drop in the predicted asymptote (just variation, which might be expected). My equation behaved slightly better than the Weibull.
I am assuming here that mathematicians developed the equation, because surely a biologist would want to know how their equation behaved (I sure did when I was using any equation), and surely they would test it using data.
In summary, those 2 individuals had developed a wonderful new biologically-based equation – based on biological principles! – which behaved abysmally when you fed data to it, and whose parameters were meaningless. But in its defence, it seemed to be infinitely flexible as far as 2 of its parameters were concerned! End of rant.
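For anyone who wants to poke at this themselves, here is a minimal sketch in Python (synthetic data and made-up parameter values – not the original jack pine measurements): it fits both growth curves to progressively right-censored data and prints the fitted asymptotes, so you can watch how each one behaves as the oldest observations are dropped.

```python
# Minimal sketch: fit the Chapman-Richards and Weibull growth curves to
# progressively right-censored data and compare the fitted asymptotes.
# The "measurements" below are synthetic placeholders, not real tree data.
import numpy as np
from scipy.optimize import curve_fit

def chapman_richards(t, a, b, c):
    # a = asymptote, b = rate, c = shape
    return a * (1.0 - np.exp(-b * t)) ** c

def weibull_growth(t, a, b, c):
    # a = asymptote, b = scale, c = shape
    return a * (1.0 - np.exp(-b * t ** c))

rng = np.random.default_rng(0)
age = np.arange(5.0, 151.0, 5.0)          # ages 5..150 years
height = chapman_richards(age, 25.0, 0.03, 1.8) + rng.normal(0.0, 0.4, age.size)

for cutoff in (150, 140, 130, 120, 110, 100, 90):
    keep = age <= cutoff                   # right-censoring: drop the oldest years
    p_cr, _ = curve_fit(chapman_richards, age[keep], height[keep],
                        p0=(30.0, 0.02, 1.5), bounds=(0.0, np.inf))
    p_wb, _ = curve_fit(weibull_growth, age[keep], height[keep],
                        p0=(30.0, 0.001, 1.5), bounds=(0.0, np.inf))
    print(f"data to {cutoff:3d} yr:  CR asymptote = {p_cr[0]:6.2f}   "
          f"Weibull asymptote = {p_wb[0]:6.2f}")
```

Whether the Chapman-Richards asymptote chases the edge of the data the way I described obviously depends on the data you feed it; the point of the sketch is just how cheap this kind of test is to run.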
I forgot to mention that after my testing I tried to engage an expert in fitting nonlinear equations (David A. Ratkowsky) in a discussion about the behaviour of equations as it relates to local minima and the nonlinear parameter-fitting space (which was an optional output of the fitting process). No interest. I presume because he is (/was – this was back in the early '90s) a mathematician/statistician, interested only in theory and not in how things worked.
@rojmlr
I work in construction. There is a bad joke I tell my coworkers about a salesman and an engineer. It is actually an analogy from a real-life experience in a different profession.
The salesman meets the engineer in a tavern. The engineer tells the salesman how he went and looked at a large sample of hot tar roofs and discerned nine rules for a good roof. So, the engineer continues, he went and designed a new roofing system that satisfied all nine rules at a cheaper price.
The salesman thought, “I can sell that!”
So they formed a partnership and sold new roofing systems far and wide. After about two years, all of these roofs started to have leaks.
So, I then ask my coworkers if they know why the roofs leaked. Usually, they say “No, why?” I respond, “The tenth rule.”
“Oh! What is the 10th rule?” they ask.
To this I reply, “I don’t know it either.”
My reply to you is that arrogance is a human trait.
Almost every mathematician I have ever dealt with is fully cognizant that mathematical objects are fictional. But, as I pointed out to budrap, if one actually implements logics that do not support "realism," one is vilified.
It isn’t as if this is not being studied,
https://plato.stanford.edu/entries/fictionalism-mathematics/
https://plato.stanford.edu/entries/logic-free/#fiction
https://plato.stanford.edu/entries/nominalism-mathematics/
It is that anything which could undermine scientific realism is dismissed (in the name of empiricism).
Don't get me going on engineers! (Ok, there are nitpicky folks out there – so some engineers, not all engineers.) There does seem to be a characteristic of folks who choose engineering that trends toward arrogance (they claim expertise in areas way beyond their area of competence).
First example: I worked for an environmental consulting firm, and an engineer in the firm brought me in on a case. Details aren’t important, but he basically told me nothing about it so I wouldn’t be biased, showed me the situation, then asked me what I thought happened. I told him, and it fitted exactly, down to the year it happened (and there was biological proof). So his client brought it to trial, and I testified. Then the engineer for the other side got up and testified (basically that what I had described as tree roots were an upside down tree). Our lawyer made him look like a complete idiot (“Can you say that you know how to tell the difference between a tree branch and a tree root?” “Well, no…”) Why he got up after me and said that after hearing my evidence blows my mind. Guess which side won the case.
So case one gives you a good engineer and a bad engineer. (I was very impressed when I saw what "my" engineer was doing – giving me no information to bias my assessment.)
Second: Behind the Parliament Buildings in Ottawa is a VERY steep slope (45 degrees?), partially wooded or covered in shrubs, and everything seemed to be heading toward the bottom, where there is a very popular bike path/walking trail along the Ottawa River. There was an assessment done, and the engineering solution was a 20-30 wall along the hill side of the path. Luckily there was a good engineer involved – no way did he want to build a wall along a popular and scenic path! So he came to the firm I mentioned above asking for another solution, and we gave him one based on European (biological) methods of handling steep slopes. So, good engineer!
Third: As my wife and I were moving into the condo unit where we now live, an engineer was moving out. He had been a major force behind fixing things. Unfortunately, his way of fixing things was to form a theory first, then start digging! No need to gather evidence! Several expensive attempts to fix a leak resulted in lots of money spent and continued leakage. After he left they turned it over to our local handyman, who had been recommending all along to "pull off some drywall and trace the source of the leak first". So he did, and the leak was fixed. We are now finding that this guy's approach to fixing and repairs is costing us a lot of money, because one thing he seemed to like to recommend was patching over problems. But we have another engineer in the condo who is still here. He apparently designed airports all over the world. I went to consult him as a board member on repairs and his attitude was "I'm just an engineer. I don't know anything about fixing things. What you propose sounds reasonable, so if it makes sense to you, go ahead." (I told him I had done repairs on my 2 houses for 36 years before moving to the condo.) So bad engineer, good engineer.
But I tend to rant/vent, because venting is supposedly good for you! (well, I read that somewhere once, and liked it, so I’m still running with it).
So when I say engineers/mathematicians/scientists do stupid things, what I am REALLY saying is they are all VERY SMART people, so why do some of them do this kind of thing!!! (rant, rant)
@rojmlr
What I would rather talk about, though, are the consequences of “hiding” mathematical details with algebraic equivalence classes.
Measure theory underlies continuous probability. The desired properties for a measure cannot all be realized. One such property, translation invariance (for a measure defined on all subsets), is incompatible with the axiom of choice. Another, involving additivity, can or cannot be realized depending upon the status of the continuum hypothesis. Indeed, Freiling's axiom of symmetry is based upon probability intuitions and implies the falsity of the continuum hypothesis because of this relationship.
Now, once integration with respect to Lebesgue measure is established, the integral is *almost* enough to define "a space." There is just the problem of "null sets," which is addressed by taking an equivalence class.
The resulting space is a “normed linear space of functions.”
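To make the construction concrete (this is just the standard textbook statement, nothing specific to the article): for a measure space $(X,\mu)$,

$$\mathcal{L}^2(X,\mu)=\Big\{f : \int_X |f|^2\,d\mu<\infty\Big\}, \qquad \|f\|_2=\Big(\int_X |f|^2\,d\mu\Big)^{1/2},$$

is only a seminormed space, because $\|f\|_2=0$ for any $f$ that vanishes almost everywhere. One therefore passes to equivalence classes, $f\sim g$ iff $f=g$ $\mu$-almost everywhere, and $L^2(X,\mu)=\mathcal{L}^2(X,\mu)/\!\sim$ is the "normed linear space of functions" (really, of equivalence classes of functions) in question.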
What happens to physics when you unwind this little “math trick”?
In 2019, a Quanta article hailed such tricks as an accomplishment. There is a GIF in that article that basically "identifies" an entire spatial region with a point "continuously," so that they are "the same."
Believe that? Notice how any concept of trigonometry is made meaningless?
People who study category theory are indoctrinated with how “the other guys” broke science.
The problem is that you need such “fictions” to apply mathematics to “spaces” you cannot visualize.
And, you cannot flesh out details by disparaging everything that mathematicians and philosophers do.
@mls “What I would rather talk about, though, are the consequences of “hiding” mathematical details with algebraic equivalence classes.”
That’s nice, but I’m a retired forester, so that conversation is not going to be with me! 🙂
“And, you cannot flesh out details by disparaging everything that mathematicians and philosophers do”
I don’t disparage everything mathematicians do (I don’t address philosophy because it is a foreign language to me). My point is that some mathematicians need to broaden their horizons “to flesh out details”. Professional and/or scientific silos are not a good thing.