I read somewhere – I don’t think it was Kuhn himself, but someone analyzing Kuhn – that there came a point in the history of science where scientists diverged: they no longer agreed on what counts as a theory, what counts as a test of a theory, what even counts as evidence. We have reached that point with the mass discrepancy problem.

For many years, I worried that if the field ever caught up with me, it would zoom past. That hasn’t happened. Instead, it has diverged towards a place that I barely recognize as science. It looks more like the Matrix – a simulation – that is increasingly sophisticated yet self-contained, making only parsimonious contact with observational reality and unable to make predictions that apply to real objects. Scaling relations and statistical properties, sure. Actual galaxies with NGC numbers, not so much. That, to me, is not science.

I have found it increasingly difficult to communicate across this gap, which is built on presumptions buried so deep that they cannot be questioned. One obvious presumption is the existence of dark matter. It has been fueled by cosmologists who take it for granted and by particle physicists eager to discover it, who repeat “we know dark matter exists*; we just need to find it” like a religious mantra. This is now so deeply ingrained that it has become difficult to convey even the simple concept that what we call “dark matter” is really just evidence of a discrepancy: we do not know whether it is literally some kind of invisible mass, or a breakdown of the equations that lead us to infer invisible mass.

I try to look at all sides of a problem. I can say nice things about dark matter (and cosmology); I can point out problems with it. I can say nice things about MOND; I can point out problems with it. The more common approach is to presume that any failing of MOND is an automatic win for dark matter. This is a simple-minded logical fallacy: just because MOND gets something wrong doesn’t mean dark matter gets it right. Indeed, my experience has been that cases that don’t make any sense in MOND don’t make any sense in terms of dark matter either. Nevertheless, this attitude persists.

I made this flowchart as a joke in 2012, but it persists in being an uncomfortably fair depiction of how many people who work on dark matter approach the problem.

I don’t know what is right, but I’m pretty sure this attitude is wrong. Indeed, it empowers a form of magical thinking: dark matter has to be correct, so any data that appear to contradict it are either wrong or can be explained with feedback. The usual trajectory has been denial first (that can’t be true!) and explanation later (we knew it all along!). This attitude is an existential threat to the scientific method, and I am despondent in part because I worry we are slipping into a post-scientific reality, where even scientists are little more than priests of a cold, dark religion.


*If we’re sure dark matter exists, it is not obvious that we need to be doing expensive experiments to find it.

Why bother?

60 thoughts on “Divergence”

  1. Not dark matter in the sense that most cosmologists and astronomers use the term, but I found this report on black holes in the Palomar 5 cluster interesting: https://www.nature.com/articles/s41550-021-01392-2

    It raises the question whether there are many globular clusters consisting entirely of black holes, having ejected all their remaining stars, and whether these can be detected. Microlensing and gravitational waves from merging black holes seem the most likely possibilities.

    1. As far as I understood (I don’t have direct access), that paper makes a prediction regarding the existence of binary stars in the cluster – i.e. only short-period binaries may be found, because the rest are perturbed by the black holes.
      The thing is that MOND and LCDM should make different predictions about this; namely, MOND should constrain the periods to even shorter intervals. I would say this would be a better/simpler test for MOND than wide binaries.
      On another note – the recently discovered comet 2014 UN271 seems to have an aphelion in the 45–50k AU range, with a period in the millions of years. I wonder (again, as I already discussed long-period and/or hyperbolic-orbit comets on an older post) whether some predictions can be made for the comet, considering a MOND regime.
      Since the orbital period in MOND is expected to be larger, the comet spends more time away from the Sun, and its surface could show signs of this. Given the size of the comet (and maybe a space probe to it – ESA’s Comet Interceptor probe?), I’d expect that some decent data on the surface composition could be gathered (or even images).

      1. It is difficult to see how a comet, not even as distant as the nearest star, could be in the MOND regime! We need galaxy-size distances to get down to the small acceleration a_0.

        1. The acceleration imparted by the Sun at 45,000 AU is roughly 2.95×10^-12 m/s^2, around 40 times lower than a0, so any object that far out would be in the deep-MOND regime if the Sun were isolated.
          But as Dr. McGaugh said in the previous discussion, the vicinity of the Sun experiences around 1.8 × a0 because of the galactic neighborhood, so the external field effect plays a role.
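
          For concreteness, here is a minimal sketch of the numbers in play (assuming the canonical a_0 = 1.2×10^-10 m/s^2 and the ~1.8 × a_0 external field quoted above; the rest are standard constants):

          ```python
          # Compare the Sun's Newtonian pull at 45,000 AU with the MOND
          # acceleration scale a_0 and the Galactic external field (~1.8 a_0).
          G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
          M_SUN = 1.989e30   # solar mass, kg
          AU = 1.496e11      # astronomical unit, m
          A0 = 1.2e-10       # canonical MOND acceleration scale, m/s^2

          r = 45_000 * AU                # quoted aphelion of 2014 UN271
          g_newton = G * M_SUN / r**2    # ~2.9e-12 m/s^2 from the Sun alone
          g_ext = 1.8 * A0               # external field near the Sun

          print(f"g_N   = {g_newton:.2e} m/s^2 (a_0/g_N ~ {A0 / g_newton:.0f})")
          print(f"g_ext = {g_ext:.2e} m/s^2, so the external field dominates")
          ```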

  2. I can only imagine how frustrating this must be.
    Back when I was in the Navy someone taught me that every new idea goes through three steps:
    First: someone thinks up the idea.
    Second: someone makes the idea widely known.
    Third: we wait for all the old men to die.

    Maybe we’re stuck on step 3? Those who dedicated their lives to finding dark matter are likely to be loath to accept it was all for naught.

    Hang in there! Truth will out.

    sean s.

    1. Yes, there is an old saying in science to this effect:
      “In science, all new and startling facts must encounter in sequence the responses

      1. It is not true!
      2. It is contrary to orthodoxy.
      3. We knew it all along.”

      —L. Agassiz (paraphrased)

  3. Kuhn talked about incommensurability: two paradigms can become so different that people are no longer able to communicate with each other. It appears as if the two people are talking past each other.

    But more relevant here, Kuhn suggests that during crises scientists tend to suddenly become interested in philosophy.
    There must be a serious crisis in cosmology: astronomers are publishing in philosophy journals!

    1. I suspect that some are publishing in philosophy to justify why they are ignoring the evidence, and others are explaining why they cannot ignore the evidence.

      sean s.

  4. The 1998 SN1a mistaken interpretation of observables L+V using a combined biased physical + biased empirical math model was explained by Dr Tuomo Suntola’s Dynamic Universe (DU) extended GR/QM UNBIASED physical model – with better or equal fit by Gaussian least squares estimation. This math problem is known as Moritz-Krarup collocation when applied to the Physical Geodesy of the Earth gravity field, in terms of the extended Gauss-Markov solution of singular systems of linear and nonlinear equations by array algebra and loop inverses. It was ruled out as thought-experiments in theoretical physics, as the empirical high-level pyramid of DU nested energy frames requires numerical least squares equations with millions or billions of parameters, in the fashion of the Gaia cosmic image (photogrammetric) mapping reconstruction of a 4/5-D dynamic cosmic model – e.g., by removing the bottleneck of all GR-based cosmologies, the Big Bang fancy.

  5. It is sociological. When science was conducted by individuals and small groups of people, who were mostly motivated by curiosity, it was more fluid, but when there are lots of people and status enters the picture, then orthodoxy starts to rule.
    The question is, how deep does it go? For example, is spacetime real, or is it a modern version of the crystalline spheres? Obviously questions like that are going to dig much deeper into the scientific cultural psyche than MOND versus dark matter, but what if it really is a valid issue? Maybe a lot of the issues – flatness, smoothness, the curve in the rate of redshift (a.k.a. dark energy), etc. – are connected and are not just separate problems.
    So we have that sociological problem, as the current generation of theorists have spent their careers building elaborate mathematical castles on a foundation that predates their lives, not just careers.
    If I may make a prediction, it is not going to change willingly. The broader economic issues bubbling up through the academic world, such as between the tenured and untenured, are going to manifest as conceptual conflicts, questioning orthodoxy in unexpected ways, as the orthodox feel inclined to double down on the inerrancy of their models. Which magnifies the tension and creates deeper fissures.
    Give it a couple of decades.
    Even academia functions according to nature.

    1. Excellent point! It seems that young scientists are being taught that orthodoxy is the only way to achieve a successful career. Their very livelihood depends on it.
      However, this has been going on since at least 1990, so three decades already, and very little has changed.
      My best guess at this point is that some sort of huge breakthrough in instrumentation will be necessary to finally demonstrate, with no wiggle room, that DM is irretrievably flawed. There will still be die hard supporters of course, but younger scientists will seize the opening to pursue alternate paths of research.
      I am perhaps overly optimistic, but this is how it looks to me.

      1. I agree, young scientists are being coerced into orthodoxy as the only way to achieve a career, and unfortunately they are not presented with any alternative scientific method! As a result, most of those who try other approaches do not have any training in science, but that doesn’t stop them from thinking they can solve every problem. It’s easy to come up with a model that explains some observations, but few can make it a science. These cranks come in large numbers, making it difficult to find a good idea in all that noise.

        As for instrumentation saving the day, you are certainly overly optimistic! It will never be possible to build an instrument that detects ‘zero’ to prove that DM does not exist. There is always background noise and it is always possible to increase the sensitivity/range of the detector.

        Instead, it’s time we stop trying to detect dark stuff, admit that we can’t explain what we don’t see and positively seek more accurate measurements of what we can see.

        1. What if the problems are conceptual? Are we sure we have eliminated all physiological biases?
          Is time a narrative dimension, like a book, or is it an effect, like temperature? As in time is frequency, events are amplitude.
          Are we trying to develop a static description of an inherently dynamic process, where trying to peer through all that blurry fuzziness, trying to extract some foundational clarity, some ultimate quantization, is more a function of how our minds extract images and concepts from an underlying dynamic than of how that dynamic actually flows?
          If so, then it’s the same dead end as epicycles, and we will keep adding patches for every discrepancy.
          I think it’s safe to say the current generation of theorists seems happy to do that, rather than dig too deeply into the orthodoxy, but eventually that edifice will crumble, if only because the money flowing to post-empirical theorizing dries up.

        2. When I mentioned improved instrumentation, I did not mean DM detectors or the like. You are certainly correct that we can never detect ‘zero’. Rather, what I had in mind was things like precise wide-binary observations and other tests of DM predictions/non-predictions that would show up DM as the house of cards that it is.
          Yes, for the open-minded there is already enough evidence to do this, but I suspect it will take something shockingly obvious to shake science loose from the death grip of DM orthodoxy.

  6. Maybe a story from the technology students’ May Day newspaper of TKK (= HUT = Aalto Univ) in the mid 1960s illustrates this topic, which had also been going on in the geodetic literature since the 1930s. Prof. Olli Lokki of Applied Math had a new car and driver’s license but missed the sharp curve in front of the main TKK building. A technology student of his went to help him, asking: “Why didn’t you iterate?” Lokki, like Prof. McGaugh, could reply: it didn’t converge but diverged!

  7. Why does the BB cause the GR-based cosmology divergence of H_0 with the wrong sign (acceleration vs. deceleration)?

    Today’s c = C4_aver = about 300,000 km/s is assumed by GR to start from a small value (even after assumed inflation) near T4, almost 14 B years ago. So, today’s observed dC4/C4 (or dR4/R4 = -2 dC4/C4) per time unit = km/s/Mpc = 1/s APPEARS accelerated from the C4 = 0 value if linearly distributed over the total age span of 14 B yrs – with guesses about C4 rates and R4 ranges for some inflationary/deceleration periods.

    Suntola’s DU energy balance law provides super-symmetry wrt the BB epoch of C4 = max, R4 = min, such that the C4 value at z=3 was 2 times today’s C4, or C4(z=3) = sqrt(1+z) C4(z=0) = 600,000 km/s. Or the change of C4 was -300,000 km/s vs. +300,000 km/s for the GR-based BB!

    The most difficult part of DU is understanding its dynamic time T4 nonlinear model, which changes today’s age from the GR-based assumed 13.8 B yrs to 2/3 × 13.8 = 9.2 B yrs in terms of the present atomic clock ticking rate at the cosmic distance variable R4 = 13.8 B ly. This has caused some 26+ GR interpretations that are valid only in local Earth/Sun centered energy frames at z=0 – getting lost in cosmic dimensions z>0. This GR-based BB mistake is bigger than the Earth/Sun centered view change of some 400-500 years ago. No wonder DU is resisted in cosmology even more than the similar 4-yr short-term mistake in today’s politics/Big Lie!

  8. What is the main reason for DE/DM?

    c_emit = C4_emit = sqrt(1+z) C4_today at reception at z=0. BUT EM energy m c_emit^2 PRESERVES its (1+z)-times-higher energy level/quanta during its travel over the optical distance D = R4_today – R4_emit, although its wavelength increases in proportion to the R4 expansion scale. The GR-based prediction model of SN surface illumination therefore diluted bolometric SN values by a factor (1+z) due to the ‘old Planck constant’ embedding the variable c_emit. Another dilution factor (1+z) of a distant EM energy source in GR was caused by the mistaken distance R4 and time T4 concepts of spacetime vs. the 4/5-D DU structural system. The resulting BIAS of the GR physical model was compensated by the BIASED epicyclic GR ‘signal’ part of collocation & loop inverses via Einstein’s Lambda blunder – and by the belief that even GR blunders can ‘brutally kill’ challengers in opinion/voting-based polls using ‘peer reviews’ of the ruling academic party or the emperor without clothes.

    The array algebra function theory of my 1972-74 PhD and ScD dissertations at KTH resulted in a general Theory Of Estimation (TOE) of loop inverses, where the interpolation model – as the sum of an evaluated biased physical model (the trend function of collocation) and a biased empirical or epicycle model – ALWAYS produces unbiased estimates of the adjusted observables… fooling both physical and empirical model analysts, as proven for the photogrammetric self-calibration of Bundle Adjustment by D.C. Brown in a 1974 ISP Comm III Stuttgart Symposium paper, and by the 3 Dec 2020 Gaia eDR3 report expanding the central projective model of R.A. Hirvonen’s satellite photogrammetry and its computational ‘fast real-time’ 4/5-D replacement model of array algebra/calculus.

  9. A good friend has developed a model that describes both the theory of general relativity (at subgalactic distances, as he likes to stress) and the standard model of particle physics. His model (one of the very first to combine these two parts of physics) yields only the existing particle spectrum. Every time he describes the model to particle physicists, they ask: what does the model predict about dark matter? My friend then answers: the model predicts that no elementary dark matter particle exists. The reaction usually is: then the model is wrong. My friend answers: but it agrees with all observations! This does not help. “Divergence” has reached particle physics as well.

  10. Hi Stacy, Thank you for this follow-up post on divergence.
    I have a comment about cosmology that may be slightly off topic. Many scientists say that our Universe is both isotropic & homogeneous, and argue that the clumpiness of matter disappears at very large scales, i.e. they apply isotropy & homogeneity to the matter distribution. My understanding is that, originally, isotropy & homogeneity applied to the geometry alone, and not to the matter distribution. I’m not sure whether or not observations have shown the geometry to be isotropic & homogeneous. If this is the case, then the FLRW metric is a good assumption (for cosmology) and the left-hand side of Einstein’s General Relativity (GR) equation is on reasonably solid ground. Personally, I am happy with GR (i.e. that the geometry is determined by the matter), and I suspect the problem lies with the right-hand side of the equation, i.e. with the nature of the energy-momentum tensor and the matter distribution. It seems to me that the geometry and the matter distribution are separate concepts that somehow have been conflated. Wearing your observational cosmologist hat, perhaps you could comment on what observations tell us about the geometry of the Universe – I understand this is probably a topic for a separate post.
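
    For reference, a sketch of the geometric statement at issue (just the standard textbook form, in conventional notation): the FLRW line element encodes homogeneity and isotropy of the metric itself, while Einstein’s equations couple that geometry to whatever sits in the stress-energy tensor.

    ```latex
    % FLRW line element: homogeneity & isotropy of the geometry
    ds^2 = -c^2\,dt^2 + a^2(t)\left[\frac{dr^2}{1-kr^2}
         + r^2\left(d\theta^2 + \sin^2\theta\,d\phi^2\right)\right]
    % Einstein's equations tie this left-hand-side geometry to the
    % right-hand-side matter content:
    G_{\mu\nu} = \frac{8\pi G}{c^4}\,T_{\mu\nu}
    ```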

  11. First of all, thank you for opening the comments again!
    Regarding your diagram – today’s entry (the 9th of July) in Phil Plait’s blog Bad Astronomy, about some refinements to the distance-ladder calibration (triggered by the Hubble tension), reads: “In the end the problem almost certainly isn’t with the Universe […] but with our observations of it.”
    Exactly as in your diagram: it’s not our understanding but our observations :(.

  12. Another paper much more naturally supporting modified gravity than DM, although not described that way: Pavel E. Mancera Piña, et al., “A tight angular-momentum plane for disc galaxies”, arXiv:2107.02809 (July 6, 2021) (accepted for publication in A&A Letters).

  13. I wonder if DM supporters think “MoND MUST be crackpot, because if it is not then that means I am a crackpot.” They may not even be aware that this frightening prospect is at the root of their thinking.
    I am not a professional in Astronomy or Physics, so I have not interacted enough with such professionals to form an opinion on whether they actually think this way or not. Perhaps the actual professionals on this forum can comment?

  14. I’ve been very keen on the idea that there might be a causal connection between the small (~100 micro-g) acceleration signals detected in over 250 individual runs with a type-1 superconductor at the Austrian Research Center (ARC) and the consistently too-high angular accelerations seen in cosmological structures. These experiments were conducted under the direction of Dr. Martin Tajmar in the 2003-2006 time frame. Because of the great care this research group took to eliminate false positives, these experiments certainly constitute the most credible evidence for a linkage between condensed matter and ‘small’-magnitude gravity-like forces that, in reality, are some 30 orders of magnitude larger than allowed by General Relativity.

    Others may recall the heyday of the “gravity underground”, which came into prominence with claims by the Russian ceramics engineer Eugene Podkletnov in the mid 90’s of detecting a reduction of the Earth’s gravity field of some 0.05% above a spinning, high-temperature, type-2 superconductor. Naturally this elicited tremendous interest both within and without the scientific community, the former being understandably very skeptical of such claims. As an engineering technician, then working in the oceanographic field, I was one of those non-scientist ‘gravity undergrounders’ intrigued by these reports, and I endeavored to replicate some of these results using hobbyist-grade, Yttrium-Barium-Copper-Oxide (YBCO) superconductors.

    By the early 2000’s Podkletnov, along with his physicist colleague Giovanni Modanese, claimed to have conducted even more sensational experiments in which a rubber pendulum, 150 meters distant, was given a push, setting it swinging, from a transitory, gravity-like, “force beam” generated by their “Impulse Gravity Generator”. This device reputedly consisted of an YBCO “emitter”, some 10 cm. in diameter in which some magnetic flux became trapped via a solenoid coil surrounding it prior to the experiment. But there was also a larger solenoid wrapped around the cylindrical chamber. Some 15 to 40 cm. behind the emitter an anode of similar diameter was placed. Two million volts would be discharged between emitter and anode resulting (allegedly) in a visible “flat” discharge that transited the space between these components. During each of these discharges the pendulum would be pushed away, setting it in oscillation.

    Continued below…

  15. This momentary separation between emitter and rubber-ball pendulum is reminiscent of dark energy, which accelerates galaxies away from one another, but on steroids – assuming, of course, that these experiments are not some kind of hoax. If this effect is genuine it would be consistent with a quite speculative idea that I concocted that would also explain the anomalously high accelerations in cosmic structure without the need for dark matter, which I won’t elaborate on, to comply with the blog rules. But in 2015 Martin Tajmar and I. Lorincz carried out experiments in Dresden, Germany, to investigate the Podkletnov/Modanese claims, which they detailed in a paper titled “Null Results of a Superconducting Gravity-Impulse-Generator”. However, since experiments by Claude Poher in France, and independently by myself in the United States, along the same lines, were simpler to implement, the Tajmar/Lorincz team chose to replicate these, rather than the far more elaborate experiments of Podkletnov/Modanese.

    Both Poher and I directly discharged high voltages through YBCO superconductors, my setup reaching a maximum of 1 kV. On several occasions acceleration signals were detected by the ADXL203 (1 mg resolution) accelerometer chip, which I had enclosed in an aluminum bud box to isolate it from electromagnetic pulses (EMPs). Poher also had similar detection events. In my system the sudden expansion of the cryofluid would generate an acoustic ‘pop’, which set off the accelerometer. So I put together a circuit using a 556 dual timer that introduces a 350 microsecond delay on the capacitor bank discharge from the moment the RF transmitter button is pressed to trigger a thyristor to dump the accumulated charge through the YBCO superconductor load. This allowed the trace on my Tektronix 465B scope (set for single sweep, 100 microsec./div) to reach 3 and ½ divisions on the screen where the anomalous signal should appear (if it really exists), while the acoustic impulse should show up another 3 and ½ divisions further along.
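
    To make the timing scheme concrete, here is a minimal sketch of the scope arithmetic (the 350 µs delay, 100 µs/div sweep, and “another 3½ divisions” figure are taken from the description above; the sound-speed estimate of the implied acoustic path is my own assumption):

    ```python
    # Where the prompt (possibly anomalous) signal and the delayed
    # acoustic 'pop' should land on a single-sweep scope trace.
    SWEEP_US_PER_DIV = 100.0      # Tektronix 465B sweep setting
    DISCHARGE_DELAY_US = 350.0    # 556-timer delay after trigger press
    POP_EXTRA_US = 350.0          # pop arrives ~3.5 divisions later

    prompt_div = DISCHARGE_DELAY_US / SWEEP_US_PER_DIV
    pop_div = (DISCHARGE_DELAY_US + POP_EXTRA_US) / SWEEP_US_PER_DIV
    print(f"prompt signal at {prompt_div:.1f} div, pop at {pop_div:.1f} div")

    # Implied acoustic path if the pop travels through air at ~343 m/s:
    print(f"implied path ~ {343.0 * POP_EXTRA_US * 1e-6:.2f} m")  # ~0.12 m
    ```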

    Unfortunately the Dresden team were unable to reproduce the ‘positive’ detections that both Poher and I reported. They concluded that our detections were the result of EMP’s for which we had inadequately shielded our sensor systems. I’m convinced that they are absolutely right, as they also encountered EMP false triggers in their earlier experiments. It was only after elaborate measures to properly isolate their electronic sensor system that they eliminated this noise source. While these negative results reduce the experimental ‘parameter space’ of any anomalous acceleration signals from superconductors, and by extension a possible link to astronomy’s dark sector, I don’t think it’s completely ruled out. However, it’s necessary to be cautious in pursuing this angle for solving the DE/DM puzzle, as one can easily fall into the same confirmation-bias trap that dark matter advocates have been in for a long time.

  16. Dr McGaugh,

    “… I worry we are slipping into a post-scientific reality, where even scientists are little more than priests of a cold, dark religion.”

    The only differences between us on this matter are ones of timing and scope. It appears to me that the slippage into a post-scientific reality began in the late 1970s, with the elevation of the dark matter and quark hypotheses to established fact, without benefit of any supporting empirical evidence. At this point a post-scientific culture is well established in the academic community.

    You also seem reluctant to broaden your critique beyond dark matter, and given the post-scientific regime in place, that reluctance is entirely understandable. Having no socio-economic ties to the academy, I feel it necessary to point out, however, that the exact same reality-denying problem (perpetuating the DM issue) pervades theoretical physics across cosmology, particle physics, and quantum theory.

    All three overlapping areas are described by models that bear little, or no, resemblance to observed reality. All three are suffused with model-dependent inferences, of entities and events, that are unsupported by any empirical evidence whatsoever. None of LCDM’s structural elements, for instance, are part of observed reality; the big bang event, inflation, causally-interacting spacetime, dark matter, and dark energy are all undetected or unobservable in physical reality.

    Those structural elements of the standard model of cosmology exist only in the model, and in the minds of those who choose, or were inculcated, to believe (without empirical evidence) in the standard model. The current situation is not worrisome simply because the scientific community has adopted a seriously flawed model; mistakes are to be expected in a truly scientific inquiry, after all.

    No, the problem is that several generations of scientists have now been trained to accept a post-scientific, model-centric, post-empiricism as the norm for scientific research. That problem won’t be solved by funerals alone, since the scientific paradigm itself has been overturned at the educational level. The fetishistic study of mathematical models has now effectively superseded the study of physical reality as the fundamental basis for doing “science”. The new Dark Age is not coming, it’s here.

  17. budrap,

    It would be borderline understandable if these models were somewhat coherent, or at least not self-contradictory, but apparently that doesn’t matter if patching the model requires it.
    The most glaring example is using “spacetime” to say space expands, when the reason for doing so, redshift, means intergalactic light is not constant relative to intergalactic space.
    Whenever I point this out, the standard response is, “Go read a book.” One such book referred to was Edward Harrison’s Cosmology. In it, the presumption seems to be that lightwaves are just some wavy line that, when stretched out, becomes less wavy. It’s like once they are in the math cocoon, actual physics goes out the window.
    Kudos to Stacy though, as I’ve been banned from a number of sites, for going even this far.

  18. David Schroeder,
    Even though that particular experimental result has not been confirmed beyond reasonable doubt, it is surely worthwhile to continue to look for effects of this kind: that is, where gravity appears to act differently on a quantum system as opposed to a classical system. There is nothing in principle to prevent a quantum gravity effect being many orders of magnitude greater than the classical average gravity. In an arXiv paper I have suggested two such experiments where I believe it is possible that such an effect might be detected. I have only two data points, and no theory, so it’s a long shot at best. Both the muon g-2 experiment and the neutral kaon CP-violation experiment detected an effect whose magnitude happens to equal the angle by which the direction of the gravitational field changes from one end of the experiment to the other. So I just propose doing the same experiments vertically instead of horizontally, and seeing what happens.
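
    A back-of-envelope sketch of the coincidence being described (the ~14 m apparatus length is a hypothetical stand-in, roughly the scale of the muon g-2 storage ring; the comparison to the measured anomalies is the commenter’s conjecture, not established physics):

    ```python
    # Angle by which the local gravitational field direction rotates
    # across a horizontal apparatus of length L on the Earth's surface.
    EARTH_RADIUS_M = 6.371e6
    L_APPARATUS_M = 14.0   # hypothetical, ~muon g-2 ring diameter

    angle_rad = L_APPARATUS_M / EARTH_RADIUS_M
    print(f"field-direction change ~ {angle_rad:.1e} rad")  # ~2.2e-6
    # Flipping such an experiment vertical, as proposed above, would
    # remove this rotation and test whether the coincidence is causal.
    ```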

  19. Robert Wilson,

    Thanks for the encouragement, I will look up your paper. It sounds very interesting and certainly relevant to my area of interest.

    Like Mulder of X-Files fame, I want to believe in the rather amateurish idea that I concocted, which posits a link between astronomy’s dark sector and the quantum realm. Then doubts arise, as even as a layman I can see obvious shortcomings in the model. As Feynman observed, the easiest person to fool is oneself. So I alternate between waxing enthusiastic and thinking this is the most hare-brained idea ever. The only way forward is more experiments. Should ‘positive’, repeatable results be obtained in some future experiments, it won’t necessarily be confirmation of this idea. But it would be revolutionary, and time for the heavy-weight professional theorists to enter center stage to figure out what’s going on.

  20. From our perspective, matter structures lie in thin 3D strips along the voids. But on the scale of the Universe, what we see as 3D may be more properly analyzed as lying on a 2D surface. MOND certainly looks like 2D gravity. And the acceleration scale of the expansion has the same order of magnitude as the MOND acceleration a_0. Is this evidence of Mach’s Principle? This paper is interesting.

    https://arxiv.org/abs/2003.05784

    “We have shown that, at least for some fundamental spherically-symmetric cases, our NFDG can reproduce the same results of the MOND-RAR models, and that the deep-MOND regime can be achieved by continuously decreasing the space dimension D toward a limiting value of D ≈ 2.”
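
    For context, here is a compact statement of the deep-MOND limit the quote refers to, plus the numerical coincidence with the expansion rate mentioned above (standard relations, sketched in conventional notation): the implied force falls off as 1/r, which is exactly the behavior of gravity sourced in two dimensions.

    ```latex
    % Deep-MOND limit (g_N << a_0): the effective acceleration is
    g = \sqrt{g_N\,a_0}\,, \qquad g_N = \frac{GM}{r^2}
    \;\Rightarrow\; g \propto \frac{1}{r}
    % i.e. the 1/r force of 2D (logarithmic-potential) gravity, giving
    % flat rotation curves with v^4 = G M a_0. Numerically,
    a_0 \approx \frac{c H_0}{2\pi} \approx 1.1\times10^{-10}\ \mathrm{m\,s^{-2}}
    ```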

  21. I recently came across a series of papers by Ranada and Tiemblo, one of which is in the Canadian Journal of Physics, entitled “The dynamical nature of time”, and available on the arXiv at 1106.4400. These papers claim to explain the Pioneer anomaly by separating two concepts of dynamical time (atomic and astronomical) from parametric time. Is it possible that this idea might be useful in analysing the missing mass problem? It seems to me at first glance that one might need to go further and construct atomic and astronomical spacetimes, so that one can analyse GR in terms of a dynamical (astronomical) spacetime that is logically independent of the parametric spacetime.

    1. Robert,

      My view on time is that rather than this narrative dimension, from past to future, codified as measures of duration, it is that change turns future to past. So it’s an effect, like temperature, pressure, color and sound. Time is frequency, events are amplitude.
      So different clocks run at different rates simply because they are separate actions. Think metabolism.
      Then their interacting actions tend to either synchronize and build up to one larger system, or harmonize as they spread out across space. Thus nodes and networks, organisms and ecosystems.

      1. That is a point of view. I read that paper years ago and understand exactly what it is saying. My point of view is that nothing is ever fully resolved in physics. In this particular case, the absolute minimum is for a repeat experiment to confirm the diagnosis. That repeat experiment has not been done.

          1. Indeed, they modeled all sorts of possible forces on Pioneer that “could” add up to the anomaly. But other similar anomalies (e.g. Earth flybys, http://arxiv.org/abs/0910.1321) have been observed, so similar experiments exist that have not been explained.

          Despite this issue (and others) not being resolved, many physicists will cite one paper to support their claim that inconvenient problems are resolved.

          1. Yes. My point of view on the flyby anomaly is that Hafele’s claim to resolve it by analysing a gravitational force travelling at the speed of light between extended bodies moving in a complicated geometry is at least plausible. Again, however, one needs repeat controlled experiments to confirm such a theory, and as far as I am aware such experiments have not been done. And (probably a good thing!) in this case, there appears to be no consensus as to which theory best explains the observations.

    2. If I may, without abusing the privileges of commenting on this blog, try to steer this discussion away from the Pioneer and flyby anomalies, back towards the missing mass problem? If Ranada and Tiemblo are plausibly telling us that atomic time and astronomical time are not the same thing, then, whether or not this is true, surely we should also consider the possibility that atomic space and astronomical space are not the same thing? If so, then atomic mass, defined by atomic spacetime and its dual atomic 4-momentum, need not be the same as astronomical mass, defined by astronomical spacetime and its dual astronomical 4-momentum. At which point it becomes open season for new theories, however crackpot they may be. But the point I want to make is that this train of thought leads to the suggestion that the equivalence principle is always locally true, but is not globally true. Is this not a consistent way of looking at MOND?

    1. I prefer to think of MOND as the proof that the correct model has been found. IOW the correct model will exhibit exact MOND results. I don’t think it’s a ‘missing matter’ problem. In my opinion the halo is the hint. And I think the correct model will explain the full scope of the large scale structure of the Universe, beginning when the Inflation Era ends.

    2. The particulars of that essay are somewhat off topic, but as something of a generalist myself, I’ll add a few pointers:
      What it describes is something of an organic, physical process. The old is shed as the new rises, which is basically the cycle of life. So possibly rather than focusing on the particulars, it might be useful to really step back and consider how it has played out in history. The best description of this process giving rise to Western civilization is Gilbert Murray’s The Five Stages of Greek Religion: https://www.gutenberg.org/files/30250/30250-h/30250-h.htm
      To condense some of the salient points, to the Ancients, gods were what we would call ideals today. Yes, they were assumed to be spiritually vital, as ideas must be, but this concept also gave rise to platonic ideals.
      There was no distinction between culture and civics, so monotheism equated with a monoculture. One people, one rule, one spirit. Democracy and republicanism originated in pantheistic societies, as they incorporated complexity and variety. The Romans adopted Christianity as the Empire solidified and remnants of the Republic were being erased. Though vestiges of pantheism remained, with the Trinity. This originated with the Greek year gods, as an analogy of the cycling of the seasons, the son reborn of the sky father and earth mother. Though it mutated to serve the interests of the Catholic church, as the official religion of the Empire and eternal institution.
      Consequently the default political model for Europe, for the next 1500 years was monarchy and feudalism. When the West went back to less centralized political systems, it required the separation of church and state, culture and civics.
      Essentially monotheism was the original globalization, seeking to erase and merge the myriad local cultures into one universal model. Synchronize, rather than harmonize.
      Logically though, a spiritual absolute would be the essence of sentience, from which we rise, not an ideal of wisdom and judgement, from which we fell. When ideals are assumed to be absolute, the consequence is very intolerant societies. We even see this inclination in the current woke culture.
      Unfortunately today, those of a more generalist nature tend to be treated as dabblers and lightweights, while the specialists are exalted. Given there are reasons the people leading armies are called generals and specialist is about one rank above private, this myopia has created a global Tower of Babel, with no general sense of how the myriad parts work together. Everyone seems obsessed with the details and no framework and general process, like regeneration, to make sense of them.

  22. Just a rhetorical question: doesn’t quantization imply some contraction of the field? As in wave collapse.

    Isn’t gravity a contraction, for which there is more than can be explained as a property of mass?

  23. Near the bottom of paragraph three, on page 8 of the paper “Design and First Measurements of a Superconducting Gravity-Impulse-Generator” by Istvan Lorincz and Martin Tajmar, it states: “During these 14 test discharges we observed three anomalous readings of the accelerometer with the oscilloscope.” I found this intriguing, as I had a similar experience, with only a few of the test runs yielding an anomalous acceleration signal picked up by my ADXL accelerometer chip. In the next paragraph they speculate that a possible origin of these signals is that “the signature of the EMP changed after each discharge,…”. While this may very well have been the case, it does provide a glimmer of hope that non-standard physics may be occurring in some instances. I think that possibility is further buttressed by the very carefully documented experimental work of Frederick N. Rounds, whose paper “Anomalous Weight Behavior in YBa2Cu3O7-x Compounds at Low Temperature” is on the arXiv. The abrupt weight change of his target mass could be interpreted as a consequence of a transitory acceleration pulse emanating from his YBCO superconductor.

    It’s been six years since the Lorincz/Tajmar paper came out, along with the follow-up paper by the same authors titled “Null Results of a Superconducting Gravity-Impulse-Generator”, which I cited in an earlier comment. I have my own particular reasons to believe in a potential connection between acceleration-based anomalies that are inconsistently seen in Bose condensates and non-Newtonian dynamics that have long been documented in astronomical structures, which I won’t elaborate on out of respect for the blog rules. Having now reexamined the first paper by Lorincz and Tajmar, I’m inspired to reactivate my shuttered project of half a decade ago that sought out anomalous behavior from superconductors. This will certainly require new experimental strategies. While some may see such efforts as a wild goose chase, the more stones overturned, the more chances there are of finding genuinely new physical phenomena, which might even shed light on our Universe’s great mysteries.

  24. Stacy, Your frustration drips from your blog. I wonder if you’ve thought about going on one of the excellent science podcasts that are around now? (Lex Fridman, Brian Keating, Sabine Hossenfelder..) Maybe some signal boost and getting this data into the minds of young people will help? (The previous post with new RAR data has a just stunningly beautiful graph, more people need to see it!)

  25. Stacy, I’m thoroughly entertained and educated by your writing, both here and your published work. Your memes are great, and if I had sent you one adult beverage for every time I’ve laughed at ‘aint no cusps here’, yours might be a less despondent Friday evening that persisted through the weekend. But maybe The Bones tops it. .. just a flesh wound..

    I wish more scientists could laugh at themselves and imagine how they could be – might be – a character in your Bones meme, or Aint No Cusps.. I mean, in certain circumstances, I could totally be that guy from the skit, pooh-pooing at The Bones, but somehow knowing in advance that this won’t end well.. Funny stuff, thanks much.

  26. I have a problem. BAHAMAS and MICE begin with one basic assumption, namely that gravity is the dominant force in structure formation. Therefore, both begin with the assumption that Dark Matter must exist. But H. Sato and K. Maeda had already formulated a theory of structure formation in the early eighties that began with a different assumption. They assumed that the growth of the Voids had initially powered structure formation by pushing matter outward and into smaller volumes along their surfaces, where gravitational collapse took over. The difference may seem slight, but an expanding bubble can look like collapse if one’s initial assumption is collapse. More importantly, if their theory were correct, then the action of expanding bubbles could still be influencing the motion of matter. For instance, how does collapse lead to thin ribbons of structure? Does that not at least look like the Voids have constrained the volume that matter can collapse within? Couldn’t a model be constructed that included metric junctions? Why does Newton work only where matter density is greatest, and MOND where matter density is lower AND closest to expanding Voids? If you’re frustrated, imagine how they felt when the greater cosmological community decided that Cold Dark Matter had to be the seed for galaxy growth and all their work was ignored. And especially Sato, who has chaired Grossmann meetings. In my opinion, any model that begins with the assumption that Dark Matter exists is already flawed!

    1. I’m tempted to reply, “it’s old enough to take care of itself.” Ask yourself why a star is spherical, or what a cold surface does to a fusion reaction.

  27. Sorry. Big Bang nucleosynthesis is modeled after fusion inside stars, which occurs symmetrically in their interiors. Based on that model, the Universe is missing lithium. But the BBN model is flawed. It assumes that the temperature variations shown in the CMBR are just that… variations. But the Universe is full of holes: Voids. Those aren’t modeled as primordial but as emergent, resulting from gravitational collapse. Why not give the voids a dynamic role, as Sato/Maeda have? They are the source of the expanding Universe. Why not model the temperature variations as arising from primordial voids? The snapshot doesn’t change. What does change is that BBN takes place amongst expanding Spacetime Voids that effectively regulate the nucleosynthesis of H, He, Li, and Be. In effect, the primordial plasma contains energy-draining expanding Spacetime Voids, not present in stars, that reduce the temperature of nuclear fusion along their surfaces. That is the reason measurements of primordial Li don’t match what models predict. The fact that there is less Li is really evidence that a cooling system must have been in place.

    1. I see what you mean. Those ideas match mine quite closely. Do Sato+Maeda have a mechanism for the expanding voids and/or cooling systems? Can you give me a good reference?

    2. Part of the modeling problem is the conflation of high potential energy density and heat. If we model quantum phenomena, and all that entails downstream in the way of thermal interaction, as emergent, that would indicate that a more fundamental power source, and therefore energy (kinetic, thermodynamic) source, is ‘non-particulate and cold’ and can be modeled as a classical elastic potential energy, wave-bearing substrate of some maximum finite density, and not as a BB singularity. As I understand it, the recently observed early-universe collimated jets of AGN indicate a deficit of baryonic matter and suggest some dark matter alternative in the dynamics. Again, AIUI, this is because modeling requires precursor stellar hot baryonic matter and dust to accumulate due to gravity in accretion disks around AGBHs, from which the jets are redirected; however, due to the early age, sufficient abundance of precursors for the observed jets would appear to be an anomaly.
      IMO the mathematical modeling of active galactic black holes exclusively as sinks from which the jets are generated is the problem. AGBHs can be modeled non-stochastically as cold sources comprised of non-particulate pre-baryonic matter of neutron density, from which the jets of plasmic baryonic matter are ejected from the surface of the GBH as defined by an extreme Kerr metric – but due to the dynamics of isotropic, divergent stress associated with the Hubble rate, not due to the gravitational law of Newton. This modeling is fully consistent with the field equation of general relativity, if we substitute the Hubble rate, as the expression of an expansion force (acceleration), for Einstein’s cosmological constant.
      Stress and strain tensors applied isotropically to a region of maximum potential inertial density are sufficient to generate all the cosmic energy observed in nature. As a case in point, if my calculations are correct, all the baryonic matter in the observed universe would fit in a spherical shell inside the diameter of the orbit of the Earth at neutron-star/black-hole density, assuming the density of a modeled substrate supporting and intervening between the wave form (classical, not probabilistic) of the baryon is asymptotically negligible.

      1. IMHO any attempt to model quantum phenomena as emergent from classical phenomena is doomed to failure from the start. The arrow of emergence must go in the other direction, as a matter of mathematical certainty.

          1. That is undoubtedly true. I will read your work and get back to you. If you want to understand my point of view better, you can look at my last 5 papers on the arXiv (the most recent is probably the most relevant). I suspect that Stacy would prefer it if our discussion went elsewhere, so my blog is open for a continuation.

      2. IMHO a deficit of baryonic matter does not indicate a dark matter alternative, but a failure of the model to explain where the missing baryons have gone. The literature contains a number of ideas in this direction, but none seems to be taken seriously, as far as I can tell.
