It often happens that data are ambiguous and open to multiple interpretations. The evidence for dark matter is an obvious example. I frequently hear permutations on the statement

We know dark matter exists; we just need to find it.

This is said in all earnestness by serious scientists who clearly believe what they say. They mean it. Unfortunately, meaning something in all seriousness, indeed, believing it with the intensity of religious fervor, does not guarantee that it is so.

The way the statement above is phrased is a dangerous half-truth. What the data show beyond any dispute is that there is a discrepancy between what we observe in extragalactic systems (including cosmology) and the predictions of Newton & Einstein as applied to the visible mass. If we assume that the equations Newton & Einstein taught us are correct, then we inevitably infer the need for invisible mass. That seems like a very reasonable assumption, but it is just that: an assumption. Moreover, it is an assumption that is only tested on the relevant scales by the data that show a discrepancy. One could instead infer that theory fails this test – it does not work to predict observed motions when applied to the observed mass. From this perspective, it could just as legitimately be said that

A more general theory of dynamics must exist; we just need to figure out what it is.

That puts an entirely different complexion on exactly the same problem. The data are the same; they are not to blame. The difference is how we interpret them.

Neither of these statements is correct: they are both half-truths; two sides of the same coin. As such, one risks being wildly misled. If one only hears one, the other gets discounted. That’s pretty much where the field is now, and it has been stuck there for a long time.

That’s certainly where I got my start. I was a firm believer in the standard dark matter interpretation. The evidence was obvious and overwhelming. Not only did there need to be invisible mass, it had to be some new kind of particle, like a WIMP. Almost certainly a WIMP. Any other interpretation (like MACHOs) was obviously stupid, as it violated some strong constraint, like Big Bang Nucleosynthesis (BBN). It had to be non-baryonic cold dark matter. HAD. TO. BE. I was sure of this. We were all sure of this.

What gets us in trouble is not what we don’t know. It’s what we know for sure that just ain’t so.

Josh Billings

I realized in the 1990s that the above reasoning was not airtight. Indeed, it has a gaping hole: we were not even considering modifications of dynamical laws (gravity and inertia). That this was a possibility, even a remote one, came as a profound and deep shock to me. It took me ages of struggle to admit it might be possible, during which I worked hard to save the standard picture. I could not. So it pains me to watch the entire community repeat the same struggle, repeat the same failures, and pretend like it is a success. That last step follows from the zeal of religious conviction: the outcome is predetermined. The answer still HAS TO BE dark matter.

So I asked myself – what if we’re wrong? How could we tell? Once one has accepted that the universe is filled with invisible mass that can’t be detected by any craft available to us, how can we disabuse ourselves of this notion should it happen to be wrong?

One approach that occurred to me was a test in the power spectrum of the cosmic microwave background. Before any of the peaks had been measured, the only clear difference one expected was a bigger second peak with dark matter, and a smaller one without it for the same absolute density of baryons as set by BBN. I’ve written about the lead-up to this prediction before, and won’t repeat it here. Rather, I’ll discuss some of the immediate fallout – some of which I’ve only recently pieced together myself.

The first experiment to provide a test of the prediction for the second peak was Boomerang. The second was Maxima-1. I of course checked the new data when they became available. Maxima-1 showed what I expected. So much so that it barely warranted comment. One is only supposed to write a scientific paper when one has something genuinely new to say. This didn’t rise to that level. It was more like checking a tick box. Besides, lots more data were coming; I couldn’t write a new paper every time someone tacked on an extra data point.

There was one difference. The Maxima-1 data had a somewhat higher normalization. The shape of the power spectrum was consistent with that of Boomerang, but the overall amplitude was a bit higher. The latter mattered not at all to my prediction, which was for the relative amplitude of the first to second peaks.

Systematic errors, especially in the amplitude, were likely in early experiments. That’s like rule one of observing the sky. After examining both data sets and the model expectations, I decided the Maxima-1 amplitude was more likely to be correct, so I asked what offset was necessary to reconcile the two. About 14% in temperature. This was, to me, no big deal – it was not relevant to my prediction, and it is exactly the sort of thing one expects to happen in the early days of a new kind of observation. It did seem worth remarking on, if not writing a full blown paper about, so I put it in a conference presentation (McGaugh 2000), which was published in a journal (IJMPA, 16, 1031) as part of the conference proceedings. This correctly anticipated the subsequent recalibration of Boomerang.
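Why an overall calibration offset was irrelevant to the prediction can be made concrete: a calibration error multiplies every point in the power spectrum by the same factor, so it cancels in the ratio of first- to second-peak amplitudes. A minimal sketch, using made-up peak amplitudes (illustrative numbers only, not the actual Boomerang or Maxima-1 values):

```python
# Hypothetical first- and second-peak powers (arbitrary units, NOT real data).
peak1, peak2 = 5500.0, 2400.0

ratio_original = peak1 / peak2

# Apply a 14% recalibration in temperature. The power spectrum scales as
# temperature squared, so every amplitude is multiplied by 1.14**2.
cal = 1.14**2
ratio_recalibrated = (cal * peak1) / (cal * peak2)

# The common factor cancels: the peak ratio is unchanged by recalibration.
assert abs(ratio_original - ratio_recalibrated) < 1e-12
```

This is why a prediction phrased as a relative amplitude is insensitive to the absolute normalization of any one experiment.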

The figure from McGaugh (2000) is below. Basically, I said “gee, looks like the Boomerang calibration needs to be adjusted upwards a bit.” This has been done in the figure. The amplitude of the second peak remained consistent with the prediction for a universe devoid of dark matter. In fact, it got better (see Table 4 of McGaugh 2004).

Plot from McGaugh (2000): The predictions of LCDM (left) and no-CDM (right) compared to Maxima-1 data (open points) and Boomerang data (filled points, corrected in normalization). The LCDM model shown is the most favorable prediction that could be made prior to observation of the first two peaks; other then-viable choices of cosmic parameters predicted a higher second peak. The no-CDM got the relative amplitude right a priori, and remains consistent with subsequent data from WMAP and Planck.

This much was trivial. There was nothing new to see, at least as far as the test I had proposed was concerned. New data were pouring in, but there wasn’t really anything worth commenting on until WMAP data appeared several years later, which persisted in corroborating the peak ratio prediction. By this time, the cosmological community had decided that despite persistent corroborations, my prediction was wrong.

That’s right. I got it right, but then right turned into wrong according to the scuttlebutt of cosmic gossip. This was a falsehood, but it took root, and seems to have become one of the things that cosmologists know for sure that just ain’t so.

How did this come to pass? I don’t know. People never asked me. My first inkling was 2003, when it came up in a chance conversation with Marv Leventhal (then chair of Maryland Astronomy), who opined “too bad the data changed on you.” This shocked me. Nothing relevant in the data had changed, yet here was someone asserting that it had as if it were common knowledge. Which I suppose it was by then, just not to me.

Over the years, I’ve had the occasional weird conversation on the subject. In retrospect, I think the weirdness stemmed from a divergence of assumed knowledge. They knew I had been right, then wrong. I knew the second peak prediction had come true and remained true in all subsequent data, but that the third peak was a different matter. So there were many opportunities for confusion. I think many of these people were laboring under the mistaken impression that I had been wrong about the second peak.

I now suspect this started with the discrepancy between the calibration of Boomerang and Maxima-1. People seemed to be aware that my prediction was consistent with the Boomerang data. Then they seem to have confused the prediction with those data. So when the data changed – i.e., when Maxima-1 came in with a somewhat different amplitude – it seemed to follow that the prediction now failed.

This is wrong on many levels. The prediction is independent of the data that test it. It is incredibly sloppy thinking to confuse the two. More importantly, the prediction, as phrased, was not sensitive to this aspect of the data. If one had bothered to measure the ratio in the Maxima-1 data, one would have found a number consistent with the no-CDM prediction. This should be obvious from casual inspection of the figure above. Apparently no one bothered to check. They didn’t even bother to understand the prediction.

Understanding a prediction before dismissing it is not a hard ask. Unless, of course, you already know the answer. Then laziness is not only justified, but the preferred course of action. This sloppy thinking compounds a number of well known cognitive biases (anchoring bias, belief bias, confirmation bias, to name a few).

I mistakenly assumed that other people were seeing the same thing in the data that I saw. It was pretty obvious, after all. (Again, see the figure above.) It did not occur to me back then that other scientists would fail to see the obvious. I fully expected them to complain and try and wriggle out of it, but I could not imagine such complete reality denial.

The reality denial was twofold: clearly, people were looking for any excuse to ignore anything associated with MOND, however indirectly. But they also had no clear prior for LCDM, which I did establish as a point of comparison. A theory is only as good as its prior, and all LCDM models made before these CMB data showed the same thing: a bigger second peak than was observed. This can be fudged: there are ample free parameters, so it can be made to fit; one just had to violate BBN (as it was then known) by three or four sigma.

In retrospect, I think the very first time I had this alternate-reality conversation was at a conference at the University of Chicago in 2001. Andrey Kravtsov had just joined the faculty there, and organized a conference to get things going. He had done some early work on the cusp-core problem, which was still very much a debated thing at the time. So he asked me to come address that topic. I remember being on the plane – a short ride from Cleveland – when I looked at the program. Nearly did a spit take when I saw that I was to give the first talk. There wasn’t a lot of time to organize my transparencies (we still used overhead projectors in those days) but I’d given the talk many times before, so it was enough.

I only talked about the rotation curves of low surface brightness galaxies in the context of the cusp-core problem. That was the mandate. I didn’t talk about MOND or the CMB. There’s only so much you can address in a half hour talk. [This is a recurring problem. No matter what I say, there always seems to be someone who asks “why didn’t you address X?” where X is usually that person’s pet topic. Usually I could do so, but not in the time allotted.]

About halfway through this talk on the cusp-core problem, I guess it became clear that I wasn’t going to talk about things that I hadn’t been asked to talk about, and I was interrupted by Mike Turner, who did want to talk about the CMB. Or rather, extract a confession from me that I had been wrong about it. I forget how he phrased it exactly, but it was the academic equivalent of “Have you stopped beating your wife lately?” Say yes, and you admit to having done so in the past. Say no, and you’re still doing it. What I do clearly remember was him prefacing it with “As a test of your intellectual honesty” as he interrupted to ask a dishonest and intentionally misleading question that was completely off-topic.

Of course, the pretext for his attack was the Maxima-1 result. He phrased his question in a way that forced me either to agree that those data disproved my prediction, or be branded a liar. Now, at the time, there were rumors swirling that the experiment – some of the people who worked on it were there – had detected the third peak, so I thought that was what he was alluding to. Those data had not yet been published and I certainly had not seen them, so I could hardly answer that question. Instead, I answered the “intellectual honesty” affront by pointing to a case where I had said I was wrong. At one point, I thought low surface brightness galaxies might explain the faint blue galaxy problem. On closer examination, it became clear that they could not provide a complete explanation, so I said so. Intellectual honesty is really important to me, and should be to all scientists. I have no problem admitting when I’m wrong. But I do have a problem with demands to admit that I’m wrong when I’m not.

To me, it was obvious that the Maxima-1 data were consistent with the second peak. The plot above was already published by then. So it never occurred to me that he thought the Maxima-1 data were in conflict with what I had predicted – it was already known that it was not. Only to him, it was already known that it was. Or so I gather – I have no way to know what others were thinking. But it appears that this was the juncture in which the field suffered a psychotic break. We are not operating on the same set of basic facts. There has been a divergence in personal realities ever since.

Arthur Kosowsky gave the summary talk at the end of the conference. He told me that he wanted to address the elephant in the room: MOND. I did not think the assembled crowd of luminary cosmologists were mature enough for that, so advised against going there. He did, and was incredibly careful in what he said: empirical, factual, posing questions rather than making assertions. Why does MOND work as well as it does?

The room dissolved into chaotic shouting. Every participant was vying to say something wrong more loudly than the person next to him. (Yes, everyone shouting was male.) Joel Primack managed to say something loudly enough for it to stick with me, asserting that gravitational lensing contradicted MOND in a way that I had already shown it did not. It was just one of dozens of superficial falsehoods that people take for granted to be true if they align with one’s confirmation bias.

The uproar settled down, the conference was over, and we started to disperse. I wanted to offer Arthur my condolences, having been in that position many times. Anatoly Klypin was still giving it to him, keeping up a steady stream of invective as everyone else moved on. I couldn’t get a word in edgewise, and had a plane home to catch. So when I briefly caught Arthur’s eye, I just said “told you” and moved on. Anatoly paused briefly, apparently fathoming that his behavior, like that of the assembled crowd, was entirely predictable. Then the moment of awkward self-awareness passed, and he resumed haranguing Arthur.

88 thoughts on “Bias all the way down”

  1. It is unfortunate that theoretical prejudice gets in the way of trying to understand the data. Certainly there are some jerks who have a very unprofessional attitude. In the end, science is self-correcting and the truth will win out, but that can be sped up by not stooping to the level of the jerks. One can be right or wrong, one can be polite or not; all four combinations exist. It’s obvious who is polite and who is a jerk. It is not always obvious who is right and who is wrong.


    1. But that bias, if predominant, can affect careers, Phillip. I agree that reputation and self-interest can have a very negative effect on progress, but I’m less optimistic that science is always self-correcting — at least not within our lifetimes.


  2. “We know dark matter exists; we just need to find it.”
    I’ve read this line (almost word for word) in plenty of Phil Plait’s blog entries (badastronomy). While he’s no longer actively conducting research, he is a known science popularizer, so many non-scientists or scientists-to-be are exposed to this discourse. It’s true, he occasionally mentions MoND or other theories, but those are only exceptions for him.
    I’ve said it before – I find this attitude detrimental, as it can instill in students the idea that DM must exist, and this, in the end, leads to the experiences you detailed here.
    I believe it would help a lot if the discourse of science popularizers were not so categorical with respect to dark matter, but typically the excuse is that this is the prevailing theory and they popularize the prevailing theory, which makes it an endless loop that reinforces itself.
    It’s clear that if the initial bias is removed, you have a better chance of the future scientist accepting ideas outside of DM.
    But how can this loop be broken? I have no idea. Maybe just one, though it’s easy for me to say as it does not involve my time: a science book written for the general public that presents the problem not from the perspective of MoND, or DM vs MoND, or just DM, but from the perspective of the data. The data tell us this much; our current understanding needs a placeholder in order to reconcile the data with the prevailing theories.
    Of course, the authors would need to be relatively well-known figures for the book to have any kind of impact (and that’s why it does not involve my time).

    On a side note, I hope that we, as readers / commenters on your blog, did not upset you in any way, but you closed, seemingly out-of-the-blue, the comments for your older blog entries and I’d like to avoid this action in the future.


  3. Remember that, in the scientific method, time is on the side of truth. It may take a great deal of time, but, eventually, Milgrom’s name, and yours, will be in the history books, not theirs.
    For what it is worth.


  4. This bias confusion of Dark Matter and Energy is common in many math fields and has a profound scientific explanation in terms of loop inverses in the general Theory Of Estimation (TOE, a more general concept than what is understood in physics) and array (unified matrix and tensor) calculus. As the inventor of array algebra/calculus in my KTH doctoral studies in 1968-1975, I know what Stacy is talking about.

    You may be aware of my posts since 2014 on various discussion sites about the Suntola Dynamic Universe (DU) expansion of GR/QM that resolved the estimation problem of DE/DM in 1995. The DU literature protested the 1998 SN1a data modeling/interpretation mistake in the same fashion as Stacy’s MOND vs LCDM controversy. But the 2011 Nobel mistake opened a gold rush that has wasted the past 20 years of effort in the search for DE/DM, despite continuous DU proofs and four edition updates of ground-breaking books about the history and details of the DU structural system, now supported by e.g. Gaia eDR3 results. Search the DU literature via ‘Suntola Dynamic Universe’ or my DE/DM/GW and ‘Hubble crisis’ blogs at the Physorg site.


  5. “Bias all the way down” is certainly an apt title; modern cosmology is, indeed, biased all the way down. The problem extends beyond the failed dark matter hypothesis and encompasses the entirety of the Big Bang paradigm and all of its variants, inclusive of LCDM.

    The assumptions underlying the BB paradigm are not supported by any direct empirical evidence, nor are any of the subsequent hypotheses on which LCDM depends. The standard model of cosmology comprises a roundelay of interlocking circular arguments in support of two naive and simplistic assumptions made in the early 20th century that have been axiomized; functionally they are treated as true by definition.

    Fundamental though axioms may be to mathematics, they have no place in an open-minded scientific investigation of physical reality. Math has axioms that are definitionally true; the most science can offer without direct observational evidence are hypotheses, which are only held provisionally true, pending direct observation. At least that’s how science was taught when I first encountered it in the 1950s-60s.

    Something changed in the late 70s – early 80s. Suddenly, hypotheses were elevated to established fact (dark matter, quarks) without benefit of any supporting empirical evidence, solely on the basis of their theoretical convenience. I’m sure this may have been done occasionally in the past, but what transpired 40 years ago was a paradigm shift wherein physics (the study of physical reality) became subservient to the study of mathematical models of physical reality. Physics became the neglected stepchild of theoretical physics. The results have not been pretty from a scientific perspective.

    The people screaming at any mention of MOND’s successes are theoretical physicists who only understand physics through the lens of their model and are incapable of understanding or processing its failures. They are fundamentalist adherents of a secular belief system, and are impervious to contradictory evidence or argumentation. As it stands now, modern cosmology is not a scientific endeavor.

    The only way out of this mess is to go back to the beginning and reconsider the BB’s foundational assumptions:

    1. The Cosmos is a unified, coherent, simultaneous entity – a Universe.
    2. The cause of the redshift-distance relationship is some form of recessional velocity.

    Reconsidering those assumptions could consist of nothing more than making counter-assumptions and attempting to devise a model employing and interpreting the latest cosmological observations in light of those counter-assumptions. If nothing else, that would seem a reasonable exercise in intellectual curiosity for a cosmologist to undertake. To suggest that the tribe of cosmologists howling at Arthur Kosowsky are capable of such an intellectual endeavor is, of course, fatuous.

    Virtually all of theoretical physics (cosmology/particle physics/quantum theory) is an irrational exercise in mathematicism inspired model-fitting. The field is populated by people who only understand the math of the model they learned in graduate school and are incapable of thinking beyond that. They don’t know how to think about physical systems as they actually exist, and they don’t know how to do math beyond what they already know.

    Do the models agree with observations? Of course they do, that’s the beauty of freely parameterizable models where you stuff the models with unobservable entities and events as needed. But the agreement only comes with a willingness to believe in things that aren’t there. People babbling about dark matter, quarks, superposition of states, virtual particles, & etc. are no different than theologians babbling about the number of angels that can dance on the head of a pin. The only difference is that modern theorists have a much larger palette of nonsense to work with.


    1. Well said budrap. I’ve found that the biggest challenges are the inertia of physicists’ beliefs in half-true but only partially understood priors, and, as mentioned, the framing of a search for a cause with presuppositional memes (DM, DE, BB, etc.). I love the idea of encouraging physicists to create their own Apollo 13 moments and spend quality time thinking about tricky interpretations like what if LeMaitre had projected backwards in time to many distributed inflationary mini-bangs or what if spacetime expands outward from each galaxy but in opposition to each other neighboring galaxy. Stipulate for the discussion these are true and they reproduce the same BBN and redshift-distance effects. Then, and only then, think how things might work quite differently but essentially result in the same math and observations.


      1. J Mark Morris,

        Setting aside the foundational assumptions means that any subsequent model would not have spacetime or any other form of “universal” expansion to account for. There would also be no cosmological dark matter, dark energy, inflation, BB or BBN. All of those undetectables are entirely dependent on the foundational assumptions of the standard model. Changing the priors means you can start building a cosmological model without any of the inescapable, invisible baggage of the BB model(s).


        1. I believe physicists have done amazingly well given the priors they were dealt. So well that everything will snap into place once they figure out the right way to turn, flip, twist and bend the transformer. But some frozen joints and incorrectly placed structural members have gummed up the whole works!


          1. I’ve said this before and to continue the intellectual honesty theme, there is actually nothing new. All the ideas are in the bonepile of physics. All of them are probably covered six ways from Sunday. Many discarded ideas would come back into prominence with slight adjustments. So net-net it’s kind of a wash. They are all the physicists ideas. It’s just which ones to pick up and how to assemble the transformer. Nature is a trickster. Those who think deeply can see the rich space of interconnected models and should understand the multidimensional combinatorial problem of the puzzle of nature and the universe. You already know this. Admit it.


    2. Epicycles were brilliant math, as a model of our view of the cosmos, but the crystalline spheres were lousy physics, as explanation. Map and territory.
      I read an interesting paper some years ago, pointing out that multi-spectrum wave “packets” redshift over distance, because the higher frequencies dissipate faster, but that would mean we are sampling a wave front, not observing individual photons traveling billions of lightyears.
      My theory is that information and energy are not synonymous, as energy goes to the future, while information goes to the past. In terms of a wave, the energy drives it, while the fluctuations rise and fall.
      It’s just that as these biological organisms, it’s our gut processing the energy, while the brain sorts the information, so the conflict is presumed to be between order and chaos, rather than energy and form.
      I’m waiting to see whether the James Webb actually makes it up and works. My prediction is they find the cosmic background radiation to be the light of ever further sources, shifted off the visible spectrum, not evidence of a singular event.


      1. Epicycles were good science in the sense that they made testable predictions.

        As to what the JWST finds, are you interested in a public bet for a substantial amount of cash?


        1. Well, I also thought the Hubble would find evidence of structures that wouldn’t fit within the time frame of the theory and they managed to shoehorn everything into it, so maybe several hundred.
          Here was a post Zeeya Merali put up from me, at FQXI, where I’d post what I saw as evidence;
          https://fqxi.org/community/forum/topic/1578


        2. “Epicycles were good science in the sense that they made testable predictions.”

          Which only demonstrates the fatuous nature of the “testable predictions” requirement, as typically deployed by standard model cosmologists. The Ptolemaic model was a scientific dead-end, that persisted for more than a millennium, mostly because of the utilitarian value of its “testable predictions”, while foreclosing any scientific progress in understanding the actual physical nature of the solar system and the cosmos.

          Ptolemy’s geocentric model was not good science because it was a complete misrepresentation of the nature of physical reality – in the most fundamental scientific sense it was W-R-O-N-G – despite making testable predictions. And for the record, the standard model of cosmology is notoriously bad at predictions, but it gets graded on a curve for its post-dictions, and seems quite analogous to geocentrism.

          Like Ptolemy’s model, the SMoC rests on a couple of simplistic assumptions that misrepresent the nature of physical reality. For Ptolemy, it was geocentrism and perfect circles. For the SMoC it’s the unitary assumption (the Cosmos is a universe) and the redshift=recessional velocity assumption. As was the case with the Ptolemaic model, no further progress in cosmology is going to be made until those erroneous foundational assumptions are scrapped.


            1. I didn’t claim that making testable predictions implies that a theory is correct. The whole point of testable predictions is that they allow a theory to be shown to be wrong.

            Testable predictions are necessary for a scientific theory, but not sufficient.

            It didn’t persist so long because it made predictions, but for other reasons, such as those disagreeing being burned at the stake by the Church.

            We now know that it was wrong, but we know that because it made testable predictions.

            The standard model of cosmology has made many predictions, many of which have been confirmed.

            If you have a theory which derives everything with no free parameters and no input from observations, please publish it.

            Your first complaint is too vague to be meaningful, the second is just plain wrong.


            1. In reverse order:

              “Your first complaint is too vague to be meaningful, the second is just plain wrong.”

              Since you don’t make it clear what you are objecting to, it’s impossible to know what you’re talking about. Is that the point?

              “If you have a theory which derives everything with no free parameters and no input from observations, please publish it.”

              Whether or not I have an alternate theory, let alone one that meets your arbitrary and capricious standards, is irrelevant to the question of whether or not the “expanding universe” paradigm that underlies all Big Bang models, is anything but a belief system with no more scientific merit than Ptolemaic cosmology.

              “The standard model of cosmology has made many predictions, many of which have been confirmed.”

              OK, I’ll bite. Name a couple of those confirmed pre-dictions.


              1. “Cosmos is a universe” is too vague. You severely misunderstand the redshift—distance and velocity—distance laws.

                My standards are not arbitrary and capricious. There is much more evidence for the big-bang paradigm than for Ptolemaic cosmology.

                 Spectral index of CMB perturbations close to, but slightly less than, 1. A standard prediction years or even decades before it was observed. The fact that today’s very accurate CMB measurements require no parameters other than those already in use before even the first peak had been observed shows that the basic framework is more or less correct. Temperature of the CMB at high redshift as observed in molecular spectra. And so on.


              2. When analyzing the CMB, how much of it is non-prior assuming? Every paper, blog, and podcast I absorb causes my mental alerts to go off for how many incorrect interpretations permeate physics, cosmology, and astronomy as priors that new interpretations are built upon. It’s so severe now that I just can’t bear to read or watch much of it anymore. The solution is so close to all of you. It really is just like the transformer action figure turns inside out and everything just makes sense. Or to say it this way, the underside of what you are trying to learn is incredibly simple and you would be better off starting there and working back towards the higher levels. So to Stacy and Phillip and the other scientists here, I will say it is absolutely amazing you got as far as you did with priors as off track as yours were. Also – as persistent as I am, I love scientists, and am just so incredibly crushed and offended by how I have been treated by your community (not you) for the last three years. Just think of how many papers say “early” universe or get all woo-ish about the quantum (it’s just a dipole circuit folks – basic undergraduate courses). Arghhh. Imagine if the flip-flop had been discovered as a black box and then folks said it’s magic. It has two states. And it is fundamental. There is nothing else to see here, move along. Well, say goodbye to your electronic toys if those scientists and engineers had done that!

                How do astrophysicists differentiate the CMB observations from the expectation of a steady-state universe in which galaxies implement inflation and bangs on an irregular basis?

            2. It has also required various enormous patches, such as Inflation, Dark Matter, and Dark Energy.
              As I pointed out in a previous post on this blog, it also still assumes a stable speed of light as the metric against which this expansion is determined. The theory itself presumes there to be more lightyears, not that the speed of light increases as space expands in order to remain constant. That makes light speed the “ruler.” Which means it is an expansion in space, as defined by the speed of light, not an expansion of space.
              Consequently, either we are at the center of the entire universe, or redshift is an optical effect, given that we are at the center of our own point of view.
              If this effect compounds on itself, that would explain the curve in the rate, as it starts off slowly from our point of view out, then eventually goes parabolic. So no Dark Energy necessary.

              1. None of the three is a patch to the basic big-bang model. It is still not clear whether inflation occurred, so it is hardly an incontrovertible part of the model. (Yes, many people believe it, and I write papers to convince them otherwise.) Dark matter and dark energy could just be described as new discoveries, not patches. Linnaeus didn’t know about gorillas. Did their discovery somehow invalidate the binomial system?

                As for your second paragraph, please take my advice and read a good cosmology textbook. I recommend the one by the late, great Edward Harrison.

            3. Dark Matter may not be, directly, but Inflation and Dark Energy are.

              Inflation serves to smooth out various aspects, originally the background radiation, which might otherwise be explained if this background radiation is essentially the solution to Olbers’ paradox: the light of infinite sources, shifted down the spectrum.

              Dark energy originated from the original assumption that the expansion from the initial event subsided gradually, but what was observed was that it appeared to drop off rapidly, then flatten out to a more gradual rate. It was as if the universe were shot out of a cannon, then, after slowing down, a rocket motor kicked in to sustain a more stable rate.
              Yet if we look out from our point of view, rather than projecting in from the edge of the visible universe, what we see is that redshift starts off slowly and builds gradually, eventually going parabolic, until by the edge of the visible universe sources appear to be racing away at close to the speed of light.
              So my point is that, rather than taking BBT and looking in from the edge, if it is an optical effect, it compounds, and that explains the curve in the rate.

              As for reading, I’ve read a fair amount over the last forty-five years, since my mid-teens, and every source seems to skip over this point.
              To review: the Doppler effect is due to increasing or decreasing distance between source and receiver. For example, as the train moves down the tracks, the tone of the whistle drops. This is due to increasing distance, not expanding space. The train tracks are not stretched by the train moving down them.
              Similarly, the theory itself states this intergalactic light is being redshifted because it is taking the light longer to cross as these sources move away. Which means the metric is the speed of light. It is the train tracks. Einstein’s “ruler.” It doesn’t expand. That is the reason given for the redshift.
              That means, according to the logic of the theory itself, the speed of light is the denominator. If the speed were the numerator, then it would be a “tired light” theory.
              I should not be the one pointing this out, because I am just some guy off the street, but this is simple, basic physics, and this theory has it wrong. It’s not just “not even wrong”; it is completely and totally wrong. It’s like saying the theory works if 1+1=3, so 1+1 must equal 3, because we like the theory.

              1. I was going to reply, but I might as well bang my head against a brick wall.

                The cosmological constant (call it what it is; there is zero evidence that dark energy is anything more complicated) has been discussed in cosmology for more than 100 years. Anyone who claims that it was invented in order to explain specific observations which came later is clearly ignorant of the history of cosmology.

            4. How about a theory that is just a lay-down winner? Seven no-trump, stone cold: all the aces, all the kings, all the queens, all the jacks, all the tens, all the nines, and two of the eights. Also, what if the theory comes with an explanation of the history that caused physicists to be blind to it? Does it really need new evidence if it explains pretty much everything that was confusing and fits with all the existing observations and math? C’mon, we are talking about an energy-carrying dipole that is in every standard matter particle at least once and sometimes nine times (neutron, proton), and said dipole implements a stretchy ruler and variable clock when transacting units of h-bar angular momentum, i.e., the woo-ish “quantum.” I am very serious when I say that in the future the entire theory of nature will be taught in high school, if not earlier to some degree.

              1. By the way, the best outcome in my opinion would be for physicists to say, “ok, we finally get what you have been saying, even though you refused to do it the hard way like us.” Then, I would say, “Phew, I am looking forward to your future outreach material and happy to help if there is anything else I can do.”

            5. The cosmological constant was proposed to balance gravity and maintain a stable universe! Basically, what caused space to curve inward, due to gravity, would be balanced by an outward curvature. Which is basically what Hubble discovered: the space between galaxies, measured in radiation, curves out in proportion to how the space within galaxies, measured in mass, curves in. It does balance out. Omega = 1.
              Think in terms of the ball-on-a-rubber-sheet analogy of gravity, and consider the sheet over water: where it is pushed down by the ball, it is equalized by being pushed up in the empty spaces. That’s why space appears flat, not just Inflation causing it to expand really, really fast.
              People just started bringing it back up after the expansion rate originally proposed was so far off.
              It’s called a patch job. If your accountant tried this, you would go to jail.

              1. Yes, that is why Einstein proposed it. But for that to work, it has to be infinitely fine-tuned. That is the main reason why Einstein abandoned the static model, not because expansion had been discovered. I’m not sure why you bring radiation into this. Whether Omega (I guess you mean Omega_total) is 1 is another question; it doesn’t have to balance.

                In your rubber-sheet picture, there would be no detectable gravity. That’s not how it works.

                The expansion rate was not far off; at least not the one measured now. Yes, with the cosmological constant, the expansion rate was slower in the past.

                It’s called updating one’s understanding of the world in light of observational evidence, a.k.a. science.

              2. This is in reply to Phillip’s next comment, but the threading limit has been reached. The irony of discussing rubber sheets with those who truly believe that spacetime is a stretchy/curvy geometry with no physical basis is not lost on me, but as I object to the c-words I will stick to pointing out the weaknesses in your arguments. GR and QM have never been unified (by physicists). Scientists don’t have a solid theory on what happens inside black holes, especially where the math breaks down. Yet they will trust that the math of spacetime describes something fundamental. Then we could go into the long list of all the math physicists have that describes particles with dipole moments and de Broglie wavelength and the Planck equations, and all these particles transmute, and there are virtual particles, and why is energy transacted in h-bar units of angular momentum? It all comes down to physicists throwing away the actual solution of point charges, leaving them on the discard pile because they did not imagine a field effect that would establish the Planck scale.
                The astrophysicists here may be in the best position to understand orbiting point charges. Imagine a bang of point charges that had been at the density such that the point charges were as close as possible, with the closest approach being twice the radius of immutability. So what are they going to do in the maelstrom that ensues when they can breach containment? They are going to start following Maxwell’s equations and classical mechanics. And dipoles will form. And those dipoles are going to have some pretty big magnetic fields as they begin inflating and expanding from the Planck core density. Does no one else have a feel for what happens with an electrino and a positrino chasing each other in a circular orbit (if isolated)? You have the opposite-charge attraction, of course. You have the kinetic energy in the momentum and velocity of the point charges. And you have a B field that each exerts on the other, and that takes some delta time to propagate, and that right there is what makes the magic underneath all your theories. For a more advanced treatment, consider all 10^44 stable states of the dipole geometry.

  6. This behavior explains a lot. When this paper came out (https://link.springer.com/article/10.1140/epjc/s10052-021-08967-3), I could not believe these calculations had not been done at least a decade or three earlier. I mean, it is easy to see that GR frame-dragging has the correct sign to fix galactic rotation curves (creating angular velocity without angular momentum), although it is non-trivial to calculate the magnitude. After all, MTW was first published in 1973 and Roy Kerr published his work in 1963!

    1. Did you miss the fact that none other than Stacy McGaugh very recently called bullshit on this paper?

      As some of the comments here indicated, bias among some opponents of mainstream cosmology is much stronger.

        1. Someone put a link to ResearchGate in the comments on the other post; from there you can download a PDF.

          This is a typical example: someone hears about something, hasn’t even read it, but it is not mainstream and claims to explain MOND effects so it is pretty much uncritically accepted. THAT is bias.

  7. Retorts twenty years late:

    “As a test of my intellectual honesty, you can go dunk your head in a toilet, you condescending puke.”

    1. I see no need for them to spin at this point. They just are. And the immutability is perhaps a field effect if that makes you happy. All the way down means packing at maximum density with Planck energy HCP or FCC matching that 120 order of magnitude issue. Thimk!

  8. I’m so glad that Stacy commented on that paper, over at the other thread, that proposes gravitomagnetism as the source of the missing-mass problem in galaxies. To be sure, it is an extraordinary tour-de-force of mathematical analysis, way over my head. But having had a keen interest in gravitomagnetism (GM) for years, I knew that the frame-dragging effect that Gravity Probe B was designed to detect for the rotating Earth was exceedingly tiny, like parts in a trillion, if I remember correctly. So it didn’t seem remotely plausible that GM could account for the mass discrepancy observed in galaxies, despite the enormous intellectual effort the author put into that paper.

    My interest in gravitomagnetism was stimulated by a proposed explanation for the Cooper-pair mass anomaly reported by Janet Tate in 1990. Martin Tajmar and C. J. de Matos postulated that Tate’s anomaly resulted from an increase in the mass of the graviton within superconductors, in analogy with the mass increase of the photon within superconductors that in QFT accounts for the Meissner Effect and the London Moment within rotating superconductors. This proposed mass increase of the graviton they speculated to be the source of a gravitomagnetic field in a rotating superconductor hugely enhanced (by some 30 orders of magnitude) over what standard theory predicts. Tajmar and his group conducted hundreds of runs with a rapidly spun-up niobium ring at the Austrian Research Center (ARC). In each of these runs, accelerometers inboard and above the ring picked up acceleration signals of around 100 micro-g’s, but as large as 277 micro-g’s.

    Unfortunately, a replication of the ARC experiment by the most sensitive ring-laser-gyro facility in the world, in Canterbury, New Zealand (2007), found no evidence of a gravitomagnetic field within measurement resolution. So, thinking about the ARC experiment and other claims of anomalous acceleration signals from superconductors subjected to various conditions, I came up with an alternative explanation for such signals. To be sure, this hypothesis is very much in the amateur league. This extremely speculative model posits an underlying reason for MOND’s acceleration scale (a0). In principle, it would also extend MOND’s domain into the cosmological arena, enabling that theory to accommodate the CMB power spectrum and the Universe’s rapid large-scale structure formation, while providing a mechanism to account for gravitational lensing that exceeds the Newtonian expectation. However, I think MOND already can account for gravitational lensing, but am not entirely sure.

    In any case, I’ll post the paper at viXra when it’s completed. Papers at that archive range from quite good to crackpot, to my knowledge. This paper with its pretty far out speculations will probably land somewhere in between those extremes. Another avenue of investigation I plan to pursue is laboratory experiments with niobium. I will need to contact a local college here in New England to see if they would allow experiments with their liquid helium facilities to be conducted.

      1. Philip, I had a recollection of a comment somewhere on the net, probably a year or two ago, where the individual, a professional scientist, stated that not every paper at viXra was bad science. It would be hard to find that comment now. But that is what I based my remark on.

          1. Well, “a comment on the net”, even one by a “professional scientist”, could probably be found to support any position one cares to name. But more troubling is that you wrote “to my knowledge”, while now you admit that a) it is just hearsay and b) from a blog comment. :-(. Also, just because something is not “bad science” does not mean that it is “quite good”.

          All the same, a huge fraction of the stuff on viXra is crackpot. Anyone posting something non-crackpot there who is not able to conclude from that that probably no serious scientist will ever read his paper there is so far removed from reality that it is probably fair to question his judgement in general.

          It would probably be difficult to find a stronger critic of arXiv than myself. But the alternative shouldn’t be a site which accepts everything with no quality control whatsoever.

            1. I think you’re being unnecessarily harsh on viXra. I just googled “Are there any decent papers at viXra.org?”, and there were positive reviews. Even before dialing that query my memory was jogged, and I remembered reading a FAQ on the site by its founder. His name is Phillip Gibbs, and perhaps this is where I drew my vague memory of a “professional scientist”. I don’t know if he is a scientist and honestly don’t care. Anyway, that FAQ by Gibbs spells out the situation at that repository very clearly. I do not doubt that viXra is full of crank papers, as you said. But I think to paint every single paper there as crackpot with a broad brush is going too far. True, I can’t point to a specific paper there that would meet the most rigorous scientific standards, and I’m not going to bother to try to find one. All I care is that it is a venue to air my ideas.

            And, let’s face it, Dark Matter hasn’t been detected yet in any laboratory experiment and DM’s parameter space keeps shrinking. And as Stacy has pointed out the goalposts for it keep changing. At some point theorists may run out of real estate to plant those goalposts. I’m not sure that Dark Matter is any better an idea than some of the ideas posted at viXra. I really like the attitude that Phillip Gibbs expresses in his FAQ about viXra. It’s definitely worth reading.

            1. I certainly didn’t write in this thread that “every single paper there” is crackpot. As to airing your ideas, sure, you can do it there, but just don’t expect any serious scientist to read them.

              Well, the parameter space always shrinks until something is found or it is down to zero, no surprise there. It took a couple of decades to detect neutrinos, and we knew where they were coming from and how many there were. And dark matter doesn’t have to be a particle at all. So absence of evidence is not evidence of absence for dark matter in general. As I’ve mentioned here before, it is a fallacy to rule out a special case of something and then claim that the general case has been ruled out.

              1. What if they are looking at it backwards? That gravity is not so much a property of mass as mass is an effect of a “curvature” that starts all the way out where information can first be extracted/quantized?
                Say any measurement/absorption of light quantizes it, and this starts an inward curve that goes all the way to the edge of the black hole, by which time all energy/light is radiated back out, or shot out the poles, such that black holes are more the eye of a storm than some tunnel into another dimension.

              2. Here are a bunch. I don’t know when technology will make them testable. All standard matter particles are powered by one or more immutable point charge dipoles in groups of up to 3, providing 3 dimensions of containment. The canonical stable form is the three electrino, three positrino energy core made from three nested (or captured) dipoles at different energy scales. The 3:3 energy core can decay to 2:2 but then it is missing a dimension of containment. Basically, generation + dipoles = 4. So tau neutrinos and tau leptons have one dipole. A muon has only two dipoles in its energy core, so it wobbles.

                continued at https://johnmarkmorris.com/2021/04/20/npqg-april-20-2021-triton-station-discussion/

                which concludes with this zinger ……

                Don’t bring a metaphorical knife to a magnetic field fight. I have 15 orders of magnitude on you and the math is not in your favor. 🙂

              3. That wasn’t supposed to completely download. Here is the link, with a gap;
                h ttps://fqxi.org/data/essay-contest-files/Reiter_challenge2.pdf

  9. I left out an important detail in my previous comment that may lead to confusion for those not familiar with the niobium ring superconductor experiments conducted at the Austrian Research Center (ARC) between 2003 and 2006. The object of their experiment was to induce a gravitoelectric (acceleration) current in four accelerometers placed at the cardinal points of the compass inboard, and above, the niobium ring. To achieve that, just as with electromagnetism (inducing a current in a coil of wire via changing magnetic flux), it was necessary to create a changing gravitomagnetic field. Accelerating the niobium ring, which was spun-up in each experiment from 0 to 6500 rpm in one second provided the (primary) changing gravitoelectric current (from the postulated massive gravitons) that should then have induced a changing gravitomagnetic field. That, in turn, was expected to induce a secondary gravitoelectric (acceleration) current in the sensors.

    But, as stated, the Canterbury facility in New Zealand wasn’t able to detect a gravitomagnetic field at the level expected from the ARC group’s theory. Actually, to be more precise, the Canterbury group rotated a lead superconductor at constant speed, rather than accelerating it from a stationary start to some rpm. Basing their experiment on the equations presented in the ARC group’s paper, they should have detected a static gravitomagnetic field with their ring-laser gyro. From the noise level of that gyro, they determined that any such field was at least 21 times smaller than predicted by the ARC group.

  10. Yes, for GR frame-dragging, the ratio of the induced angular velocity to the angular velocity of the rotating object (of characteristic size R, mass M) will be of order GM/(Rc²), which is significant for, say, Sagittarius A*, but small, say < 10^-5, for the Milky Way as a whole, so that paper looks to be wrong.

    1. By the way, calculating GM/(Rc²) (i.e. the ratio of the gravitational potential energy to rest mass energy for a test particle at radius R from the rotating object) for the Earth gives 6.95e-10 at the Earth’s surface.
      Gravity probe B (eventually) measured the ratio of frame-dragging induced angular velocity to the Earth’s angular velocity to be 8.30e-11, but the satellite was orbiting at ~10 Earth radii (R), so the approximation seems reasonable.
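The arithmetic in this comment is easy to check with a few lines of Python (a rough sketch using standard constants; the ~39 milliarcsecond-per-year figure is the published Gravity Probe B frame-dragging drift):

```python
# Rough check of the numbers quoted above: GM/(R c^2) at the Earth's
# surface, and the ratio of the Gravity Probe B frame-dragging drift
# to the Earth's own spin rate.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24           # Earth mass, kg
R = 6.371e6            # Earth radius, m
c = 2.998e8            # speed of light, m/s

ratio_potential = G * M / (R * c**2)
print(f"GM/(Rc^2) at Earth's surface: {ratio_potential:.3g}")   # ~7e-10

# 39 mas/yr converted to rad/s, compared with the Earth's sidereal spin rate
omega_drag = 39e-3 * 4.8481e-6 / 3.156e7     # rad/s
omega_earth = 2 * 3.141592653589793 / 86164  # rad/s (sidereal day)
print(f"frame-dragging drift / Earth spin: {omega_drag / omega_earth:.3g}")  # ~8e-11
```

The two ratios come out around 7e-10 and 8e-11 respectively, consistent with the figures quoted in the comment.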

      1. Oh, made a mistake, the satellite was orbiting at only 640 km above the Earth’s surface. The factor of 10 will mostly be due to not using the correct moment of inertia for the Earth.

      2. I’m open minded to GM playing a role in ‘the scientific inquiry known as DM’, but I suspect there are at least three more factors, although I have no feel for the contribution percentages. That would require a ton of data from many disparate sources, a lot of math and modeling, and probably a new subfield to study. My question is just how much is known about frame dragging at the poles of galaxy center SMBH? We’re talking about ‘stuff’ moving at or near the speed of light and that Lorentz factor curve gets pretty steep and high at those energies. Also, do we know how extreme frame dragging at the poles of an SMBH interacts with the SMBH itself? What is the energy held in that frame dragged spacetime per unit of contracted length? (that should come from GR, right?) Has all this been studied and simulated?

        1. Good questions! I would also like to know the answers to these. Back of the envelope calculations may well be misleading in this field.

  11. Hi Stacy. It’s so nice to see a new post from you. Thank you. Before commenting on your post, and with no disrespect meant to the other scientists commenting here, I really would like you to strongly consider a commenting policy similar to that on Peter Woit’s blog. Maybe this is hard because you have to review the comments? I’d enjoy seeing comments that are related to your post, and not everyone’s pet idea, or knocking down someone else’s pet ideas. Take that discussion somewhere else. To me it’s a lot of noise getting in the way of talking about the data. Speaking of data, I hope you do another EFE post. That seems to me one of the most compelling pieces of data. I also wonder if you/we can find a nearby galaxy with the right geometry and right magnitude of EFE to see a ‘velocity dipole’ (at the same R) in the rotation curve. By velocity dipole, I mean that velocity as a function of radius will have a dependence on its angular position within the galaxy (angle measured from the nearby galaxy/mass that causes the EFE). This all needs some sort of ideal nearby galaxy. (And also assumes the EFE adds as vectors.)
    Re: biases. I think there will also be a tendency in academia to stay with the common view. To come out and in any way support what is considered a ‘wacky’ idea by most of the community is to risk funding, tenure, and your career. I get the impression that ‘keeping one’s head down’ and not speaking one’s mind is more common now in academia… but I’m only on the fringe (of academia), and could be wrong.

    1. The whole point of tenure is that one can work on things without having to worry about one’s livelihood. OK, perhaps one has to work on other things in order to get there, but that seems a sensible price to pay as long as one expects the academic establishment to support one in the first place. Giving money to someone just because they have a wacky idea won’t cut it. Once one has tenure, worrying about one’s career is a first-world problem. Funding? Salary should be enough for a theoretician. The only possible worry is observing time. However, at least for MOND, the main problem at the moment is not lack of data, but lack of an elegant relativistic theory which has not been ruled out. New data might shed some light on some things; for example, I think that the wide-binaries test is interesting, but I believe that that can be done from publicly available data.

      1. I think you over-estimate the power of tenure. At least in the UK, it does not mean that you can work on what you want to work on, and it does not mean that your job is safe if you move away from the ideas that you were working on when you were appointed. I know this from personal experience.

        2. Are you in academia? Colleges and universities want to hire people with funding so they can skim some off the top. There is perhaps a negative incentive to hire some theoretician who brings in no outside funding. The whole money/funding thing is part of what breaks the ‘pure science/truth’ dream of academia. Dang, science turns out to be a human activity.

        1. Perhaps I wasn’t sufficiently explicit. I thought I had made it clear that I was in academia (for many years) but am no longer. Theoreticians are quite capable of bringing in outside funding, and I did. But when a theoretician follows their ideas to a place outside the mainstream, then it is bias all the way down, and the funding stops.

          1. I’m not sure, but my understanding is that, at least in some places, tenure as such no longer exists in the UK. In other places, such as Germany, professors are required to do teaching (around 8 hours per week) and of course related stuff such as preparing for teaching, carrying out exams, and so on, but Forschungsfreiheit (research freedom) is a very strong principle.

            But look at, say, Max Tegmark. He was hired by MIT even though (or because?) he had some non-mainstream ideas, but definitely hired as a cosmology professor. He still teaches physics, of course, but his research is now mainly on AI.

          2. exprofessor, yeah sorry my question was to Phillip H. I’ve been out of the funding stream for many years, but ~20 years ago there were things that were funded, others not. You mostly have to work on what is funded, you have no students otherwise, at the least. (I’m a solid state experimentalist and am mostly ignorant about astronomy.)

            1. I was paid to work in academia for a while (but not with a permanent job), then I worked elsewhere for a while but continued to write papers, now I am in voluntary paid early retirement so I can do what I want. I am not a typical case (in several respects).

              Yes, there might be some pressure to hire people who can bring in third-party funding so that something can be skimmed off, but that is intended for overhead and whether or not that is actually profitable probably varies from place to place. But keep in mind that the model “external funding pays for students and postdocs” is not the case everywhere. In most of the world, people are surprised (assuming they know about it at all) that some professors in the States are paid only during teaching time. Some institutes have their own funding, even for students and postdocs, and don’t rely on external funding.

              Yes, if one is not independently wealthy, one does have to think about money, but I don’t think that that is one of the bigger problems in academia. Of course, there are fewer people working on MOND than on non-MOND stuff, but my impression is that the fraction with permanent jobs or whatever is about the same as elsewhere.

    2. I’d enjoy seeing comments that are related to your post, and not everyone’s pet idea, or knocking down someone else’s pet ideas. Take that discussion somewhere else. To me it’s a lot of noise getting in the way of talking about the data.

      I will second this.

  12. I have some questions related to the CMB power spectrum. If I understood Wayne Hu’s tutorial, the photons we see as the CMB were released around the recombination event, right? Then acoustic pressure waves resulted in a pattern in the frequency range of the released photons. I get that we detect these in the microwave band now, but what was the photon frequency for each peak in the CMB when they were released? Wikipedia says “Deep surveys with X-ray telescopes, such as the Chandra X-ray Observatory, have demonstrated that around 80% of the cosmic X-ray background is due to resolved extra-galactic X-ray sources, the bulk of which are unobscured (“type-1”) and obscured (“type-2″) active galactic nuclei (AGN).” I wonder what we calculate the emitting/releasing photon frequency for those AGN photons to be. Anyone know? I did google for these, but found nada.

    1. The CMB is at a redshift of about 1100, so the wavelength then was about 1/1100 of what it is today, putting the peak around a micron, in the near-infrared just beyond the optical (of course, it is a Planck spectrum, so it covers an infinite range, but the peak is relatively strongly localized).

      Background is just anything which is not resolved by a particular observation. The X-ray background is indeed due mostly to discrete sources at much lower redshift and has nothing to do with the CMB.
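The scaling is a one-line sanity check with Wien's displacement law (a sketch using the standard values T_CMB ≈ 2.725 K today and z ≈ 1100):

```python
# Wien's law gives the peak wavelength as b / T. Cosmological redshift
# stretches wavelengths by (1 + z), so the peak at recombination was
# (1 + z) times shorter than the peak we observe today.
b = 2.898e-3            # Wien displacement constant, m*K
T_now = 2.725           # CMB temperature today, K
z = 1100

peak_now = b / T_now             # ~1.06e-3 m: microwave
peak_then = peak_now / (1 + z)   # ~9.7e-7 m: roughly a micron, near-infrared
print(f"peak today: {peak_now*1e3:.2f} mm; at recombination: {peak_then*1e9:.0f} nm")
```

Equivalently, the temperature at recombination was about 2.725 × 1101 ≈ 3000 K, roughly the photosphere temperature of a cool star, which peaks near a micron.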

      1. We see hydrogen being produced in large quantities and moving away from the galactic center in swarms, as described in this finding reported at AAS 231 (https://greenbankobservatory.org/hydrogen-clouds-from-galaxy-center/). Given that we can’t resolve sources at the farthest reaches of our instruments, how are such processes differentiated? Are scientists confident that the hydrogen clouds emanating from galaxy centers were not released in a recombination event in an AGN?

  13. As long as we are making requests of our host, Stacy, who by all knowledge is a most excellent human bean, I would like to make the following request. Could the commenters please eliminate biting personal words like “crank” and “crackpot”? It really does have an effect on me and hence my productivity. Perhaps I have a thin skin for personal insults to my thinking, but it is really insulting to be personally attacked as if I am doing something bad or evil. Maybe I’m soft, but I’d much rather that, after consideration of my ideas, you offer your opinion on the ideas and perhaps some helpful, gentle guidance, like a coach; like you have some idea of where I am going. I’m not sure you all realize how much the enthusiasts and creative thinkers rely on all of the outreaching professionals’ technical papers and presentations, books, blogs, youtubes, and comments to learn as quickly as we can. It’s just crushing to be continually subject to disdain like this. Especially when you are right! 🙂 https://johnmarkmorris.com/2021/04/18/npqg-april-18-2021/


    1. I won’t comment on the legume.

      The words crank and crackpot are used because people know what they mean. I haven’t noticed anyone attacking you personally. Criticizing someone’s work is not a personal attack. Asking for a testable prediction is part and parcel of science. If you expect to be taken seriously at all, you have to meet the same standards which everyone else does.


      1. These are bullying words aimed at the enthusiast. These days we know the importance of words, and I find it disturbing that you don’t think use of such words is a personal attack. Check out this page (http://insti.physics.sunysb.edu/~siegel/quack.html) or this page (https://math.ucr.edu/home/baez/crackpot.html) and you can easily see it is personal and calls into question the enthusiast’s sanity. I’m sure physicists avoid using such words on colleagues within the field (except when sniping behind their colleagues’ backs). Enthusiasts are people too. I’m OK with folks calling ideas nonsense, but then I have that same opinion of the mainstream narrative/interpretation below the standard model all the way through to the inflationary bang. As always, I am cool with the math and observations for the most part.


        1. Where have I used such words in a personal attack? I stand by the claim that most papers at viXra are crackpot, but that is hardly a personal attack. If it is, then we descend into the realm of cultural relativism, where everyone can have their own truth. Why bother seeking the truth, if everyone defines it for themself?


    2. As someone who takes perverse pride in being a crank, I’d say grow thicker skin. Keep in mind that it is simply the flip side of being on the outside. Back in the day, you would be a heretic. At least the stoning is only verbal.


  14. John Mark, thank you for your support of Stacy’s excellent story about bias concepts all the way to hell and heaven. Even the definition of u.e. or unbiasedly estimable functions of Gauss-Markov model parameters X was confused in math theories of inverse problems (such as generalized matrix inverses) by math authorities like Sir Penrose or Gauss Award winner A. Bjerhammar in the surveying sciences. Mathematicians and physicists like Einstein and his mentor Levi-Civita in tensor calculus could barely invert a full-rank matrix larger than 3×3-5×5, let alone an overdetermined rectangular matrix of Gaussian least squares estimation in surveying. Or Nordström’s (decades before Kaluza-Klein) ‘extra dimension’ of added constraints and redundant Kalman observations of condition or sequential least squares adjustment. Not to speak of modern analytical and digital photogrammetry of image and range (GPS) sensing with millions or billions of parameters. Such as the Gaia image sensing survey of the closest MW galaxy in a datum of z=2-3 quasars at optical (emit-to-receive travel) distance D>10B ly.

    But understanding the math theory in fitting a physical or empirical model to real observables of the universe at various resolution levels of energy frames, from CMB to galaxy groups to galaxies to stars to planetary and more local systems, is not enough – as shown by Suntola’s Dynamic Universe expansion of the locally restricted GR/QM cosmology concepts of FLRW/LCDM and 100-200 other GR-based versions. At least you have to expand the static or constant c of EM propagation speed IN space with the contraction/expansion speed C4 OF the entire mass M enclosed in space. Suntola DU starts by enforcing ONE single constraint or condition equation for energy balance between the motion of M along the R4 barycenter direction and its opposing gravitational energy. It produces all known and already proven local results of GR/QM as a special case and removes their bias term (1+z)^2 that prompted the BIASED GR (vs Ptolemy) epicycle correction terms of Dark Energy and Dark Matter – the biggest mistake in the history of science! Despite the lesson of the Sun- vs Earth-centered world view controversy some 500 years ago…


  15. @ Phillip Helbig

    “Cosmos is a universe” is too vague. You severely misunderstand the redshift-distance and velocity-distance laws.

    Let me explicate for you. The foundational assumptions of all Big Bang models are:

    1. The vast Cosmos we observe constitutes a singular, coherent, simultaneously-existing entity – a Universe.
    2. The cause of the observed cosmological redshift-distance relationship is some form of recessional velocity.

    Neither of those assumptions has any basis in direct empirical observation. They were adopted when the known scale of the cosmos barely extended beyond the Milky Way. In the context of our current observational knowledge, they are both naive and simplistic, and they have resulted in a physically absurd and illogical cosmological model.

    “My standards are not arbitrary and capricious.”

    I quoted your standards for competing cosmological theories accurately. Again: “…a theory which derives everything with no free parameters and no input from observations…” Not only are those standards capricious and arbitrary, it should also be pointed out that you do not hold your own beloved cosmological belief system to them.

    “Spectral index of CMB perturbations close to, but slightly less than 1…”

    Talk about grasping at the straws of minutiae. Let’s get this CMB nonsense cleared up for you. It is a matter of historical record that, in the decades immediately prior to the direct detection of the CMB (in 1965), estimates by Big Bang cosmologists for the CMB temperature ranged over an order of magnitude. That range did not encompass the observed value.

    At the same time, numerous estimates for the ambient temperature of the cosmos, based only on thermodynamic considerations, neatly clustered around the observed value. So all the BB predictions were wrong, both individually and in the aggregate.

    No problem of course, the model was fit to the data and the BB rolled merrily along claiming a triumphant success. Since then, it’s been nothing but model-fitting all the way down – with the occasional exception, where even a blind squirrel finds a nut sometimes, around the margins of our observational limits.


    1. Do you even know what the spectral index is?

      I agree with the other commentator who suggested that Stacy moderate comments such that obviously crackpot (yes, that is the word) stuff doesn’t show up. That doesn’t mean a Ministry of Truth, it just means rational discussion where people might actually learn something.


  16. I’ve got to hand it to you @Phillip Helbig. Your last comment is quite a demonstration, inadvertent though it may be, of the pseudo-scientific, closed-minded certitude that characterizes fundamentalist believers of all stripes, not just those of the Big Bang cult. As to the crackpot label, I use it myself sometimes, though I do not apply it to people who merely disagree with me. I like to reserve it for those who spout blithering inanities:

    “Dark matter and dark energy could just be described as new discoveries, not patches. Linnaeus didn’t know about gorillas. Did their discovery somehow invalidate the binomial system?”

    — Phillip Helbig (https://tritonstation.com/2021/04/16/bias-all-the-way-down/comment-page-1/#comment-19276)


  17. I’ve got to hand it to you @Phillip Helbig. Your last comment is quite a demonstration, inadvertent though it may be, of the pseudo-scientific, closed-minded certitude that characterizes fundamentalist believers of all stripes, not just those of the Big Bang cult.

    He’s not closed-minded, not that I’ve seen; he just requires more evidence than some dude on the internet. The evidence for the Big Bang is vast: it comes from a variety of unconnected observations, and it does a very good job explaining current observations. Any alternative would need to be based on an equally large amount of evidence and do an even better job of explaining current observations. Ideally, it would make predictions for new observations that could be used to further check it.


  18. “The evidence for the Big Bang is vast, it comes from a variety of unconnected observations, and it does a very good job explaining current observations.”

    There is no direct empirical evidence for any of the standard model’s hypothetical “explanations” of things that are actually observed. Fitting an ill-conceived model to observations via mathematical modeling supplemented by unverifiable postulates is a trick as old as Ptolemy.

    Just so there is no misunderstanding here, I’m not the one positing the existence of entities and events for which there is no direct empirical evidence. That is, however, the situation with all variants of the BB model, including LCDM. There is no direct empirical evidence for the BB event and its inexplicable original condition, the inflation event, Wheeler’s causally-interacting spacetime, dark matter, or dark energy.

    None of those structural elements of the standard model are empirically verified or verifiable. They may be inferred from empirical observations, but those empirical observations only constitute model-dependent inferential evidence. There is no empirical evidence for the standard model’s structural elements as listed. None.

    The desire, expressed by you and @Phillip Helbig, to have that argument suppressed here indicates that your intellectual capacity to reason scientifically is severely constrained by your unscientific beliefs. Further, that desire represents the epitome of closed-mindedness and is fundamentally unscientific in nature.


    1. Budrap,
      I also commend you, though it seems a fruitless exercise.
      I will ask you, as someone not enamored of the current cosmology: do you see the logic of my previous point, that cosmic redshift as evidence for expansion still assumes an otherwise stable speed of light as the metric against which the expansion is being measured? That makes light speed the real “ruler,” the actual denominator of space.
      So either we are at the center of the entire universe, or redshift is due to an optical effect.
      For me, it’s not really a question: the basic logic of BBT is inherently contradictory, and no one actually even bothers to address it, other than to refer me to various papers that also ignore the problem. Nor am I surprised by the lack of acknowledgement from the true believers, as I’ve encountered that social response to quite a number of other issues, where the group is not to be questioned. But it would be interesting to ask someone who understands the situation and is skeptical whether they see it as a serious fallacy, as I do.


  19. Dr. McGaugh suggested “A more general theory of dynamics must exist; we just need to figure out what it is.” Let’s start with the questions: How does nature allow particles to transact energy in multiples of h-bar angular momentum? How does a spacetime geometry store energy? How do particles transmute between different forms if they are fundamental and indivisible? How does nature decide if a single h-bar is to be transacted? What are virtual particles, and how can they be almost-real particles? Why is it that physicists can’t really explain spin?

    Now, transport yourself back 150 years or so and start thinking about classical point charges, but give them a model with a field effect that makes them immutable at Lp/(2pi) radius. Then, astro folks, just think about orbits of point charges with Coulomb’s law. You are the world’s best scientists for orbits. Give the particle physicists some outreach and help them understand orbits of point charges. Besides the forces of attraction and the kinetic centrifugal (did I get that right?) force, also model the transit time for each point charge’s electromagnetic field, and also the B fields. Now, imagine that every standard matter particle has at least one orbiting pair of point charges, and that those orbits are determined by a system with feedback, so it transacts angular momentum in h-bar.

    Dr. McGaugh, this is all you need for a more general theory of dynamics: two point charge types, the electrino at -e/6 and the positrino at +e/6, the energy they carry, and a Euclidean space and time. After some thought experiments you will realize that an electrino:positrino dipole is a variable clock and a stretchy ruler. Next, you realize that both spacetime and matter-energy have variable clocks and stretchy rulers. Therefore, spacetime is not a geometry, but an aether of something also made of dipoles.

    Lastly, would the Michelson-Morley experiment be defeated by an aether made of particles that maintain a relationship between their clock (frequency) and ruler (radius of orbit) such that c is a constant?


    1. I wrote a blog post where I contemplate symmetries of two approaching immutable point charges. C and P symmetry were easy to show. T symmetry is more confusing because, of course, absolute time cannot run backwards, by definition. However, when you understand how nature implements local time, via the frequency of point charge dipoles, it becomes clear that T-symmetry means reversing the direction of orbit. OK, sure, that’s possible in some reactions. It’s not really reversing time per se. This makes sense. https://johnmarkmorris.com/2021/04/27/npqg-april-27-2021-the-closest-approach/


        1. Lack of interesting discussion? The same people criticizing what they see as the establishment but not offering any scientific alternative? Calling serious, open-minded people ignorant, hidebound defenders of the orthodoxy, trapped in a paradigm they are too stupid to recognize? You tell me.


Comments are closed.