I went on a bit of a twitter bender yesterday about the early claims about high mass galaxies at high redshift, which went on long enough I thought I should share it here.
For those watching the astro community freak out about bright, high redshift galaxies being detected by JWST, some historical context in an amusing anecdote…
The 1998 October conference was titled “After the dark ages, when galaxies were young (the universe at 2 < z < 5).” That right there tells you what we were expecting. Redshift 5 was high – when the universe was a mere billion years old. Before that, not much going on (dark ages).
This was when the now famous SN Ia results corroborating the acceleration of the expansion rate predicted by concordance LCDM were shiny and new. Many of us already strongly suspected we needed to put the Lambda back in cosmology; the SN results sealed the deal.
One of the many lines of evidence leading to the rehabilitation of Lambda – previously anathema – was that we needed a bit more time to get observed structures to form. One wants the universe to be older than its contents, an off and on problem with globular clusters for forever.
A natural question that arises is just how early do galaxies form? The horizon of z=7 came up in discussion at lunch, with those of us who were observers wondering how we might access that (JWST being the answer long in the making).
Famed simulator Carlos Frenk was there, and assured us not to worry. He had already done LCDM simulations, and knew the timing.
He also added “don’t quote me on that,” which I’ve respected until now, but I think the statute of limitations has expired.
Everyone present immediately pulled out their wallet and chipped in $5 to endow the “7-up” prize for the first persuasive detection of an object at or above redshift seven.
A committee was formed to evaluate claims that might appear in the literature, composed of Carlos, Vera Rubin, and Bruce Partridge. They made it clear that they would require a high standard of evidence: at least two well-identified lines; no dropouts or photo-z’s.
That standard wasn’t met for over a decade, with z=6.96 being the record holder for a while. The 7-up prize was entirely tongue in cheek, and everyone forgot about it. Marv Leventhal had offered to hold the money; I guess he ended up pocketing it.
I believe the winner of the 7-up prize should have been Nial Tanvir for GRB090423 at z~8.2, but I haven’t checked if there might be other credible claims, and I can’t speak for the committee.
At any rate, I don’t think anyone would now seriously dispute that there are galaxies at z>7. The question is how big do they get, how early? And the eternal mobile goalpost, what does LCDM really predict?
Carlos was not wrong. There is no hard cutoff, so I won’t quibble about arbitrary boundaries like z=7. It takes time to assemble big galaxies, & LCDM does make a reasonably clear prediction about the timeline for that to occur. Basically, they shouldn’t be all that big that soon.
Here is a figure adapted from the thesis Jay Franck wrote here 5 years ago using Spitzer data (round points). It shows the characteristic brightness (Schechter M*) of galaxies as a function of redshift. The data diverge from the LCDM prediction (squares) as redshift increases.
Remarkably, the data roughly follow the green line, which is an L* galaxy magically put in place at the inconceivably high redshift of z=10. Galaxies seem to have gotten big impossibly early. This is why you see us astronomers flipping our lids at the JWST results. Can’t happen.
Except that it can, and was predicted to do so by Bob Sanders a quarter century ago: “Objects of galaxy mass are the first virialized objects to form (by z=10) and larger structure develops rapidly.”
The reason is MOND. After decoupling, the baryons find themselves bereft of radiation support and suddenly deep in the low acceleration regime. Structure grows fast and becomes nonlinear almost immediately. It’s as if there is tons more dark matter than we infer nowadays.
I refereed that paper, and was a bit disappointed that Bob had beaten me to it: I was doing something similar at the time, with similar results. Instead of being hard to form structure quickly, as in LCDM, it’s practically impossible to avoid in MOND.
He beat me to it, so I abandoned writing that paper. No need to say the same thing twice! Didn’t think we’d have to wait so long to test it.
I’ve reviewed this many times. Most recently in January, in anticipation of JWST, on my blog.
See also http://astroweb.case.edu/ssm/mond/LSSinMOND.html… and the references therein. For a more formal review, see A Tale of Two Paradigms: the Mutual Incommensurability of LCDM and MOND. Or Modified Newtonian Dynamics (MOND): Observational Phenomenology and Relativistic Extensions. Or Modified Newtonian Dynamics as an Alternative to Dark Matter.
But you get the point. Every time you see someone describe the big galaxies JWST is seeing as unexpected, what they mean is unexpected in LCDM. It doesn’t surprise me at all. It is entirely expected in MOND, and was predicted a priori.
The really interesting thing to me, though, remains what LCDM really predicts. I already see people rationalizing excuses. I’ve seen this happen before. Many times. That’s why the field is in a rut.
So are we gonna talk our way out of it this time? I’m no longer interested in how; I’m sure someone will suggest something that will gain traction no matter how unsatisfactory.
The only interesting question is if LCDM makes a prediction here that can’t be fudged. If it does, then it can be falsified. If it doesn’t, it isn’t science.
But can we? Is LCDM subject to falsification? Or will we yet again gaslight ourselves into believing that we knew it all along?
58 thoughts on “JWST Twitter Bender”
The “impossible early galaxy problem” isn’t new, but JWST underlines even more strongly that it is not an artifact of methodological issues. It has now seen a galaxy at z = 13 (the bigger the z, the closer a galaxy is to the Big Bang) after just a few weeks of looking for one, when, as you note, Bob Sanders used just a MOND model to predict that the earliest galaxies would arise by z = 10.
As in the case of galaxy clusters, MOND makes corrections that are substantial and in the right direction, but reality seems to require even more inferred dark matter than MOND accounts for, even though it does much better than LCDM when it comes to galaxy formation times.
I wonder if these two cases of MOND having imperfect performance arise for the same or similar reasons, since the environment in the Universe in which the first galaxies emerged probably was fairly similar to that of galaxy clusters in which active star and galaxy formation are going on today.
Maybe? Doesn’t feel like the same issue to me, though I suppose they could be related. The trouble for MOND in clusters seems related to the hot X-ray gas, which is believed to cool and deposit baryonic mass in their centers. Where the residue of these “cooling flows” ends up is a long-standing mystery, so it could conceivably be the cluster-specific missing baryons in clusters, but the current rates seem too low to deposit enough.
“… so it could conceivably be the cluster-specific missing baryons in clusters…”
I don’t get it. Plasma is composed of baryonic matter. How would its cooling down to a gaseous or solid state account for missing baryons in clusters?
The long standing problem is that we haven’t been able to detect what the plasma turns into after it cools. So there seems to be an unseen component of baryons at the centers of cooling flow clusters. If there are enough of them – and we don’t know because we can’t see them – then these *might* account for the missing baryons in clusters.
What if they keep finding them ever further out?
As well as heavier elements in the ones on the edge.
Is there a point where the entire expanding universe model starts to come into question, or is that just too impossible to consider?
Cosmic time and redshift are inversely related, so increasingly large z means less and less actual time. Crudely speaking, the age of the universe is about 6 Gyr at z = 1, 500 Myr at z = 10, and 200 Myr at z = 20. That’s about one orbit of the sun around the Milky Way. So it is hard to imagine a galaxy that big forming any sooner. Smaller things like globular clusters could, but there just isn’t much room to do things faster. On the other hand, it shouldn’t be possible for things to form at all before decoupling (z = 200), so that would be too impossible to consider.
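The crude ages quoted above are easy to check with the closed-form age of a flat matter-plus-Lambda universe (a quick sketch, assuming H0 = 70 km/s/Mpc, Ωm = 0.3, ΩΛ = 0.7, and neglecting radiation):

```python
import math

def age_at_z(z, H0=70.0, Om=0.3, OL=0.7):
    """Age of a flat LCDM universe at redshift z, in Gyr.

    Closed form for matter + Lambda only:
    t(z) = (2 / (3 H0 sqrt(OL))) * asinh(sqrt(OL/Om) * (1+z)**-1.5)
    """
    hubble_time_gyr = 977.8 / H0  # 1/H0 in Gyr (977.8 = Gyr * km/s/Mpc)
    x = math.sqrt(OL / Om) * (1.0 + z) ** -1.5
    return (2.0 / 3.0) * hubble_time_gyr / math.sqrt(OL) * math.asinh(x)

for z in (1, 10, 20):
    print(z, round(age_at_z(z), 2))  # ~5.75, ~0.47, ~0.18 Gyr
```

This reproduces the rough numbers above: about 6 Gyr at z = 1, roughly 500 Myr at z = 10, and a bit under 200 Myr at z = 20.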
What is the earliest MOND would expect full scale galaxies?
It depends on the mass. It takes about a few hundred million years to assemble a massive galaxy. A smaller object like a globular cluster might assemble in a mere 20 million years, which is pretty much right after decoupling.
There’s a similar problem in Newtonian mechanics. Dig a tunnel through the centre of the Earth to the antipodes and drop a ball down it (air resistance and change in density with depth ignored). The ball exhibits Simple Harmonic Motion with a period that is the same as an orbit at zero altitude (again ignoring air resistance) i.e. 84 minutes. If we assume LCDM, then the minimum possible time for the baryons to reach the centre is 0.25 x the orbital period at the starting point. I don’t know how to do this calculation in MOND, but I assume the time is less. It’s a useful analogy for getting a feel of timescales.
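For what it’s worth, the 84-minute figure above is straightforward to verify for a uniform-density Earth (a sketch under that simplifying assumption; the real density profile shortens the fall somewhat):

```python
import math

R = 6.371e6  # Earth's mean radius, m
g = 9.81     # surface gravity, m/s^2

# Inside a uniform sphere, gravity is a(r) = -(g/R) * r, i.e. simple
# harmonic motion, with the same period as a zero-altitude circular orbit.
T = 2.0 * math.pi * math.sqrt(R / g)

print(T / 60.0)        # full period: ~84 minutes
print(T / 4.0 / 60.0)  # quarter period, time to reach the centre: ~21 minutes
```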
“Cosmic time and redshift are inversely related, so increasingly large z means less and less actual time.”
So basically what you are saying is that as the clock goes to zero, the ruler goes to infinity?
I used to joke the next patch will be that astronomers have discovered the edge of the universe to be mirrored, but apparently it isn’t a joke.
I still think an optical effect, that compounds, eventually going full parabolic by 13.7 billion lightyears, would be a rational solution. As I posted a paper on, in a previous thread, one way light does redshift over distance is as multi spectrum packets, as the higher frequencies dissipate faster. Which would mean the quantification of light is a function of its absorption and measurement, so we are sampling a wave front, not detecting individual photons that traveled billions of lightyears.
As I posted a paper on, in a previous thread, one way light does redshift over distance is as multi spectrum packets, as the higher frequencies dissipate faster.
If this were true, there shouldn’t be any gamma-ray bursts detectable from distant galaxies.
” it shouldn’t be possible for things to form at all before decoupling” – this is true if the BB happened. But based on the previous comments and their blogs, I believe Brodix and Budrap doubt this.
So I believe the original question from Brodix was asked from this perspective – what if we find galaxies even further out than z = 200? Will the community, in this case, start to question the BB, or will it try to force a different parameter fitting or add new parameters to LCDM?
If I were the one to answer, I doubt we will find galaxies at z > 200.
But if we do, I doubt that LCDM would be abandoned, at least not on this basis alone. My bet is that the community will try different fitting parameters to push the BB a bit further back in time, just enough to accommodate the structure found at high z.
You’re not wrong about my view of this matter, but I would state it quite a bit more broadly. I would say that the problem with the standard model is not CDM or Lambda; they are symptoms of the underlying problem. That problem is the 100 year old FLRW model.
Cosmology isn’t going to get out of the rut it’s in until it reconsiders the FLRW model and its assumptions. That reconsideration should include an open-minded investigation of models based on the accumulated knowledge of the last 100 years and such models should not share the foundational assumptions of FLRW.
What this would require is essentially for astronomers and astrophysicists to provide qualitative assessments of the physical systems they observe to theorists, who should then devise mathematical models limned by that vast trove of empirical facts we have acquired since 1922. New models should be based on empirical facts, not on long-outdated theoretical assumptions.
You’re right, of course, that the BB event would most likely be pushed back to accommodate any galaxies that intrude into the current version’s forbidden zone. Nothing new about that. Model fitting has been the name of the game for years now.
“Is LCDM subject to falsification? Or will we yet again gaslight ourselves into believing that we knew it all along?”
Well, some are saying that these z > 10 galaxies weren’t unexpected and aren’t a problem for LCDM, as the first stars/galaxies community has been studying them for some time now.
No surprise, let alone panic, you just have to find the right simulation.
Sorry, my previous message went out too fast. I wanted to add some thanks for your blog. Here it is, with the error corrected.
It is early days, so we have to be careful what we mean by what. So first, it was absolutely unexpected that there would be massive (L*, Milky Way size) galaxies in place by z = 10 in LCDM, whereas that’s what Sanders was predicting for MOND. Anyone who says otherwise has revised the paradigm and/or is gaslighting us.
The LCDM paradigm gets revised a lot, which is why I keep asking what it really predicts. The assembly of dark mass is pretty well understood, and proceeds gradually. There is considerable room to play with putting stars in those dark matter halos, which is the rub since we can only see the stars. Still, while we might expect to see lots of stars form early, they should be divvied up into small fragments. (We called these “Searle-Zinn fragments” back before CDM.)
Like you, I see claims like this. One I saw cited simulations with lots of million solar mass (in stars) objects at z=10. Those are fragments. The JWST observations were claiming objects of billions of solar masses. A billion is bigger than a million, even to astronomical accuracy.
The recent paper you cite appears to have lots of fragments, attributing reionization to objects of 10 million solar masses in stars, which is reasonable. Their z=10 bin in Fig. 9 goes up to a few hundred million and stops. There’s just nothing bigger than that at that time, and they note that they only get those in this really big simulation. If you look at a huge enough volume, you’ll see really rare objects like unicorns. You shouldn’t see lots of unicorns leaping out at you in the first deep JWST image.
Early days still: all these things need to be counted & quantified.
A further complication is that star formation is an ongoing process from the beginning until now in galaxies like the Milky Way. So even if we manage to assemble 70 billion solar masses (my estimate for the MW) already at z=10, it won’t all be in the form of stars. So seeing something with a billion solar masses already in stars implies that a lot more of the gas that fuels star formation has already assembled: a big down payment on a bright future.
This time, just in time: many thanks for this answer.
Wow, this is amazing. MOND just keeps hitting home runs!
It does. Most scientists are simply unaware of the number of successful predictions it has made.
I’m guessing MOND could be further modified to match z = 15 as well, perhaps by adding an a1 at which the force’s radial dependence drops to r^-0.5, or something. That seems like a natural mathematical progression to me, although I know naturalness is not a dependable criterion.
There’s very little cosmic time between z = 10 and 15 (200 Myr, about one solar orbit), so lots could (and in MOND, should) be going on in the range 10 – 20. Even in LCDM you’d expect lots of little things (see above). There’s nothing magic about z = 10; it’s just a convenient marker for when MOND would have already made lots of big galaxies and LCDM would not.
Retaking my question from the post where you made a prediction about what JWST will see – “is it possible to place some bounds [in MoND] on the expected abundance for spirals vs ellipticals at the distance observable by JWST”?
Are you aware of some work done in this regard?
I do not have a specific prediction myself for the relative abundance of spirals vs. ellipticals in either theory. We know from low redshift galaxies that spirals have ongoing star formation while ellipticals are mostly done with that – “red and dead.” So the expectation that follows from this is that the first big galaxies are ellipticals. This knowledge is older than either theory, so is kind of baked into each. The critical difference is that LCDM makes old stars in small clumps that subsequently merge together (and take their time about doing so) while MOND does the assembly more promptly.
I see there are already claims of galaxies at z = 13–20 in the JWST images.
Can’t wait to see how this will go!
The first galaxy at z = 16.7: https://arxiv.org/abs/2207.12356
“Assuming our constant SFH model, we find that star formation first began in this object between 120 and 220 Myr after the Big Bang (𝑧 = 18 − 26).”
Yes, this is already getting hard to keep track of. It will also take some time to sort out: some of these redshifts will turn out to be bogus, and modeling the star formation at early times is highly non-trivial. And it ain’t just the stars: to make that many stars, you first have to pool together at least that amount of gas – and probably a lot more.
When Jay Franck made the plot above just 5 years ago, it seemed wild to just magically put in place a galaxy that formed as “early” as z=10.
Could someone please explain how the galaxy mass can be inferred from the luminosity (without spectra) when we don’t really know the stellar population of these high-redshift galaxies? What if they have many more extremely luminous Population III stars than expected? The inferred mass could easily be off by factors of thousands if one is using the wrong templates.
Also, how do they know they are not AGN? Are these sources (convincingly) spatially resolved?
Aniron, from the paper: “The object is also clearly resolved in the NIRCam imaging data, and so cannot be a low-mass star or unobscured active galactic nucleus. We have re-calculated the photometry for this object using a variety of aperture sizes, but this does not change our recovered redshift. Having searched extensively, we are currently unable to find any plausible explanation for this object, other than a galaxy at a new redshift record of 𝑧 = 16.7.”
Also “For probing beyond z ≃ 7, and resolving such uncertainties, the capabilities of the NIRCam camera on-board JWST are transformative, with complete multi-band imaging now available out to λ ≃ 5 μm with unprecedented angular resolution.”
NIRCam does achieve credible angular resolution on galaxy CEERS 93316, so it’s likely not an AGN.
Here is a photo: https://twitter.com/ACCarnall/status/1551864183230177280/photo/1
About 1 billion solar masses. A hint of an elliptical.
Aniron, in the z=16.7 paper’s discussion there is a statement about Population III stars: “By separate analysis, we recover a rest-frame UV spectral slope, β = −2.2 ± 0.1. In combination with our Bagpipes fit finding dust attenuation, A_V, consistent with zero, this suggests no evidence for an unusual (i.e., Population III dominated) stellar population.”
So they have reasoning that the luminosity is not skewed by the Population III stars.
Can someone help me with something? According to some sources, the velocity v of an object is equal to the redshift z times c: v = cz.
Obviously this only works if z < 1.
So what does it mean when comments here talk about z = 16.7 or even z = 10?
Certainly I have something confused (not a first, not a surprise). Can someone clarify?
Recession is caused by the expansion of space, not the actual movement of galaxies, so it is quite possible for the relative velocity of two galaxies to be greater than the speed of light.
That’s right, spacetime is just a cover story for the fact that the Doppler shift interpretation of redshift leads to observed speeds in excess of c. Spacetime (expanding version) is another one of the invisible helpers (inflation, dark matter, dark energy) that the standard model needs in order to reconcile itself with empirical reality. The standard model is a mess.
So I did a little digging. The Sloan Digital Sky Survey (https://skyserver.sdss.org/dr1/en/proj/basic/universe/redshifts.asp) says
v = cz, where 1 + z = observed wavelength / rest wavelength. This constrains z to be less than 1.
The Las Cumbres Observatory site (https://lco.global/spacebook/light) says
z = (observed wavelength – rest wavelength) / rest wavelength. This results in z values similar to those used on this thread and other sites (Ex: https://esahubble.org/news/heic1219/) with values as high as 19.
Can someone give a serious explanation for these two different formulas for z?
z = v/c is the low velocity approximation to the fully relativistic formula (1+z)^2 = (c+v)/(c-v). As v gets close to c, z tends towards infinity. When v is close to zero, z tends to v/c, which is an adequate approximation for many applications (including the original Hubble Law).
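A couple of lines of code make that relation concrete (this sketch uses only the special-relativistic Doppler formula above; cosmological redshifts are properly interpreted via expansion rather than this form):

```python
def v_over_c(z):
    """Recession speed implied by the relativistic Doppler relation
    (1 + z)^2 = (c + v) / (c - v), solved for v/c."""
    s = (1.0 + z) ** 2
    return (s - 1.0) / (s + 1.0)

print(v_over_c(0.01))  # ~0.00995: close to the v = cz approximation
print(v_over_c(13))    # ~0.9899: large z saturates toward v = c
```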
That fixes the faster-than-light-speed issue, sure, but the fundamental problem that troubled Hubble remains – unrealistically high velocities that only appear at large cosmological distances. For instance the reported redshift of 13 translates via the relativistic formula you give to a recessional velocity of .9898c. The fix for that is “expanding spacetime” – a just-so story that has no empirical justification, but turns the recessional velocity interpretation into the expansion of a reified “spacetime.”
So the choice is rather stark in the current environment: either we believe in recessional velocities of galaxies that are unrealistic (and do not exhibit any relativistic distortions due to such high relative velocities), or we believe in a recessional velocity that is not a typical recessional velocity but one imparted by the expansion of an otherwise undetectable physical entity, which somehow subjects the receding galaxies to apparently high recessional velocities without invoking any relativistic velocity effects.
We are stuck with those two unpalatable choices because we are stuck with the FLRW model and its expanding universe story. Modern cosmological theory will remain absurd as long as it clings to FLRW. As soon as you remove the FLRW frame, the need for a Big Bang, inflation, expanding spacetime, dark matter and dark energy is gone and the arduous task of constantly refitting that junk pile to reality dissipates like a night fog in the summer sun.
It’s time to move on. The standard model can’t be mathematically massaged into a realistic theoretical representation of physical reality; it needs to be scrapped just as it was necessary to scrap geocentrism 500 years ago. We need a new framing of cosmological systems based solidly on observations and well established physics. That new framing will need to be relativistic in nature, eschewing the now weird-looking, premodern cosmological assumptions of 100 years ago: the Cosmos is not a gigantic, expanding, 4-dimensional gasbag.
“subject the receding galaxies to apparently high recessional velocities without invoking any relativistic velocity effects.”
This is something that I want to ask. Up to a week ago, that was also my understanding, but Phil Plait posted several days ago about a gamma ray burst that the team reporting it said it took around a second in the galaxy’s reference frame, but for us it lasted about 2 seconds due to the relativistic recessional velocity of the galaxy.
So – do we see relativistic effects in distant galaxies that must be compensated for when measuring, for instance, the rotation speed? In other words, without these corrections, do far galaxies seem to rotate more slowly?
I was thinking more along the lines of direct morphological consequences. Galaxies with a recessional velocity > .9c should exhibit significant foreshortening in the radial velocity direction due to relativistic length contraction. The effect should flatten the leading edge of any high recessional velocity disk galaxy with an inclination to the radial velocity.
That said, I wouldn’t be surprised to see relativistic consequences invoked where convenient and ignored where inconvenient. Logical consistency is not always a hallmark of theoretical physics these days.
See also http://astronomyonline.org/Science/RelativisticRedshift.asp
DM doesn’t fit the SPARC data. Mariia Khelashvili, Anton Rudakovskyi, Sabine Hossenfelder, “Dark matter profiles of SPARC galaxies: a challenge to fuzzy dark matter” arXiv:2207.14165 (July 28, 2022).
Not specifically MOND-related, but this new spectroscopic instrument (WEAVE) on the William Herschel Telescope on La Palma, will help us to understand how our Milky Way galaxy evolved into what it looks like today: https://www.bbc.co.uk/news/science-environment-62321537
Length contraction in relativistic settings happens along the direction of movement. In order to see the effect you’d have to view the object sideways. This doesn’t happen with galaxies, however. We see the galaxies along their direction of movement so we should not see the effect of length contraction.
@Apass, that’s correct, and that’s the reason I specified a galaxy with an inclination to the radial direction. You could never see the full effect, but the leading edge of an inclined galaxy should exhibit a degree of contraction dependent on the angle of inclination. At speeds above .9c and at a 45-degree inclination, about half the full effect should be detectable.
Wouldn’t galaxies with recessional velocity > .9c have relativistic length contraction across the entire galaxy? The entire galaxy has that high recessional velocity; not just the leading edge. Because of a pretty uniform length contraction, being able to determine the actual inclination might be quite difficult.
As I said, length contraction happens along the line of sight, so for us it will not be visible.
Take, for example, a spiral galaxy with a round shape that is inclined relative to us at a certain angle about an axis perpendicular to our line of sight. For simplicity, let’s say the axis about which the galaxy is inclined is horizontal.
If the galaxy is static, we will see the galaxy as an ellipse with a major axis equal with the diameter of the galaxy and a minor axis given by the diameter and the inclination angle. Let’s assume that the far point (i.e. the point that is at the largest distance relative to us) is seen above the major axis and the near point is seen below the major axis.
Now, this same galaxy moves away from us – that is, in a direction aligned with our line of sight – at relativistic speeds and experiences length contraction in the direction of travel (let’s say this direction of travel is horizontal, again, for simplicity). That means that the distance between the far point and the near point of the galaxy shrinks when measured horizontally – along the line of sight. But the distance between the far point and the near point remains unchanged vertically (perpendicular to the line of sight) so from our perspective, the minor axis of the ellipse stays the same as if the galaxy remains static.
So length contraction, from our perspective, doesn’t affect the morphology of the galaxies as we see them.
But I’m still curious if we see relativistic effects in galaxies and that they are compensated when trying to measure rotational speeds.
What you are doing here is substituting a Special Relativity analysis in a situation where General Relativity applies. Under the conditions we are talking about here, where a galaxy is receding at speeds in excess of .9c, the physical consequences of length contraction become extreme. Galaxies at that speed are essentially smashing into the wall of the light-speed limitation – they are being destroyed as galaxies. Here are the numbers:
At .9c length contraction is .44
At .99c – .14
At .999c – .045
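Those factors are just the length contraction factor 1/γ = sqrt(1 − (v/c)²); a quick check:

```python
import math

def contraction(beta):
    """Length contraction factor 1/gamma = sqrt(1 - (v/c)^2) for speed beta = v/c."""
    return math.sqrt(1.0 - beta ** 2)

for beta in (0.9, 0.99, 0.999):
    print(beta, round(contraction(beta), 3))  # 0.436, 0.141, 0.045
```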
This is a physical contraction, one that has to be accounted for in collider results:
“Heavy ions that are spherical when at rest should assume the form of “pancakes” or flat disks when traveling nearly at the speed of light. And in fact, the results obtained from particle collisions can only be explained when the increased nucleon density due to length contraction is considered.”
The physics of the situation is unforgiving – a galaxy approaching the speed of light would be physically flattened like a pancake – in its own frame; it would no longer look or function like a galaxy.
The whole point of this exercise is that we do not observe such a contraction, and in realistic physics we would not expect to see such a high galactic velocity, if for no other reason than the enormous amount of energy that would be required to attain such speeds. The difficulties with such high velocities have long been known. The “fix” for the problem was the expanding spacetime story in which GR consequences can be ignored but a pseudo-recessional-velocity of sorts can be retained.
So nobody talks about recessional velocity anymore even though the standard model is based on the redshift = recessional velocity interpretation. The wholly imaginary spacetime explanation is just a “creative accounting” way of retaining the model despite the impossibility of one of the model’s central conceits. The Big Bang model does not make physical sense, has never made physical sense, and never will make physical sense.
There is no shame in pursuing an incorrect model; failure is a necessary component of the scientific endeavor. However, it is catastrophic that cosmology in academic science has become nothing but an exercise in justifying the BB model in spite of its repeated failures, its logical incoherence, and its physical absurdity. This situation is unsustainable and needs to be rectified.
If cosmology is to become a science, some forward-looking, accredited scientific institution or group of individuals needs to found and fully fund something like an “Institute for Modern Cosmology”. The principal purpose of that institution would be to do a close physical analysis of all the cosmological data available and devise from that data and known physics, models that are based, not on empirically baseless 100-year-old assumptions but, solely on the data available and known physics. In other words, the empirical basis of all scientific inquiry has to be effectively recovered from the rubble of modern theoretical physics which quite obviously has no such basis.
As far as I can tell (as a non-expert), there should be no visible contraction in what we observe, just extremely redshifted light. This can be interpreted in a rather straightforward way as light coming from a very distant object through spacetime with non-zero cosmological constant (and appropriate initial conditions). No enormous amount of energy is needed at all, and nothing is destroyed.
Excluding the domain where we see MOND phenomenology, General Relativity is doing well for everything else, isn’t it? This extreme velocity for an extremely distant object is completely normal, and I’m sure inhabitants there, if any, feel very much still. From their own perspective, we are the one moving away really fast.
(I promised myself I would post no more, but when I read that I just can’t help it. Sorry!)
Wait a minute! A galaxy with a recessional velocity of > .9c is (locally) moving much, much slower than that. Its recessional velocity is a consequence of cosmic expansion, NOT relativistic motion through space. We see it moving > .9c, but an observer in that galaxy would not experience that; they’d see US moving > .9c. Is that not correct?
This is why the redshift we see at very great distances is not a classical “Doppler” effect, but comes from the elongation of waves as they travel through expanding space: cosmological redshift.
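The “elongation of waves” picture can be stated in one line: the observed redshift is set by how much the universe has expanded between emission and observation, 1 + z = a(obs)/a(emit). A minimal sketch (the function name and numbers are mine, not from the comment):

```python
def cosmological_redshift(a_emit, a_obs):
    """Redshift z of light emitted when the cosmic scale factor was a_emit
    and received when it is a_obs: wavelengths stretch by a_obs / a_emit."""
    return a_obs / a_emit - 1.0

# If the universe has expanded 8x since the light was emitted,
# every wavelength arrives 8x longer:
z = cosmological_redshift(a_emit=1.0, a_obs=8.0)
print(z)  # 7.0
```

No motion through space appears anywhere in this formula; only the expansion factor enters.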
What are your thoughts on this article which appeared two days ago on Vice?
Dark matter scientists chasing their dark matter tails.
Joost, why do you think this paper is relevant? If dark matter is inferred for galaxies nearby, in my opinion prematurely, considering MOND’s successes – does it matter that they applied the same to galaxies far away? Or do you think the inference they use is valid (extra gravitation means dark matter)?
Either LCDM’s dead or my watch has stopped.
dlb & sean samis,
Here’s what I said:
“The “fix” for the problem was the expanding spacetime story in which GR consequences can be ignored but a pseudo-recessional-velocity of sorts can be retained.”
dlb: “This can be interpreted in a rather straightforward way as light coming from a very distant object through spacetime with non-zero cosmological constant (and appropriate initial conditions).”
sean samis: “Its recessional velocity is a consequence of cosmic expansion, NOT relativistic motion through space. ”
So to the criticism that spacetime is a made-up, just-so story to cover up the problems with the recessional velocity interpretation, you respond that the expanding spacetime story fixes the relativistic velocity problems. In some strange way we agree. The difference is that I don’t believe in spacetime except as a relational concept, whereas you both seem to think of it as a causally interacting entity in the Cosmos. Unfortunately there is no empirical evidence to support your belief in such an entity.
As to the question of relativistic velocities, neither of you, nor Apass, seems to be clear about the different physical consequences of SR and GR. Maybe these examples will help:
It is a common thought experiment in SR to picture two spaceships passing each other at constant velocity, far from any gravitating source. There are two observers, one on each ship, and each has a clock. As the ships pass, each observer sees the other ship’s clock running more slowly than their own. Neither clock is actually running slower than the other; it only appears that way to the remote observer. This is the model you are attempting to apply to the recessional velocity issue; it is the Special Relativity case, where neither gravity nor acceleration is a factor.
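The symmetry of that inertial case can be made concrete with the Lorentz factor: each observer applies the same γ to the other’s clock. A toy sketch (speeds as a fraction of c; the 0.9c figure is illustrative, echoing the recessional velocities discussed above):

```python
import math

def lorentz_gamma(v):
    """Lorentz factor for relative speed v, expressed as a fraction of c."""
    return 1.0 / math.sqrt(1.0 - v * v)

# Ships passing at 0.9c: each sees the OTHER clock run slow by the same
# factor -- the dilation is symmetric, so neither clock is really slower.
g = lorentz_gamma(0.9)
print(round(g, 3))  # 2.294
```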
Now consider two observers, one at sea level, the other on a nearby mountain top. Each has a clock. The situation with the two clocks is quite different because of the presence of Earth’s gravitational field. Both observers agree that the clock at sea level is running slower and the clock on the mountain is running faster. This is a real physical difference; it is not like the SR case where the clocks each only appear to be slower to the remote observer. It is the difference between inertial (SR) systems and non-inertial (GR) systems.
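The mountain/sea-level asymmetry can be estimated with the standard weak-field gravitational time dilation formula, rate ratio ≈ 1 + gΔh/c². A rough sketch (the 3000 m altitude is my example, not from the comment):

```python
c = 2.998e8   # speed of light, m/s
g = 9.81      # Earth's surface gravity, m/s^2

def clock_rate_ratio(delta_h):
    """Weak-field ratio of a higher clock's rate to a lower clock's rate
    for an altitude difference delta_h (metres): 1 + g*delta_h/c**2.
    Unlike the SR case, BOTH observers agree the higher clock runs faster."""
    return 1.0 + g * delta_h / c**2

# A 3000 m mountain: the summit clock gains roughly 3e-13 seconds per second.
excess = clock_rate_ratio(3000.0) - 1.0
print(excess)
```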
GR considerations apply to non-inertial systems, especially in strong gravitational fields or at high accelerations. Time dilation and length contraction are physically meaningful when GR considerations apply as they would if a galaxy were accelerating near the speed of light. No pancaking has been observed and none will be because the redshift is not caused by a recessional velocity of any sort. That assumption has outlived its usefulness. Imaginary spacetime is just a coverup of the fact that the recessional velocity interpretation of redshift, on which the standard model of cosmology is based, has failed.
Maybe an example of what I mean, like you did, will make this discussion clearer, and you can pinpoint in what I write where the error is.
Say we have an empty spacetime and two pebbles, A and B, separated by 10 billion light-years, initially at rest relative to each other. I’m using the large distance to make sure their gravitational field can be completely neglected and only the Cosmological Constant (CC) kicks in.
Assuming the CC is zero, pebble A receives light from pebble B 10 billion years after it has been emitted, and from the point of view of A, B is at rest and suffers no contraction. The light is normal; there is no redshift.
If the CC is greater than 0, when A receives light from B:
- A longer time has elapsed; for example, the light arrives 20 billion years later instead of 10. You can challenge that statement if you want to go technical; I’m just writing this so other readers have an idea of what’s going on.
- B appears at rest and there is no contraction (recall that B did not move in the slightest; it did not accelerate).
- The light is redshifted. If you want, you can say “redshift, therefore recessional velocity, so says Mr Doppler,” or you can say “redshift, therefore non-null CC and B did not move, so says Mr Einstein.”
The reason we use Doppler’s view is that, if you tried to send a rocket from A to B, the longer we wait to send it, the longer it would take for the rocket to travel to B in its own frame. A massive rocket might even never be able to reach B.
You have all the consequences of a recessional velocity.
But pebble B did not accelerate, therefore did not move. Choose the observer you want, B doesn’t move.
In fact, if A knows B is a perfect sphere, it can tell, from the absence of any contraction, that B is not moving, and it can calculate the value of the CC (I think; I’m not 100% sure of that 🙂 ).
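The pebble thought experiment can be put into numbers with a pure cosmological-constant (de Sitter) toy model, in which the scale factor grows as a(t) = exp(H t), so a comoving emitter picks up a redshift 1 + z = exp(H Δt) without ever accelerating. A sketch with illustrative numbers (the Hubble rate H and the times are my assumptions, not dlb’s):

```python
import math

def desitter_redshift(H, t_emit, t_obs):
    """Redshift of light from a comoving (never-accelerating) source in a
    de Sitter universe with Hubble rate H: 1 + z = exp(H * (t_obs - t_emit))."""
    return math.exp(H * (t_obs - t_emit)) - 1.0

# Toy numbers, H in 1/Gyr and times in Gyr: light emitted at t = 0
# and received at t = 20, as in the 20-billion-year example above.
z = desitter_redshift(H=0.1, t_emit=0.0, t_obs=20.0)
print(round(z, 3))  # e^2 - 1 ~ 6.389
```

Here both the redshift and the ever-growing light travel time follow from the CC-driven expansion alone; pebble B never moves in its own frame.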