This clickbait title is inspired by the clickbait title of a recent story about high redshift galaxies observed by JWST. To speak in the same vernacular:
What they mean, as I’ve discussed many times here, is that it is difficult to explain these observations in LCDM. LCDM does not encompass all of science. Science* predicted exactly this.
This story is one variation on the work of Labbe et al. that has been making the rounds since it appeared in Nature in late February. The concern is that these high redshift galaxies are big and bright. They got too big too soon.
The work of Labbe et al. was one of the works informing the first concerns to emerge from JWST. Concerns were also raised about the credibility of those data. Are these galaxies really as massive as claimed, and at such high redshift? Let’s compare before and after publication:
The results here are mixed. On the one hand, we were right to be concerned about the initial analysis. This was based in part on a ground-based calibration of the telescope before it was launched. That’s not the same as performance on the sky, which is usually a bit worse than in the lab. JWST breaks that mold, as it is actually performing better than expected. That means the bright-looking galaxies aren’t quite as intrinsically bright as was initially thought.
The correct calibration reduces both the masses and the redshifts of these galaxies. The change isn’t subtle: galaxies are less massive (the mass scale is logarithmic!) and at lower redshift than initially thought. Amusingly, only one galaxy is above redshift 9 when the early talking point was big galaxies at z = 10. (There are other credible candidates for that.) Nevertheless, the objects are clearly there, and bright (i.e., massive). They are also early. We like to obsess about redshift, but there is an inverse relation between redshift and time, so there is not much difference in clock time between z = 7 and 10. Redshift 10 is just under 500 million years after the big bang; redshift 7 just under 750 million years. Those are both in the first billion years out of a current age of over thirteen billion years. The universe was still in its infancy for both.
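The redshift-to-time conversion above is easy to check. Here is a minimal sketch assuming a flat ΛCDM background with illustrative Planck-like parameters (H0 ≈ 67.7 km/s/Mpc, Ωm ≈ 0.31 — my choice for the example, not necessarily the exact cosmology used in the papers), using the closed-form age of a matter-plus-Λ universe:

```python
import math

# Age of a flat LCDM universe at redshift z, from the closed-form
# solution for matter + Lambda (radiation neglected, fine for z < ~100).
# Parameters are illustrative Planck-like values.
H0 = 67.7        # Hubble constant, km/s/Mpc
Om = 0.31        # matter density parameter
OL = 1.0 - Om    # dark energy density parameter (flatness)

def age_gyr(z):
    """Cosmic time in Gyr from the big bang to redshift z."""
    hubble_time = 977.8 / H0  # 1/H0 in Gyr (977.8 converts km/s/Mpc)
    x = math.sqrt(OL / Om) * (1.0 + z) ** -1.5
    return (2.0 / 3.0) * hubble_time / math.sqrt(OL) * math.asinh(x)

print(f"z = 10: {1000 * age_gyr(10):.0f} Myr after the big bang")
print(f"z =  7: {1000 * age_gyr(7):.0f} Myr")
print(f"z =  0: {age_gyr(0):.1f} Gyr (current age)")
```

With these parameters, z = 10 lands a bit under 500 Myr after the big bang and z = 7 around 750 Myr, against a current age near 13.8 Gyr — both squarely in the first billion years.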
Regardless of your perspective on cosmic time scales, the observed galaxies remain well into LCDM’s danger zone, even with the revised calibration. They are no longer fully in the no-go zone, so I’m sure we’ll see lots of papers explaining how the danger zone isn’t so dangerous after all, and that we should have expected it all along. That’s why it matters more what we predict before an observation than after the answer is known.
*I emphasize science here because one of the reactions I get when I point out that this was predicted is some variation on “That doesn’t count! [because I don’t understand the way it was done.]” And yet, the predictions made and published in advance of the observations keep coming true. It’s almost as if there might be something to this so-called scientific method.
On the one hand, I understand the visceral negative reaction. It is the same reaction I had when MOND first reared its ugly head in my own data for low surface brightness galaxies. This is apparently a psychological phase through which we must pass. On the other hand, the community seems stuck in this rut: it is high time to get past it. I’ve been trying to educate a reluctant audience for over a quarter century now. I know how it pains them because I shared that pain. I got over it. If you’re a scientist still struggling to do so, that’s on you.
There are some things we have to figure out for ourselves. If you don’t believe me, fine, but then get on with doing it yourself instead of burying your head in the sand. The first thing you have to do is give MOND a chance. When I allowed that possibility, I suddenly found myself working less hard than when I was desperately trying to save dark matter. If you come to the problem sure MOND is wrong+, you’ll always get the answer you want.
+I’ve been meaning to write a post (again) about the very real problems MOND suffers in clusters of galaxies. This is an important concern. It is also just one of hundreds of things to consider in the balance. We seem willing to give LCDM infinite mulligans while any problem MOND encounters is immediately seen as fatal. If we hold them to the same standard, both are falsified. If all we care about is explanatory power, LCDM always has that covered. If we care more about successful a priori predictions, MOND is less falsified than LCDM.
There is an important debate to be had on these issues, but we’re not having it. Instead, I frequently encounter people whose first response to any mention of MOND is to cite the bullet cluster in order to shut down discussion. They are unwilling to accept that there is a debate to be had, and are inevitably surprised to learn that LCDM has trouble explaining the bullet cluster too, let alone other clusters. It’s almost as if they are just looking for an excuse to not have to engage in serious thought that might challenge their belief system.
68 thoughts on “Can’t be explained by science!”
For those who see the prudence in the logic of Stacy's blog posts, but find it hard to take the step of letting CDM go:
Tully-Fisher. Velocity dispersions. The external field effect. No dark matter particles despite many experiments that should have found them if they existed. Departures from Newtonian dynamics in galaxies always below the acceleration a0. Highly evolved galaxies at too-high redshifts as seen by JWST. And many more important details. MOND told you all this before the data confirmed it, to prevent you from saying "of course we knew". Why put your hope in the superstition of dark matter? Did it predict any of these things? It can only go with the flow afterwards, thanks to the united efforts of so many smart scientists trying to make it sound better than it is. LCDM? The Big Bang works well in RelMOND or other UT proposals, if you're really sure that the CMB is the big bang afterglow. What does the idea of dark matter do? Where does it help? (1) To explain the bullet cluster? (2) To explain ultra-diffuse (UD) galaxies with only baryonic gravity? (3) To save your careers? (4) To glorify Einstein's GR? MOND can predict gravitational dynamics based only on baryonic mass; with dark matter you first need to guess where the dark matter is, and include the possibility of errors in that guess.
First off, (1) is a verification of dark matter theory, not a falsification of MOND. That there is unseen mass in the bullet cluster does not mean it is dark matter mass. And a verification of some theory does not in any way prove it is right. A flat earther can verify his flat earth theory again and again (the horizon looks flat) but that doesn’t prove him right.
The second (2) is not really true. In LCDM, there is no known phenomenon that can bring about a (small mass) galaxy without dark matter. In MOND, though, the external field effect can disturb a UD galaxy so that it appears to have just Newtonian gravity. It is also possible that the underlying theory (UT) for MOND has different effects in more spherically symmetric situations (e.g., Deur's self-interacting gravity, a UT candidate, which gives you "dark matter" in the form of gravitons).
Thirdly, (3) is shattered once the truth surfaces. And it will, with or without your consent.
And finally, (4) is religiously motivated. I'm a big fan of Einstein's thoughts and theories, especially his famous quote that God does not play dice. Still, shouldn't we keep up with new discoveries? Shouldn't science be open to justly and critically examining new ideas? Did Einstein throw away quantum mechanics because he could not reconcile it with his theory and beliefs? No, he accepted it, while giving cautionary warnings about its incompleteness. Then do like Einstein, as a true follower! Accept MOND as true while giving cautionary warnings about its implications for your favorite beliefs. I call acting otherwise religion, since it is pushing the belief in unmodified GR (which implies dark matter) without real experiments to back that belief. And that while it is clear that GR must be modified somehow in any case: it still needs an underlying quantum theory, even if you deny all the true predictions of MOND.
Accept and study MOND now, and your dignity and career will remain, even if things may not be easy. You’ve taken the easy path for too long already. It’s time to accept the facts. MOND has passed its 40-year tribulation period in the desert now, soon it will capture your “holy” territory. I’m not referring to this biblical story for religious ends, just wording what we feel and predicting what will follow. Prepare your weapons! They will not save you anyway.
Who am I to say all this? I'm a 31-year-old C# programmer with an MSc in Mathematics. I have an interest in mathematical physics, especially supersymmetric string theory. In my spare time I play around with order-3 variants of involutions and with hypermatrices, in hopes of finding a use for concepts of supersymmetry to explain the 3 generations of matter. I consider myself an impartial judge on MOND vs. dark matter, even if supersymmetry is somewhat on the dark matter side. I'm not opposed to the LCDM idea that the universe began from one point, but after having seen many of the arguments I fully reject DM, and I severely doubt what the Big Bang theory proposes before the era of recombination.
P.s. By the way, MOND is not just great at predicting reality, it also inspires more creative theoretical ideas: from entropic gravity, AeST and BIMOND to superfluid dark matter or self-interacting gravity. Take the step to enjoy this exploration adventure!
Having worked on MOND for a decade, I am not sure that I trust the prediction of galaxies forming at redshift ten naturally in MOND. The thing is that if you have no cold dark matter, then after recombination, there would be very little structure on small scales. The structure would only be in the baryons, where structure growth was severely limited before recombination by radiation pressure. After recombination, you would thus have very little power on small scales. It is true that MOND grows structure faster, but I am not sure this is anywhere near enough to form galaxies by z = 10 when there is almost no power on the relevant scale at z = 200 say. We have been exploring this issue in Bonn, Prague, and St Andrews for a while now in the context of a model with sterile neutrinos adding a hot dark matter component that addresses galaxy clusters and the CMB. It is very difficult to form these early galaxies that were predicted in MOND. The reason that this prediction does not really count is because it is based on semi-analytic calculations. In an actual hydrodynamical cosmological simulation of the MOND equations, it does not really work like that. We are still exploring this issue.
Regarding having a discussion on these issues, I am organising a conference on MOND where hopefully further results on the above will be presented:
And as for weighing different lines of evidence in an overall balance, I tried to do so in this long review:
You might like to read the sections of most interest to you. Theoretical predictions are clarified at an early stage to try and keep with the scientific method of fixing predictions prior to looking at the data. This is to avoid the obvious temptation later on to claim that your preferred model predicts the observations. I have been heavily influenced by Merritt’s Philosophy of MOND book in this regard. In the review, significant effort has been made to distinguish how flexible LCDM and MOND are when addressing observations of a certain type of astrophysical system, e.g., low surface brightness disc galaxies. In relation to the post above, I do agree that in general, LCDM is not very predictive on galaxy scales. But there are some larger scale observations it did predict, and this is mentioned in the review. The most obvious to me is the cosmic shear power spectrum, which is fairly model-independently measured. Still, not too many things were really predicted by LCDM.
A comment above has mentioned the AEST model, which is a renaming of what was previously called RelMOND. I just wanted to point out this has problems too:
And related to the Bullet Cluster, it is not that problematic for LCDM in our assessment:
This is all very exciting stuff though, as JWST is really peering into the formative era of galaxies. It has been argued that the observations are not that problematic for LCDM:
But I am not sure what the latest situation is, and whether the overall galaxy population at high redshift revealed by JWST cannot arise in LCDM.
Yes, that is what I am saying, yet again: there is very little power at z = 200, but then it grows very fast in MOND. After decoupling, the proverbial rug is pulled out from under the density perturbations, and they suddenly find themselves deep in the MOND regime. Structure formation proceeds rapidly – it has been repeatedly shown that this is fast enough to make L* mass galaxies by z=10. The problem, if anything, is not overshooting.
Bold move to say “The reason that this prediction does not really count is because it is based on semi-analytic calculations. In an actual hydrodynamical cosmological simulation of the MOND equations, it does not really work like that” in a post where I mock exactly this attitude.
First, it is not based just on semi-analytic calculations. Or at all, really – I don't recall any of these being what are usually called semi-analytic. There were purely analytic calculations, purely numerical calculations, and hydrodynamic calculations. Since you seem chiefly concerned with the latter, I refer you to Stachniewicz & Kutschera (https://arxiv.org/abs/astro-ph/0110484) who did exactly that. They found that the hydrodynamics matter a little bit on the scale of globular clusters, but larger things like galaxies basically did what the numerical solutions without hydro did. They found that globular cluster mass objects were already collapsing at z > 100.
The only interesting question here is why you have a different intuition about the results of hydro simulations. Is that developed on the basis of sterile neutrino models? Those will certainly be slower to develop structure than pure MOND because most of the mass is in that form of dark matter. In that case, you have, by construction, chosen to follow the conventional structure formation story line. That has no bearing on what happens in pure MOND.
If only there were some observational test that could be done to distinguish these possibilities.
I loved the punchline. Knocked me flat, in fits for minutes.
I am not familiar with how a purely baryonic cosmology would work in MOND, which is why I was talking about the hydrodynamic sterile neutrino simulations that I am currently working on, led from Bonn. The 2001 preprint you refer to seems to present analytic calculations as well, not an N-body or hydrodynamic simulation. It is also not easy to explain the CMB anisotropies in a purely baryonic MOND model, given the difficulties faced by the AEST model, which tried to do that. Also, it is ridiculous to suggest I have chosen to follow the conventional structure formation story line when the basis for that is CDM and the models I am advocating and working on lack CDM. They have HDM, but of course this does not cluster much on galaxy scales, leading to quite different behaviour. The gravity law is also different.
I am not sure entirely what is meant by pure MOND, except that if this means only baryons, this is quite wrong. Which is why I was trying to at least salvage some hope for MOND with the addition of sterile neutrinos, allowing a good fit to galaxy clusters like the Bullet and the CMB anisotropies.
Can you clarify what you mean by baryons here? Is it the particle physics sense of protons and neutrons, or does it include electrons? And if so, does it also include muons?
You can include electrons and muons if you like, but their mass is not that important compared to protons and neutrons. Astronomical observations are rarely accurate enough that it matters whether electrons are included, but I think most astronomers would consider the term baryons as typically used in astrophysics to mean that electrons and other conventional standard model particles were included. Neutrinos are often not included even though we know they exist and have mass, because their properties are rather different.
Yes, that much I understand. But when you say, in effect, that “it doesn’t matter”, I take issue with you. I agree with you that people cannot see how it matters. But that is a statement of ignorance, not a statement of fact. Ignorance is not a defence in law, and it is not a defence in physics. Muons cannot be explicitly detected outside the Earth’s atmosphere. That does not mean they are not there, and it does not mean they are not important. Cherchez les muons!
Are sterile neutrinos still a viable potential candidate given the null results from the STEREO experiment? https://www.nature.com/articles/s41586-022-05568-2
The sterile neutrino STEREO is investigating is one that would explain the reactor antineutrino anomaly (RAA). In the exclusion plot shown in the paper, they only even bother to look at values of Δm²_41 up to 10 eV², meaning at most a ~3 eV mass for the sterile neutrino. If I saw correctly, Indranil's papers propose an 11 eV sterile neutrino. That's pretty far outside STEREO's sensitivity.
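To make the mass comparison explicit, a quick sketch, assuming the lightest mass eigenstate is approximately massless so that the sterile mass is roughly the square root of the mass-squared splitting:

```python
import math

# If the lightest neutrino mass eigenstate is ~0, the sterile state's
# mass is roughly sqrt of the mass-squared splitting dm2_41.
dm2_stereo_max = 10.0                  # eV^2, upper edge STEREO probes
m4_stereo = math.sqrt(dm2_stereo_max)  # ~3.2 eV at most

m4_cosmo = 11.0                        # eV, sterile neutrino mass in the
dm2_cosmo = m4_cosmo ** 2              # MOND + HDM cosmology proposals

print(f"STEREO probes up to m4 ~ {m4_stereo:.1f} eV (dm2 <= {dm2_stereo_max} eV^2)")
print(f"An 11 eV sterile neutrino implies dm2 ~ {dm2_cosmo:.0f} eV^2")
```

An 11 eV state sits at a splitting more than an order of magnitude beyond the range STEREO scanned, so the null result does not bear on it.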
But as far as I remember, the landscape of sterile neutrinos is quite inconclusive. There are multiple anomalies which could be explained by sterile neutrinos, but many are at different mass ranges/mixing angles. Additionally, different experiments see, or saw, hints of sterile neutrinos at parameter values which are excluded by other experiments.
I heard a talk a few years back by C. Giunti (sterile neutrino expert). He seemed not convinced that there is much (or any) parameter space left for any simple sterile neutrino incarnation explaining many anomalies or even everything.
But in principle you can never rule out every sterile neutrino: there is always one with a small enough mass or mixing angle. It's a question of how much it then explains, but the "sterile neutrino" is like dark matter: not falsifiable, as it is a class of theories. Each incarnation can be tested, but the parameter space for the idea is infinite.
And yes, the high z observations are problematic for LCDM. As you emphasize, the thing to check is what they predicted beforehand. That’s what the plot headlining https://tritonstation.com/2022/07/21/jwst-twitter-bender/ shows. The *typical* galaxy should be much fainter than observed. This prediction had already failed before JWST; what JWST sees is just a continuation to higher redshift. Keller et al. sift through simulations to find a few extremes – their Fig. 1 highlights the most massive objects from each simulation at each redshift. Those are not typical; they are the most extreme unicorns. There are always unicorns if you search through a big enough simulation box. That they then claim that there is no problem because there are a few unicorns is exactly the kind of post-hoc pseudo-justification I was mocking in advance of it happening.
Looking at that paper, it is even worse than what I mock. They aren't even addressing the problematic cases. All the data in their plots have M* < 1E9 Msun. That's closer to the bottom edge of the plot above than to the top. What they show is that all these LCDM simulations make a few 1E9 galaxies at z ~ 10. That indeed would not be a problem. The problem is all the galaxies a full order of magnitude more massive than that.
I would classify this as an example of declaring victory after the battle is lost. Keller did the same thing with the RAR, saying it arose naturally in LCDM. This assertion has become commonplace despite being pure, unadulterated bullshit (https://tritonstation.com/2016/10/21/la-fin-de-quoi/ and https://arxiv.org/abs/1610.07538). This is my frustration with much of the LCDM simulation community: there is always someone willing (even eager) to claim that they explain every observation, including those that they manifestly do not explain. Especially those. That seems to be how you get ahead in that field – claim to solve some big problem that stumps more honest people. As long as your solution goes in the direction of the community's confirmation bias, they'll accept it uncritically.
The problem for LCDM with the bullet cluster is the collision velocity. In the link you provide, you quote that as being 3000 km/s. That, indeed, would not be problematic. The observed value that I remember but don't care enough to look up is 4700 km/s. THAT is problematic. The common story line is that difference is explained by hydrodynamics. Is that subsumed into the number you quote? The hydro effects are only a partial explanation: they only do about 500 km/s (depending on who you ask!) but not the full 1700 km/s. The rest is from non-adiabatic compression at the moment of impact. That works, but only for a dead head-on collision, so it seems they've replaced one highly improbable event with another. Problem solved! or at least concealed from view.
This is an example of how mendacity turns into received wisdom. In the early days of the bullet cluster, both the mass offset that is problematic for MOND but natural in LCDM and the collision velocity that is problematic for LCDM but natural in MOND were widely discussed. Once somebody came up with a convincing hydro simulation that seemed to show the problem was solved in LCDM, it was accepted uncritically and promptly forgotten because that's what people wanted to hear. The problem posed for MOND persists because it is real, but it is also blown out of proportion to its value because that is what people want to hear. That was long enough ago that the details have been forgotten and the rest has become received wisdom: the bullet cluster falsifies MOND STOP TALKING ABOUT IT!!! and the collision velocity isn't even remembered as having ever been a problem.
The Bullet Cluster is discussed further in section 3.4 of MNRAS, 500, 5249. The simulations of Lage and Farrar do a fairly detailed job. Just because it initially seemed problematic for LCDM does not mean that it remains so. There are plenty of galaxies which were once problematic for MOND but which, with revised data, it now fits well. Nobody suggests that we must remember that a galaxy once seemed problematic for MOND, or that this memory should make us like MOND less. If anything, it is the opposite – revised data agree better with MOND. One should think of simulations the same way – once people add in extra physics that was only approximately included earlier, use higher resolution, etc., situations which seemed problematic for LCDM sometimes become no longer all that problematic. I am just trying to be balanced here – there is little doubt in my mind that the Bullet Cluster is severely problematic for a purely baryonic MOND model, but it is not obviously a major issue for LCDM. Our own look at it implies 2.8 sigma tension. The way science works is to perhaps revisit the simulations of Lage and Farrar or the analysis of Kraljic and Sarkar to show that the Bullet Cluster is unlikely. Perhaps the impact parameter should be considered as an additional constraint, as we tried to do for El Gordo – but in that case, it is not a major issue, as you need a somewhat off-centre collision. I am not suggesting we necessarily stop talking about the Bullet Cluster, but there is an opposite problem here: it is also not good to say that one simulation showed the collision velocity is too high for LCDM, so we should stop talking about it and accept that LCDM fails. Regardless of how dodgy the tactics used by LCDM adherents might be, it is important for the MOND community not to behave the same way.
And obviously it is not helpful to dodge the serious issues faced by MOND in the Bullet Cluster by pointing out the somewhat high collision velocity in LCDM, which is no longer even considered problematic by people like myself, Elena, and Pavel who advocate MOND and have done so for many years. It is also not helpful to dodge the issue by pointing to the RAR, which seems to be the go-to observation for MOND advocates when faced with serious challenges to their worldview. Not only the Bullet Cluster but many other more virialised galaxy clusters are severely problematic for MOND without a dark matter component. The same can be said for the CMB once the problems for AEST are considered, which make it really difficult to explain the high third peak any other way.
You are putting words in my mouth that I did not say. Sometimes the opposite of what I said. Please don’t do that.
What I said about clusters was that the residual mass discrepancy they have in MOND is a serious problem. I’ve said that here and on many other occasions. You can see it in the 2002 review by Sanders and myself. It is, as you say, that “virialised galaxy clusters are severely problematic for MOND without a dark matter component.” That is your phrasing of exactly what I said. Bob & I said it before lensing data made the bullet cluster a cause celebre. From that perspective, it would have been weird if the bullet cluster didn’t show the same discrepancy as every other cluster in the sky. I consider the latter to be a much bigger problem for MOND than the bullet cluster by itself. The only new thing we learned from the bullet was that whatever the unseen component is, if it even exists, it has to be collisionless, because it did not shock like the gas.
I tire of people citing the bullet cluster at me as if I didn’t know about it, which happens a lot. When this happens, it is usually a sign that the speaker doesn’t know nearly enough, and is using it as a tool to shut down discussion. That’s a sociological criticism of others, not a scientific one of you.
You didn’t answer my question about the collision speed beyond citing the same paper. Chasing through the citations – thanks for that exercise on a Saturday – I find exactly what I already said. Quoting from the abstract of Springel & Farrar (2007), the shock speed is 4500 km/s, but the subcluster centroid relative speed is only 2700 km/s, which I agree is not problematic – as I said before. The difference is 1800 km/s – that’s big, even by astronomical standards. Now, of the many hydro simulations that were done of the system, it seemed widely agreed that you’d get a ~500 km/s enhancement in the shock velocity relative to the smaller subcluster. That’s not enough. S&F find an additional 1100 km/s from the “gravitationally induced inflow velocity of the gas ahead of the shock towards the bullet.” (A phrase I’ve heard repeated by scientists so often that they sound like trained parrots. Not many are able to explain what it means.) It is this extra 1100 km/s that concerns me. Not that it can’t happen – it certainly can – but we need a very specific geometry for the effect to be that big. The collision has to be dead head-on: an impact parameter as large as 12.5 kpc is already problematic (see their section 5.1). This is a small number compared to the ~Mpc sizes of the subclusters, so my concern is that we need to throw a bullseye from across the cosmos. I have never seen an assessment of the probability of this, so I worry that we’re replacing one unlikely event (a ridiculously high collision speed) with another (a dead head-on collision). That’s different from the assessment of K&S that you cite, which appears to accept S&F’s correction to the collision speed. I haven’t read that thoroughly, but I see plots of the probability that the speed exceeds 3000 km/s, but none that approach 4500 km/s (that’s more like a 1 in 100,000 event than 1 in 10). So I don’t think we’re even talking about the same thing.
This was going to be part of the blog post I haven’t had a chance to write because I keep being asked to re-justify the obvious, or, as in this case, point out yet again that what is reported as inexplicable was exactly what was predicted – whether you like the methodology or not.
I find it weird that you keep citing the lensing paper that is problematic for AeST like it should be news to me. I am a coauthor of that paper: I am well aware of the results.
I am also aware that it is early days; if I were so quick to discount possibilities we wouldn’t have any options left – not just flavors of MOND, but dark matter of any sort.
I cited the RAR as an example of how some (many even, but not all) LCDM advocates claim to explain things they do not. I am saying the opposite: I don’t understand clusters in MOND. I am not dodging any of these issues, and I’m profoundly offended by your implication that I am.
I agree you have mentioned the problems for purely baryonic MOND in clusters before, so I apologise for making it sound like you were dodging the issue. Regarding the Bullet Cluster, table 1 of Lage & Farrar (ApJ, 787, 144) shows an impact parameter of 256 kpc. Kraljic and Sarkar cite them and seem to be using their values, but seem not to have concerned themselves much with the impact parameter. This might be because 256 kpc would not be all that small. As a result of this, I am not that concerned with the Bullet Cluster in LCDM. If you really think that the latest hydrodynamical simulations of this collision require parameters that are unlikely in a cosmological simulation, then you have to show that in a peer-reviewed publication. This is just how the scientific method works. There is no point just saying you think it is all a bit unlikely in LCDM or a little bit suspicious that their simulations work – maybe, but what is wrong with them? There have been many dodgy claims to explain the satellite planes in LCDM, but Marcel has rebutted these. If you think there is a problem with the simulations or the statistical analysis, you should show that.
Writing endless rebuttals is not how the scientific method works. Throwing endless mud at the wall for honest people to waste their time fact-checking is how disinformation campaigns work.
I have fact-checked literally hundreds of claims to falsify MOND that turn out not to hold water. Clusters are not one of them, but there have been lots. After Benoit, Moti, and I wrote the rebuttal to NGC 1052-DF2, I decided I was done spending my time fixing other people’s mistakes. It is necessary work, so I’m glad to see you took it up in your review with Hongsheng. That’s great, it really is. But you display a remarkably myopic perspective to tell me about the need to write refereed papers rebutting every conceivable claim when I’ve spent the past quarter century doing exactly that.
One problem with writing rebuttals is that people pay no attention. You mention Marcel’s efforts with the satellite planes. He has indeed been extremely diligent in pointing out flaws in counter-claims. Despite this, the same people keep writing new papers making the same sorts of claims, ignoring his rebuttals. I hope he will write a rebuttal to the latest claims out of the Durham group (it is always cold and dark in Durham), but really, he already has. They simply repeat many of the same mistakes that he has previously rebutted. There is also a tendency to come to the same conclusion, but put a different spin on it: Marcel’s analysis of Gaia DR2 gave only a 3% chance of the plane of satellites in LCDM, and he says that seems unlikely. Sawala’s equivalent analysis of DR3 gives only a 2% chance, and they say everything is fine.
It is not my job to chase down every single easily debunked claim and publish a rebuttal. Even when we do that, other scientists at best see two papers making opposite claims, often based on the same data, and all too often they simply pick the one that tells the story they want to hear. The issues we face are more interpretive than factual.
An example of an easily-debunked claim that persists is Manoj Kaplinghat’s assertion that CDM “naturally explains” the acceleration scale a0. This is, on its face, a comically absurd claim, as I had already shown in papers he did not cite and apparently remains unaware of. Moti made an immediate and thorough rebuttal (https://arxiv.org/abs/astro-ph/0110362). And yet, many people still seem to take this seriously twenty years on. It is the problem Feynman warned against: the easiest person to fool is yourself.
For the particular case of the bullet cluster under discussion here, I am pointing out that Springel & Farrar should have made an assessment of the probability of getting a collision of the bullet cluster that gave the right morphology. They came close to doing it – they said what I quoted, that the impact parameter has to be tiny to get the morphology right. OK, so what are the odds? That is something they did not assess, and it isn’t my job to do it for them. It is absolutely part of the normal course of scientific discourse for other scientists to do what I’m doing and say, “hey, what about this?”
All I’m noting is that this looks to me like yet another case of what I call squeezing the toothpaste tube (https://tritonstation.com/2022/04/18/cosmic-whack-a-mole/): they fix one problem but create another that they don’t bother to address because they’ve solved the problem they set out to solve.
What you quote for an impact parameter from Lage & Farrar is indeed more generous, and does not sound problematic. So the question then is why did L&F get a different answer from S&F? Do they address this? Or do they simply weaken the conditions for what they consider to be an acceptable match, as people have done with the plane of satellites? Either way, it isn’t my job, and they do the reader a disservice if they didn’t address it themselves. That might give an innocent reader like yourself the impression that everything is fine when perhaps some ugly details are getting swept under the rug.
You should realize that it is offensive to tell somebody that they should be doing something that they have, in fact, been doing. I respect you enough to provide this explanation. I don’t owe it to you.
I’m an outsider who’s fascinated by MOND research. I’m also a blogger. And as a blogger, I can say that it is really annoying having to debate complex arguments in the comments section of my blog.
Indranil, I’ve read many of your papers with fascination. Might I suggest that if you have technical comments on Stacy’s posts, you write them up in your own post? I find that the blogosphere (rather than blog comments) is a much better way to have a detailed debate.
“If all we care about is explanatory power, LCDM always has that covered.”
If FLRW is falsified wouldn’t LCDM lose explanatory power?
Yes. But how would we know to admit it was falsified?
That’s the trick. LCDM has been falsified many times. We just keep resuscitating it.
“MOND suffers in clusters of galaxies” because that is, obviously, a complexity level higher than the galaxy complexity level where MOND has real predictive power.
Once again we find, even at the cosmic level, that complexity is a boundary for the predictive/explanatory power of any theory. Theories that work at one complexity level usually fail at higher complexity levels.
Reality’s hierarchical structure can’t be ignored.
Yes, clusters are more complex, and I haven’t worked much on them myself because the available data leave much to be desired relative to that for individual galaxies. Still, I don’t think that suffices to excuse the problem. The amplitude of the discrepancy is larger than it should be for the normal mass that we can see. There may be unseen normal mass, but that isn’t really satisfactory.
There’s hope, Stacy: LCDM enthusiasts may not like MOND, but in the last few years specifically, they sure have been spending every waking hour talking about how it’s apparently not worth talking about. For people so dismissive of the idea, they sure do spend a suspicious fraction of their days talking about it nonstop! They used to just ignore it without a word. A very common psychological trait: they’re trying to convince themselves. Just as moral crusaders always seem to be the ones caught being unfaithful to their spouses, and anti-LGBT+ advocates always seem to be caught in secret gay love affairs. Protest too much, and it reveals one’s own insecurities.
As the anti-MOND hate increases, take it as a sign that a change is getting closer. They’re trying to ignore it harder and harder, and the volume of their complaints indicates that it’s rapidly becoming too difficult to do. You will be vindicated in your lifetime, rest easy and enjoy the ride.
Yes, I think you are correct about the psychology. Has taken a long time to get this far, and there is still a long way to go.
Regardless of your perspective on cosmic time scales, the observed galaxies remain well into LCDM’s danger zone, even with the revised calibration. They are no longer fully in the no-go zone.
They are also early. We like to obsess about redshift, but there is an inverse relation between redshift and time, so there is not much difference in clock time between z = 7 and 10. Redshift 10 is just under 500 million years after the big bang; redshift 7 just under 750 million years.
What z values and clock times would completely falsify LCDM, and what z values and clock times would falsify MOND? Could JWST observe that?
The redshift-time relation is well-known in LCDM, so any obvious deviation from that would be a problem. I’m not aware of any. The high-redshift universe is the place it hasn’t been tested yet, and you can see how that’s working out. However, the problem isn’t so much with the age of the universe at a given redshift so much as all the structure formation that has already gone on.
The whole paradigm has been motivated by data like that, so I’m also not aware of any truly independent constraints. There are fairly impressive consistency checks, in that one can see stellar populations evolving with redshift as expected. I call that a consistency check because the uncertainty on stellar population ages is large enough to be consistent with a range of t-z relations.
MOND doesn’t even make a prediction for t(z). I think that’s at the root of many objections. One person thundered at me “You have no cosmology!” as if that were a bad thing: I consider this to be something that needs to be worked out empirically in order to inform theory development.
It feels like we’re only just now reaching the point that cosmological data are both precise and accurate enough to perceive the need for something beyond FLRW. The conventionalist approach to solve this is to add yet another free parameter (e.g., “early” dark energy, on top of regular dark energy, on top of invisible mass). Once you add enough free parameters like that, you can explain any monotonic set of t-z data.
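The clock times quoted in the post (z = 10 just under 500 Myr after the big bang, z = 7 just under 750 Myr) follow from the t(z) relation in flat ΛCDM, which has a closed form when radiation is neglected. A minimal numerical check; the specific H0 and Ωm values below are my own Planck-like assumptions, not numbers from the post:

```python
import math

def age_at_z(z, H0=67.4, Om=0.315):
    """Age of a flat matter + Lambda universe at redshift z, in Gyr.
    Closed-form solution of the Friedmann equation, radiation neglected.
    H0 in km/s/Mpc; Om is the matter density parameter (assumed values)."""
    OL = 1.0 - Om                      # flat: Omega_Lambda = 1 - Omega_m
    H0_inv_gyr = 977.8 / H0            # 1/H0 in Gyr for H0 in km/s/Mpc
    x = math.sqrt(OL / Om) * (1.0 + z) ** -1.5
    return (2.0 / (3.0 * math.sqrt(OL))) * H0_inv_gyr * math.asinh(x)

print(f"t(z=10) = {age_at_z(10):.2f} Gyr")  # ~0.47 Gyr
print(f"t(z=7)  = {age_at_z(7):.2f} Gyr")   # ~0.76 Gyr
print(f"t(z=0)  = {age_at_z(0):.2f} Gyr")   # ~13.8 Gyr
```

With these parameters the gap between z = 10 and z = 7 is only about 290 Myr, which is the point being made above: not much clock time separates the two redshifts.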
“The conventionalist approach to solve this is to add yet another free parameter (e.g., “early” dark energy, on top of regular dark energy, on top of invisible mass).”
Early dark energy makes the S8 tension even worse, so it is not an actual solution to the current crisis in cosmology. From section VII.D.2 of the Snowmass 2021 paper:
“Nevertheless, it is clear that LSS data have a potentially strong constraining power on the EDE cosmology (including future measurements of the halo mass function at high-redshift by the James Webb Space Telescope (JWST) ), and that the EDE cosmology cannot resolve the S8− and H0−tensions simultaneously.”
Most other modifications to Lambda CDM being considered, if still assuming FLRW, suffer from a similar problem: they often either worsen the Hubble tension or the S8 tension.
Yes, indeed. I am not advocating early dark energy, I’m merely pointing out that it is the kind of extra free parameter people are inclined to invoke. They also seem disinclined to be concerned when their proffered solution breaks something else. Hopefully the data are getting good enough that we’ll break that habit, but it is a long-established habit from having to pick and choose what to trust out of a wide range of underwhelming cosmological data, so it is hard to break.
I have always considered Stacy’s blog a necessary measure of my sanity. I started following him about the time of the first COBE release. As he stated, his MOND prediction nailed the CMB power spectrum second peak; but just as importantly, the CDM cosmology prediction of a much stronger second peak crashed and burned. Stacy noted in his blog that LCDM got the third peak correct by ‘winning ugly’ (adding a new variable), which is exactly what Ned Wright added to his cosmology primer. In fairness, Ned Wright added these questions to his presentation: ‘Should we be nervous?’ and ‘What might we be missing?’
The CMB is not a simple or straightforward derivation: foreground contamination (and its removal) represented a large part of the reconstruction process. Foregrounds are clouded with microwaves – there are solar, galactic and intergalactic interferences. Unexpected large galaxies and clusters in deeply redshifted space are highly problematic for all current CMB reconstructions because they are not accounted for in foreground corrections.
By the way, it is easy to build a microwave background: run any spectral distribution through either a filter or a multiplier, then age the result (redshift) via a radiation transfer equation. I used the meta-stable helium state and Chandrasekhar’s radiation transfer equation. After a large number of iterations, a microwave peak emerges at near 2.7 K.
“One person thundered at me, “You have no cosmology!””
Is it not the other way around?
Does cosmology not have to provide a derivation for MOND?
I see it like this.
Yes, I think cosmology is less important than dynamics in detail. Before we tackle the entire universe, let’s start with smaller scales and more nearby stuff – because there is firstly more, and more reliable, data, and secondly the small scales can have an effect on larger scales while the history on larger scales hardly affects the analysis made on small scales.
I myself see it like this: cosmology is often about history, like geology and archeology. We should split it into cosmological mechanics, an empirical scientific subject, and cosmological history. As for the history subject, its objects of study are unique, nonrepeatable occurrences and thus not intersubjectively verifiable (https://en.m.wikipedia.org/wiki/Intersubjective_verifiability). So it is not entirely empirical, although often tests and data can give a general direction. Therefore cosmological history does not have the same degree of authority as cosmological mechanics. As it is said, history is (re)written by the survivors, in this case the humans alive today. That reduces its importance and credibility.
I would love it if the same split were applied to geology, paleontology, evolutionary theory and so on. All the historic parts of these subjects are also very related to soft sciences: teleology, psychology (what is our identity, where do we come from, are there aliens or are we unique, etc.). IMO the evidence for these historic sciences sometimes needs to be reconciled and united with knowledge of these soft sciences. We would have fewer people with identity crises if they at least adapted the presentation of their results to respect the other sciences.
Yes, it should. It doesn’t, and as near as I can tell, it cannot. For a long time, this issue was simply ignored. More recently, it is asserted that it does – there are lots of papers claiming the RAR occurs “naturally”. These span the gamut from deeply unsatisfactory to obviously wrong. People write them anyway because they’re more comfortable pounding the round peg of MOND into the square hole of cosmology than the square peg of cosmology into the round hole of MOND.
“You have no cosmology!”
This sounds awfully similar to:
“You have no Religion!”
A prejudice that I see here, that isn’t being made very explicit, is that LCDM is a physical model (dark matter), while MOND is a mathematical model (change the equations). Physicists will always prefer a (new) physical model to a (new) mathematical model, if they have the choice. They are happy to change the physics, which they think they understand, but they are not happy to change the mathematics, which they know they don’t understand.
LCDM therefore has the huge advantage that it is based on physical principles. It will take an enormous amount of evidence to convince the mainstream that it also has the slightly inconvenient disadvantage that those physical principles are wrong.
No, sorry, I got that backwards: they are happy to change the physics, which they know they don’t understand, but they are not happy to change the mathematics, that they think they do understand.
Then again, maybe I was right the first time…
Then again, maybe it is the urge to prove that we were right all along that is the problem here. I don’t care whether I was right or not. I never do: if I am right then I have taught something, if I am wrong then I have learnt something. Both are signs of progress.
The flavor of the change is very different, as you observe, and people clearly struggle with that.
There is also a lot of reluctance to admit the possibility of having been wrong.
At what point does the shoe no longer fit any of the current versions of Big Bang theory?
It seems like it’s getting to be a pretty tight fit. The possibility of finding further galaxies down in the red zone, or of showing that those on the edge could not have developed in the current time frame, does seem right around the corner, so it would seem logical to begin considering alternative possibilities.
That is, if this is science.
You’re asking for a much bigger break than has so far happened. It is conceivable, of course – for example, if there were clearly stars much older than the age of the universe itself. The problem we’re seeing now is that more objects were present earlier than expected, but they are (so far) within the bounds of the total age.
My cut off point is the fact the theory still uses the speed of light as the metric against which this expansion is presumed to be occurring.
Remember, the conceptual basis for General Relativity is the speed of light is always a constant, so if space is, “expanding,” wouldn’t the speed of light have to increase, in order to remain, “constant?”
The theory still uses basic doppler shift to explain redshift. The DISTANCE is increasing, relative to the speed of light. As Einstein said, space is what you measure with a ruler and the ruler in this theory is still the speed of light.
All the years I’ve been making this point, no one has shown what I’m missing, but no one can accept the basic logic. It’s like one of the most basic concepts in math is chucked out the window, to make the theory work and no one has a problem with it.
So, for me, it’s not a question of whether they keep finding older and more developed galaxies, but simply when.
There are two yardsticks in Hubble’s V-D diagram: velocity and distance. The distances are measured in painstaking ways, many of which are independent of the speed of light. What one then notices is that the redshift velocity correlates with distance. These are often called Doppler-shift velocities, but strictly speaking, this is not the correct cosmological interpretation, as galaxies are not moving away from each other in the sense of an explosion. Rather, the space between them is stretching, and the photon wavelength with it. Note that the latter has to happen to conserve energy in the radiation field. A universe expanding in this fashion predicts a linear V-D relation, as Hubble (& Slipher & Lemaitre) discovered, and it has been corroborated a zillion times since. One can imagine other interpretations, but the traditional Big Bang description is a good one.
So you are saying the expanding space stretches the wave length, but it still doesn’t increase the speed?
So what is the basis for light speed, if it is not actually the intergalactic ruler of space?
It still seems there are two metrics being derived from the same light. One based on the spectrum and one based on the speed.
If I was to take a rubber band and mark inches, as wave lengths, on it, then stretched it, it would basically be what you are describing. So if there was only just this one metric, wouldn’t the speed of light have to increase proportionally, in order to remain constant?
Yet no one talks about light taking much longer to cross the early universe. It seems the speed is just taken for granted.
“So you are saying the expanding space stretches the wave length, but it still doesn’t increase the speed? So what is the basis for light speed, if it is not actually the intergalactic ruler of space?”
c = 1/sqrt(e0*u0) = 2.998 × 10^8 m/s
where e0 = permittivity of free space, u0 = permeability of free space
This is derived from Maxwell’s equations.
Additionally, λν = c
where λ = wavelength, ν = frequency
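As a sanity check on the two relations above, one can plug in the SI values of ε0 and μ0; the 21 cm hydrogen line used to illustrate λν = c is my own example:

```python
import math

# Vacuum permittivity and permeability (SI values)
eps0 = 8.8541878128e-12   # F/m
mu0 = 1.25663706212e-6    # H/m

# c = 1/sqrt(eps0 * mu0), from Maxwell's equations
c = 1.0 / math.sqrt(eps0 * mu0)
print(f"c = {c:.4e} m/s")  # ~2.9979e8 m/s

# lambda * nu = c: the 21 cm hydrogen hyperfine line as an example
wavelength = 0.21106  # m
print(f"nu = {c / wavelength / 1e9:.3f} GHz")  # ~1.420 GHz
```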
Yes. Speed is not spectrum. It moves through space. They can argue spectrum is stretched by the space expanding, but that implies an effect on the light as it is crossing it, not the rate it crosses it.
Consider the inchworm on the expanding balloon analogy. If the rate the inchworm moves is also affected, wouldn’t the inchworm have to move faster? Yet that would require a proportional increase in the energy.
These early galaxy masses are being calculated assuming we know their IMF, but the latter is not well-constrained at all at high redshifts. If a higher portion of the mass is going into massive stars than the models are assuming, their mass could be much lower. I remind you that a 30 Msun star radiates about 1 million Lsun. In the end what will probably happen is that the IMF will be found to be highly redshift-dependent. Of course this is not a sexy result like claiming that the theory is broken. I think most people already know this, but are going with the flow to get further JWST observations, further Nature papers, further grants and further taxpayer-funded conferences in tropical islands.
Yes – did I not mention that? Perhaps it was in a paragraph I deleted for brevity: there are too many details that can weigh down the discussion.
So, sure, one can always change the IMF. Just don’t form any low-mass stars, only those that make the light. Problem solved.
There used to be a common saying that seems to have gone out of use: altering the IMF is the last recourse of scoundrels. Because it is the ultimate free parameter. If we go there, we might as well not bother making observations. Don’t like the answer? Change the IMF.
We’ve been here before. Repeatedly. The first WMAP measurement of the optical depth was tau = 0.17. This was too high for LCDM. The fix was to change the IMF – basically invoke Pop III stars that were exclusively massive, were super-efficient at emitting UV photons, reionized the universe, and conveniently went away without a trace. I reviewed this in my 2004 paper on the CMB. It was a bad idea then, it is a bad idea now – though I agree that it is inevitable that some will invoke it.
There is an old saw that the IMF *should* be metallicity dependent, so it *should* be biased towards high masses at low metallicity as in the early universe. This is based on a simple, compelling Jeans-mass argument. Problem is, it has exactly nothing to do with reality.
If the IMF were Z-dependent, then we’d see a different IMF in ancient, low-Z globular clusters. We don’t. I’m not saying the IMF is the same everywhere all the time, but it does look like something very close to that on average. So I take it as extremely unlikely that stars that were forming at z > 10 are biased towards high masses when we see all around us globular clusters made of stars that formed at z > 10 with a normal IMF. Pick a lane, universe! Invoking a variable IMF wouldn’t just throw out everything we know empirically about the IMF, it would also require us to have a mode of normal IMF star formation at z > 10 alongside the hypothetical mode with a special IMF that is tailor made to explain this one problem at z > 10.
That is a violation of Occam’s razor. Cosmology has been piling on free parameters in defiance of Occam for decades now, so sure, why not pile on some more.
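To make concrete why the IMF is "the ultimate free parameter," here is a toy power-law calculation. Everything in it is an illustrative assumption of mine, not any published model: pure power-law IMF slopes, mass limits of 0.1–100 Msun, and a single L ∝ m^3.5 luminosity law applied to all masses.

```python
def mass_to_light(alpha, m_lo=0.1, m_hi=100.0, beta=3.5):
    """Toy M/L ratio for a power-law IMF xi(m) ~ m^-alpha with l(m) = m^beta.
    Pure power laws and one L(m) slope are simplifying assumptions.
    Valid as long as no integrand exponent equals -1."""
    def powint(p):
        # Closed-form integral of m^p dm from m_lo to m_hi (p != -1)
        return (m_hi ** (p + 1) - m_lo ** (p + 1)) / (p + 1)
    mass = powint(1 - alpha)        # integral of m * xi(m) dm
    light = powint(beta - alpha)    # integral of l(m) * xi(m) dm
    return mass / light

salpeter = mass_to_light(2.35)   # classic Salpeter slope
top_heavy = mass_to_light(1.5)   # hypothetical top-heavy IMF
print(f"inferred mass drops by ~{salpeter / top_heavy:.0f}x")  # ~11x
```

Flattening the slope from Salpeter to a top-heavy value reduces the mass required to produce the same light by an order of magnitude in this toy model, which is exactly why letting the IMF float can absorb almost any mass discrepancy.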
Of course they are still using the LCDM paradigm, but who knows what these data will reveal.
Those are a lot of observations to keep an eye on – along with the interpretive mantle of LCDM. It can be hard to keep the two separate. The one mentioned at the end would violate isotropy, which shouldn’t happen in LCDM but would certainly be interesting if observed.
Thank you to the professor and Indranil for a fascinating discussion, and here’s hoping they remain on speaking terms!
But about MOND’s problems with clusters, could it be that we got the sums wrong and that there is no missing mass after all? Lopez-Corredoira seems to think so:
Yeah, I worry about the systematic effects that they discuss. In my analysis, they haven’t been big enough to change the basic answer. Maybe I’m wrong about that, but I would certainly say so if I thought this were a viable path forward.
Another person is trying to work on cosmology without assuming an underlying cosmological model such as Lambda-CDM. Here is Luca Amendola’s talk about his work:
Luca Amendola, however, states that they are still working in FLRW, which means they aren’t fully model independent. But I believe it is a key first step in peeling away all the assumptions that have built up in cosmology over the years, and hopefully professor Amendola’s work eventually becomes comprehensive enough that they are able to drop the FLRW assumption completely.
Has anyone tried to break down the FLRW equations and find out how they came up with an expanding space model, where the speed of light is not constant to it, from a theory where the primary axiom is the speed of light as the constant?
My own efforts don’t get much further than Wikipedia, but given the apparent size of that hole, someone could make a name for themselves.
At the end of the talk Luca Amendola provides a reference to their 2019 paper titled “Measuring gravity at cosmological scales” about trying to constrain gravitational theories using cosmological observations. The paper however primarily talks about Horndeski gravity.
It would be interesting to see what Amendola makes of the spate of relativistic MOND theories that have been popping up in the literature after his paper was published.
Hey professor, I was reading this article https://arxiv.org/abs/2303.04637 – they make a “big” claim, saying they can solve the dark energy and dark matter problems with zero-point energy. The article sounds a little “different” from what I’m used to reading, and I believe as a grad student I might be unqualified to judge, so I’d like to know what you think.
Another blow for dark matter: superfluid dark matter is in tension with weak gravitational lensing data.
One of the authors suggests this is a problem with all hybrid dark matter models:
Hi Prof Stacy,
thanks for all these very illuminating posts.
What I would like to ask is in which paper you predicted the peaks of the CMB.
What gets me is the physicists who say it “can’t be explained by mathematics”, when what they mean is it can’t be explained by the little bit of mathematics that they know and are willing to use. It’s a pity they are not willing to listen to mathematicians who know which bit of mathematics they actually need.
I guess it’s the physicists playing at math who try to say two different metrics of space can be derived from the speed and the spectrum of the same light, because as physics, it isn’t even a good parody of theory, and as math, it doesn’t add up.
Which is the denominator and which is the numerator, should be the logical question to ask.
Looking at clouds(1), seeing shapes and faces(2) and arguing about them(3).
One could wonder how you expect to understand the universe with, clearly, not even understanding yourself.
When it comes to fundamentals, there’s only one question you should be asking; can a fish ever figure out the ocean?
To preempt most obvious complaint, no, it’s not about intellectual capacity of said fish. It’s not even about fish. A hint maybe would be to expand the question with ” … without leaving the ocean, looking at it from the outside and stop being a fish anymore”
As much as it can be defined by just looking at it from the inside, we’ve already done it. Not explicitly, or maybe even so. In any case, it couldn’t have stuck.
It’s a chaotic system, which can’t be deterministically described as a whole. Arbitrary small portions can be mathematically defined to various levels of approximation. Of course that leaves very little room for grandiose dreams about figuring ‘the clock’ out, does it? Gee, thank you Albert. We can barely, and often wrongly, predict a simple weather system on a simple rock orbiting a simple star, but somehow we’d like to have the formula of the universe?
(1) Granted, curiosity is second level foundation of life.
(2) ‘Rustling bush’. Is there a lion, a pig or is it just the wind?
(3) Unfortunately, civilisation’s soiling of evolutionary principles. Those that stayed and argued whether it was a lion, a pig or just the wind got eaten sooner or later.
The growth of our knowledge does go through cycles, between our curiosity and desire pushing out, as the previous structures, strictures, traditions and beliefs coalesce in. Much like galaxies are the energy radiating out, as the structure coalesces in.
Religion has a bad reputation around the scientistic, but the old Greek Pantheon arose from older fertility rites, where the young god was born in the spring, from the old sky god and the earth mother, but by the age of classical Greece, it had succumbed to tradition, where Zeus hadn’t given way to Dionysus. Which was why the story of Jesus, as royal blood, crucified and risen, in the spring, was such a powerful influence around the Greek world.
To the Ancients, monotheism equated with monoculture. One people, one rule, one god. Remember the formative experience for Judaism was the forty years isolated in the desert, giving us the Ten Commandments.
While democracy and republicanism originated in pantheistic cultures, the family as godhead, it was as the Roman Empire was rising from the ashes of the Republic, that Christianity was adopted as state religion. Basically to enforce the premise of royal authority. The Big Guy rules. The original pantheistic basis for the Trinity was thus obscured, as three faces, or attributes of the one god.
Logically though, a spiritual absolute would be the essence of sentience, from which we rise, not an ideal of wisdom and judgement, from which we fell. Necessarily an entire culture founded on the principle of confusing absolutes with ideals is inherently conflicted, as everyone sees their own set of beliefs and ideals as universal and unquestioned, even the scientistic.
“When it comes to fundamentals, there’s only one question you should be asking; can a fish ever figure out the ocean?”
Actually the simple answer is yes, but you need to understand some basic physics. A fish looking upwards sees the entire hemisphere outside the ocean, but compressed into what is called the Snell Window, a circle just over 97 degrees in diameter. Beyond this and to its horizon it sees a reflection of the ocean below the surface and so will see two images of other fish, one directly and the other reflected off the underside of the water surface, what we call total internal reflection. The latter part of this article for anglers does explain the Snell Window: https://www.offthescaleangling.ie/the-science-bit/fish-vision/
For example, you can think of Bradley’s discovery of stellar aberration in 1728 as the final proof of heliocentricity because the different angular positions of a star at different times of year showed that the Earth was moving (and this observation would have been true even if it had not been preceded by Kepler and Newton). Successful stellar parallax measurements came over a century later.
Don’t underestimate what can be achieved with precision measurements when combined with intelligent analysis.
“an entire culture founded on the principle of confusing absolutes with ideals is inherently conflicted,”
For one thing, it vastly speeds up the process by which traditions and legalistic strictures impose themselves on curiosity and exploration.
The Overton window is made of steel bars.
A talk by Leandros Perivolaropoulos about tensions in Lambda CDM:
I see that Mike McCulloch’s theory of Quantised Inertia is now going to be tested in space for the first time. The launch is set for June 10th.
This is good because, even if it rules out QI, it is the sort of experiment that needs to be made.
Yes, there are not enough experiments that test non-mainstream ideas. If the mainstream theorists were really confident about their theories, they would encourage more such tests in the hope of demolishing competing theories. But what seems to be happening is the opposite – perhaps they are terrified that some competing theory might *not* be demolished.
What I can’t get over is the image of me getting smaller and the universe getting bigger. Is it all me? Is my love for it sufficient? Can I draw it all in?
First, I apologize to Dr. McGaugh for a recent post. I have been trying to avoid being here to avoid the temptation to comment.
Second, with regard to my last comments, I have no need to “be expert” on certain mathematics which is simply sitting atop finitary structures.
Looking up “quantum quasigroups” will lead to the Yang-Baxter equation. From there one will get to vertex models.
What I stated before is that there is a reduction (in some treatments of foundational physics) to Kummer surfaces and their 16 exceptional points (Kummer configuration). Among vertex models is a 16-vertex model with configurations shown in the link,
In personal work, I can map these configurations into the group,
a^2 = b^2 = c^2 = d^2 = e
coordinatized with the 4-vectors over GF(2). And, it is from this personal work that I know that the Kummer configuration maps into the Rook’s graph of order 4,
This extends the 6-sets of the Kummer configuration to 7-sets.
Now, Baxter introduced a solution for “hard hexagons,”
That is, hard hexagon models are “integrable.”
16-vertex models are not so nice.
Assis has written about them. In the paper at the link,
he and his colleagues compare the two vertex models with respect to integrability.
In the first two paragraphs, they write,
“The ability to do exact computations relies on the existence of sufficient symmetries which allow the system to be solved by algebraic methods. Generic systems do not possess such an algebra and the distinction between integrable and nonintegrable may be thought of as the distinction of algebra versus analysis.”
It seems to me, if I recall correctly, that I made a claim that the “problem” everyone on this site whines about lies with catering to calculation with algebra to the point of “defining mathematics” in such a way as to make science “true.”
That theory-ladenness in physics makes physicists prone to being lost in algebra does not mean that mathematicians are lost at all.
The introduction of the paper from Assis and his colleagues goes on to identify the problem as arising from a dense set of singularities lying on the unit circle. Of course, I had also pointed out that work on the real numbers in the first-order paradigm had identified how unrestricted extension of the theory of closed real fields with trigonometric functions introduces undecidability. I also pointed out that this is studied under the auspices of “sets of uniqueness.”
Now, I specifically mentioned the Rook’s graph earlier because of another graph with the same parameters as the Rook’s graph called a Shrikhande graph. I know of this graph because I had tried to map the Kummer configuration onto it before I found the Rook’s graph.
A Shrikhande graph,
can be mapped onto the lattice used for the hard hexagon models.
It would seem that the problem of integrability may be related to these graphs.
Again, Dr. McGaugh. I am sorry. I become angry because no matter what I do, the only thing with which the intelligentsia seem to have ability lies in regurgitating 300 year old arguments.
And, while I have no particular issue with the science community as users of mathematics, I take offense at mathematicians who work at “defining mathematics” to make science “true.”
Just stop blaming mathematics for a problem you created for yourselves.