Britain and America are two nations divided by a common language.
attributed to George Bernard Shaw
Physics and Astronomy are two fields divided by a common interest in how the universe works. There is a considerable amount of overlap between some sub-fields of these subjects, and practically none at all in others. The aims and goals are often in common, but the methods, assumptions, history, and culture are quite distinct. This leads to considerable confusion, as with the English language – scientists with different backgrounds sometimes use the same words to mean rather different things.
A few terms that are commonly used to describe scientists who work on the subjects that I do include astronomer, astrophysicist, and cosmologist. I could be described as any of these. But I also know lots of scientists to whom these words could be applied but for whom they would mean something rather different.
A common question I get is “What’s the difference between an astronomer and an astrophysicist?” This is easy to answer from my experience as a long-distance commuter. If I get on a plane, and the person next to me is chatty and asks what I do, if I feel like chatting, I am an astronomer. If I don’t, I’m an astrophysicist. The first answer starts a conversation, the second shuts it down.
Flippant as that anecdote is, it is excruciatingly accurate – both for how people react (commuting between Cleveland and Baltimore for a dozen years provided lots of examples), and for what the difference is: practically none. If I try to offer a more accurate definition, then I am sure to fail to provide a complete answer, as I don’t think there is one. But to make the attempt:
Astronomy is the science of observing the sky, encompassing all elements required to do so. That includes practical matters like the technology of telescopes and their instruments across all wavelengths of the electromagnetic spectrum, and theoretical matters that allow us to interpret what we see up there: what’s a star? a nebula? a galaxy? How does the light emitted by these objects get to us? How do we count photons accurately and interpret what they mean?
Astrophysics is the science of how things in the sky work. What makes a star shine? [Nuclear reactions]. What produces a nebular spectrum? [The atomic physics of incredibly low density interstellar plasma.] What makes a spiral galaxy rotate? [Gravity! Gravity plus, well, you know, something. Or, if you read this blog, you know that we don’t really know.] So astrophysics is the physics of the objects astronomy discovers in the sky. This is a rather broad remit, and covers lots of physics.
With this definition, astrophysics is a subset of astronomy – such a large and essential subset that the terms can be and often are used interchangeably. These definitions are so intimately intertwined that the distinction is not obvious even for those of us who publish in the learned journals of the American Astronomical Society: the Astronomical Journal (AJ) and the Astrophysical Journal (ApJ). I am often hard-pressed to distinguish between them, but to attempt it in brief, the AJ is where you publish a paper that says “we observed these objects” and the ApJ is where you write “here is a model to explain these objects.” The opportunity for overlap is obvious: a paper that says “observations of these objects test/refute/corroborate this theory” could appear in either. Nevertheless, there was clearly a sufficient need to establish a separate journal focused on the physics of how things in the sky worked to launch the Astrophysical Journal in 1895 to complement the older Astronomical Journal (dating from 1849).
Cosmology is the study of the entire universe. As a science, it is the subset of astrophysics that encompasses observations that measure the universe as a physical entity: its size, age, expansion rate, and temporal evolution. Examples are sufficiently diverse that practicing scientists who call themselves cosmologists may have rather different ideas about what it encompasses, or whether it even counts as astrophysics in the way defined above.
Indeed, more generally, cosmology is where science, philosophy, and religion collide. People have always asked the big questions – we want to understand the world in which we find ourselves, our place in it, our relation to it, and to its Maker in the religious sense – and we have always made up stories to fill in the gaping void of our ignorance. Stories that become the stuff of myth and legend until they are unquestionable aspects of a misplaced faith that we understand all of this. The science of cosmology is far from immune to myth making, and oftentimes philosophical imperatives have overwhelmed observational facts. The lengthy persistence of SCDM in the absence of any credible evidence that Ωm = 1 is a recent example. Another that comes and goes is the desire for a Phoenix universe – one that expands, recollapses, and is then reborn for another cycle of expansion and contraction that repeats ad infinitum. This is appealing for philosophical reasons – the universe isn’t just some bizarre one-off – but there’s precious little that we know (or perhaps can know) to suggest it is a reality.
Nevertheless, genuine and enormous empirical progress has been made. It is stunning what we know now that we didn’t a century ago. It has only been 90 years since Hubble established that there are galaxies external to the Milky Way. Prior to that, the prevailing cosmology consisted of a single island universe – the Milky Way – that tapered off into an indefinite, empty void. Until Hubble established otherwise, it was widely (though not universally) thought that the spiral nebulae were some kind of gas clouds within the Milky Way. Instead, the universe is filled with millions and billions of galaxies comparable in stature to the Milky Way.
We have sometimes let our progress blind us to the gaping holes that remain in our knowledge. Some of our more imaginative and less grounded colleagues take some of our more fanciful stories to be established fact, where “established” sometimes just means the problem is old and familiar, and therefore boring, even if still unsolved. They race ahead to create new stories about entities like multiverses. To me, multiverses are manifestly metaphysical: great fun for late night bull sessions, but not a legitimate branch of physics.
So cosmology encompasses a lot. It can mean very different things to different people, and not all of it is scientific. I am not about to touch on the world-views of popular religions, all of which have some flavor of cosmology. There is controversy enough about these definitions among practicing scientists.
I started as a physicist. I earned an SB in physics from MIT in 1985, and went on to the physics (not the astrophysics) department of Princeton for grad school. I had elected to study physics because I had a burning curiosity about how the world works. It was not specific to astronomy as defined above. Indeed, astronomy seemed to me at the time to be but one of many curiosities, and not necessarily the main one.
There was no separate department of astronomy at MIT. Some people who practiced astrophysics were in the physics department; others in Earth, Atmospheric, and Planetary Science; still others in Mathematics. At the recommendation of my academic advisor Michael Feld, I wound up doing a senior thesis with George W. Clark, a high energy astrophysicist who mostly worked on cosmic rays and X-ray satellites. There was a large high energy astrophysics group at MIT that studied X-ray sources and the physics that produced them – things like neutron stars, black holes, supernova remnants, and the intracluster medium of clusters of galaxies – celestial objects with sufficiently extreme energies to make X-rays. The X-ray group needed to do optical follow-up (OK, there’s an X-ray source at this location on the sky. What’s there?) so they had joined the MDM Observatory. I had expressed a vague interest in orbital dynamics, and Clark had become interested in the structure of elliptical galaxies, motivated by the elegant orbital structures described by Martin Schwarzschild. The astrophysics group did a lot of work on instrumentation, so we had access to a new-fangled CCD. These made (and continue to make) much more sensitive detectors than photographic plates.
Empowered by this then-new technology, we embarked on a campaign to image elliptical galaxies with the MDM 1.3 m telescope. The initial goal was to search for axial twists as the predicted consequence of triaxial structure – Schwarzschild had shown that elliptical galaxies need not be oblate or prolate, but could have three distinct characteristic lengths along their principal axes. What we noticed instead with the sensitive CCD was a wealth of new features in the low surface brightness outskirts of these galaxies. Most elliptical galaxies just fade smoothly into obscurity, but every fourth or fifth case displayed distinct shells and ripples – features that were otherwise hard to spot and had only recently been highlighted by Malin & Carter.
At the time I was doing this work, I was of course reading up on galaxies in general, and came across Mike Disney’s arguments as to how low surface brightness galaxies could be ubiquitous and yet missed by many surveys. This resonated with my new observing experience. Look hard enough, and you would find something new that had never before been seen. This proved to be true, and remains true to this day.
I went on only two observing runs my senior year. The weather was bad for the first one, clearing only the last night during which I collected all the useful data. The second run came too late to contribute to my thesis. But I was enchanted by the observatory as a remote laboratory, perched in the solitude of the rugged mountains, themselves alone in an empty desert of subtly magnificent beauty. And it got dark at night. You could actually see the stars. More stars than can be imagined by those confined to the light pollution of a city.
It hadn’t occurred to me to apply to an astronomy graduate program. I continued on to Princeton, where I was assigned to work in the atomic physics lab of Will Happer. There I mostly measured the efficiency of various buffer gases in moderating spin exchange between sodium and xenon. This resulted in my first published paper.
In retrospect, this is kinda cool. As an alkali, the atomic structure of sodium is basically that of a noble gas with a spare electron it’s eager to give away in a chemical reaction. Xenon is a noble gas, chemically inert as it already has nicely complete atomic shells; it wants neither to give nor receive electrons from other elements. Put the two together in a vapor, and they can form weak van der Waals molecules in which they share the unwanted valence electron like a hot potato. The nifty thing is that one can spin-polarize the electron by optical pumping with a laser. As it happens, the wave function of the electron has a lot of overlap with the nucleus of the xenon (one of the allowed states has no angular momentum). Thanks to this overlap, the spin polarization imparted to the electron can be transferred to the xenon nucleus. In this way, it is possible to create large amounts of spin-polarized xenon nuclei. This greatly enhances the signal of MRI, and has found an application in medical imaging: a patient can breathe in a chemically inert [SAFE], spin polarized noble gas, making visible all the little passageways of the lungs that are otherwise invisible to an MRI. I contributed very little to making this possible, but it is probably the closest I’ll ever come to doing anything practical.
The same technology could, in principle, be applied to make dark matter detection experiments phenomenally more sensitive to spin-dependent interactions. Giant tanks of xenon have already become one of the leading ways to search for WIMP dark matter, gobbling up a significant fraction of the world supply of this rare noble gas. Spin polarizing the xenon on the scales of tons rather than grams is a considerable engineering challenge.
Now, in that last sentence, I lapsed into a bit of physics arrogance. We understand the process. Making it work is “just” a matter of engineering. In general, there is a lot of hard work involved in that “just,” and a lot of times it is a practical impossibility. That’s probably the case here, as the polarization decays away quickly – much more quickly than one could purify and pump tons of the stuff into a vat maintained at a temperature near absolute zero.
At the time, I did not appreciate the meaning of what I was doing. I did not like working in Happer’s lab. The windowless confines, kept dark but for the sickly orange glow of a sodium D laser, were not a positive environment to be in day after day after day. More importantly, the science did not call to my heart. I began to dream of a remote lab on a scenic mountain top.
I also found the culture in the physics department at Princeton to be toxic. Nothing mattered but to be smarter than the next guy (and it was practically all guys). There was no agreed measure for this, and for the most part people weren’t so brazen as to compare test scores. So the thing to do was Be Arrogant. Everybody walked around like they were too frickin’ smart to be bothered to talk to anyone else, or even see them under their upturned noses. It was weird – everybody there was smart, but no human could possibly be as smart as these people thought they were. Well, not everybody, of course – Jim Peebles is impossibly intelligent, sane, and even nice (perhaps he is an alien, or at least a Canadian) – but for most of Princeton arrogance was a defining characteristic that seeped unpleasantly into every interaction.
It was, in considerable part, arrogance that drove me away from physics. I was appalled by it. One of the best displays was put on by David Gross in a colloquium that marked the take-over of theoretical physics by string theory. The dude was talking confidently in bold positivist terms about predictions that were twenty orders of magnitude in energy beyond any conceivable experimental test. That, to me, wasn’t physics.
More than thirty years on, I can take cold comfort that my youthful intuition was correct. String theory has conspicuously failed to provide the vaunted “theory of everything” that was promised. Instead, we have vague “landscapes” of 10^500 possible theories. Just want one. 10^500 is not progress. It’s getting hopelessly lost. That’s what happens when brilliant ideologues are encouraged to wander about in their hyperactive imaginations without experimental guidance. You don’t get physics, you get metaphysics. If you think that sounds harsh, note that Gross himself takes exactly this issue with multiverses, saying the notion “smells of angels” and worrying that a generation of physicists will be misled down a garden path – exactly the way he misled a generation with string theory.
So I left Princeton, and switched to a field where progress could be made. I chose to go to the University of Michigan, because I knew it had access to the MDM telescopes (one of the M’s stood for Michigan, the other MIT, with the D for Dartmouth) and because I was getting married. My wife is an historian, and we needed a university that was good in both our fields.
When I got to Michigan, I was ready to do research. I wanted to do more on shell galaxies, and low surface brightness galaxies in general. I had had enough coursework, I reckoned; I was ready to DO science. So I was somewhat taken aback that they wanted me to do two more years of graduate coursework in astronomy.
Some of the physics arrogance had inevitably been incorporated into my outlook. To a physicist, all other fields are trivial. They are just particular realizations of some subset of physics. Chemistry is just applied atomic physics. Biology barely even counts as science, and those parts that do could be derived from physics, in principle. As mere subsets of physics, any other field can and will be picked up trivially.
After two years of graduate coursework in astronomy, I had the epiphany that the field was not trivial. There were excellent reasons, both practical and historical, why it was a separate field. I had been wrong to presume otherwise.
Modern physicists are not afflicted by this epiphany. That bad attitude I was guilty of persists and is remarkably widespread. I am frequently confronted by young physicists eager to mansplain my own field to me, who casually assume that I am ignorant of subjects that I wrote papers on before they started reading the literature, and who equate a disagreement with their interpretation on any subject with ignorance on my part. This is one place the fields diverge enormously. In physics, if it appears in a textbook, it must be true. In astronomy, we recognize that we’ve been wrong about the universe so many times, we’ve learned to be tolerant of interpretations that initially sound absurd. Today’s absurdity may be tomorrow’s obvious fact. Physicists don’t share this history, and often fail to distinguish interpretation from fact, much less cope with the possibility that a single set of facts may admit multiple interpretations.
Cosmology has often been a leader in being wrong, and consequently enjoyed a shady reputation in both physics and astronomy for much of the 20th century. When I started on the faculty at the University of Maryland in 1998, there was no graduate course in the subject. This seemed to me to be an obvious gap to fill, so I developed one. Some of the senior astronomy faculty expressed concern as to whether this could be a rigorous 3 credit graduate course, and sent a neutral representative to discuss the issue with me. He was satisfied. As would be any cosmologist – I was teaching LCDM before most other cosmologists had admitted it was a thing.
At that time, 1998, my wife was also a new faculty member at John Carroll University. They held a welcome picnic, which I attended as the spouse. So I strike up a conversation with another random spouse who is also standing around looking similarly out of place. Ask him what he does. “I’m a physicist.” Ah! common ground – what do you work on? “Cosmology and dark matter.” I was flabbergasted. How did I not know this person? It was Glenn Starkman, and this was my first indication that sometime in the preceding decade, cosmology had become an acceptable field in physics and not a suspect curiosity best left to woolly-minded astronomers.
This was my first clue that there were two entirely separate groups of professional scientists who self-identified as cosmologists. One from the astronomy tradition, one from physics. These groups use the same words to mean the same things – sometimes. There is a common language. But like British English and American English, sometimes different things are meant by the same words.
“Dark matter” is a good example. When I say dark matter, I mean the vast diversity of observational evidence for a discrepancy between measurable probes of gravity (orbital speeds, gravitational lensing, equilibrium hydrostatic temperatures, etc.) and what is predicted by the gravity of the observed baryonic material – the stars and gas we can see. When a physicist says “dark matter,” he seems usually to mean the vast array of theoretical hypotheses for what new particle the dark matter might be.
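To make the astronomer’s sense of the term concrete, here is a toy sketch (my illustration, with made-up but representative numbers, not anyone’s actual analysis): Newtonian gravity applied to the visible mass predicts orbital speeds that fall with radius, while observed rotation curves stay roughly flat.

```python
import math

G = 4.301e-6  # Newton's constant in units of kpc * (km/s)^2 / Msun

def v_newton(r_kpc, m_baryon_msun):
    """Circular speed from the enclosed visible mass alone (point-mass
    approximation, which suffices to illustrate the large-radius trend)."""
    return math.sqrt(G * m_baryon_msun / r_kpc)

m_baryon = 5e10   # hypothetical galaxy with ~5e10 Msun of stars and gas
v_observed = 200.0  # rotation curves are observed to stay roughly this flat (km/s)

for r in (5, 10, 20, 40):  # galactocentric radius in kpc
    v_pred = v_newton(r, m_baryon)
    print(f"r = {r:2d} kpc: Newtonian prediction {v_pred:5.1f} km/s, "
          f"observed ~{v_observed:.0f} km/s")
```

The widening gap between the predicted and observed columns at large radius is the mass discrepancy; whether to attribute it to unseen particles or to a change in the force law is the interpretive question.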
To give a recent example, a colleague who is a world-renowned expert on dark matter, and an observational astronomer in a physics department dominated by particle cosmologists, noted that their chairperson had advocated a particular hiring plan because “we have no one who works on dark matter.” This came across as incredibly disrespectful, which it is. But it is also simply clueless. It took some talking to work through, but what we think he meant was that they had no one who worked on laboratory experiments to detect dark matter. That’s a valid thing to do, which astronomers don’t deny. But it is a severely limited way to think about it.
To date, the evidence for dark matter is 100% astronomical in nature. That’s all of it. Despite enormous effort and progress, laboratory experiments provide 0%. Zero point zero zero zero. And before some fool points to the cosmic microwave background, that is not a laboratory experiment. It is astronomy as defined above: information gleaned from observation of the sky. That it is done with photons from the mm and microwave part of the spectrum instead of the optical part of the spectrum doesn’t make it fundamentally different: it is still an observation of the sky.
And yet, apparently the observational work that my colleague did was unappreciated by his own department head, whom I know to fancy himself an expert on the subject. Yet the existence of a complementary expert in his own department never registered with him. Even though, as chair, he would be responsible for reviewing the contributions of the faculty in his department on an annual basis.
To many physicists we astronomers are simply invisible. What could we possibly teach them about cosmology or dark matter? That we’ve been doing it for a lot longer is irrelevant. Only what they [re]invent themselves is valid, because astronomy is a subservient subfield populated by people who weren’t smart enough to become particle physicists. Because particle physicists are the smartest people in the world. Just ask one. He’ll tell you.
To give just one personal example of many: a few years ago, after I had published a paper in the premiere physics journal, I had a particle physics colleague ask, in apparent sincerity, “Are you an astrophysicist?” I managed to refrain from shouting YES YOU CLUELESS DUNCE! Only been doing astrophysics for my entire career!
As near as I can work out, his erroneous definition of astrophysicist involved having a Ph.D. in physics. That’s a good basis to start learning astrophysics, but it doesn’t actually qualify. Kris Davidson noted a similar sociology among his particle physics colleagues: “They simply declare themselves to be astrophysicists.” Well, I can tell you – having made that same mistake personally – it ain’t that simple. I’m pleased that so many physicists are finally figuring out what I did in the 1980s, and welcome their interest in astrophysics and cosmology. But they need to actually learn the subject, not just assume they’ll pick it up in a snap without actually doing so.
63 thoughts on “Two fields divided by a common interest”
Funny and sad at the same time. Such is life, I guess.
Is Jim Peebles Canadian, or were you speculating?
I happen to know a great many Canadians, including myself and my relatives for starters. Canadians may be, on average, more polite than are U.S. born, but we can be just as arrogant, in private. We just don’t let others see it. For example while an American might say “Sorry, your theory is clearly inferior to the standard theory (mine)”, a Canadian would say “Your theory is interesting, perhaps even ingenious, but there are already well accepted theories (such as mine) in the literature”. Both statements say the same thing, but the Canadian is a bit more subtle, allowing an outside observer to infer that the Canadian is allowing for a possibility of further interest in the upstart theory. This is not really the case. The Canadian is dismissing the new theory with every bit as much prejudice as the American, we are just slightly more subtle about it.
To sum up, we Canadians can be just as much a jerk as any American, we are just better at obscuring it.
Sorry for the personal rant about damn Canadians. Totally off subject really.
Anyway, interesting article, even if not a big surprise. If someday DM is definitively shown to be an ‘epicyclic’ illusion, what will become of all these Physics “Cosmologists”?
Peebles is originally Canadian, yes.
The concept of DM is not falsifiable. So the issue is, how do we tell if it is an epicycle?
You said in a talk from 2015 that if particulate dark matter interacts only gravitationally then DM will not be falsifiable as it’s not testable. Should I understand it as DM will be indistinguishable from a modification of the gravitation – i.e. just two ways to look at the same thing? Or there will be still some potential tests / predictions that could be used to discriminate between the two situations?
There are [at least] two separate issues here. One is whether particle dark matter is a falsifiable concept. The other is the role of predictions in distinguishing between it and something else.
The answer to the first I think is No. I do not see a way to definitively exclude the existence of a dark matter particle. The best you can hope to do is search for examples with very specific expectations about what they’re like (e.g., WIMPs), and either detect them, or failing that, exclude some set of parameter space in which they might live. WIMPs are a virtuous example in that there was a reasonably clear expectation for what their properties should be. Those weren’t detected, so the expected properties were adjusted. Those then-new now-old expectations have now also been excluded. So expectations are adjusted again. This can go on forever, and already had been going on for a long time when I wrote about it a decade ago: http://astroweb.case.edu/ssm/darkmatter/WIMPexperiments.html
As for predictions, yes, that should in principle get us out of this mess, if the predictions of some alternative are (i) distinguishable from dark matter, and (ii) come true. Even if those conditions are satisfied, one can always make up a version of dark matter theory that mimics its alternative. So (i) can never really be satisfied, though if DM can be made to fit anything while an alternative predicts only one specific thing, then I’d say the latter is preferable if (ii) those unique predictions come true.
MOND has satisfied both (i) and (ii) many times. Yet much of the literature these days is consumed with papers asserting that DM does exactly what MOND does, “naturally.” It may be that DM causes the MONDian phenomenology, but to call this “natural” is the opposite of what this word means in science.
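For concreteness, the MONDian phenomenology at issue can be summarized by the radial acceleration relation. The fitting function and acceleration scale below are the ones published by McGaugh, Lelli & Schombert (2016); the snippet itself is just my illustrative sketch:

```python
import math

A0 = 1.2e-10  # characteristic acceleration scale in m/s^2 (published best fit)

def g_obs(g_bar):
    """Observed centripetal acceleration as a function of the acceleration
    predicted by the baryons alone, per the radial acceleration relation."""
    return g_bar / (1.0 - math.exp(-math.sqrt(g_bar / A0)))

# High accelerations are Newtonian (g_obs ~ g_bar); low accelerations
# follow the deep-MOND limit (g_obs ~ sqrt(g_bar * A0)).
for g_bar in (1e-8, 1e-10, 1e-12):
    print(f"g_bar = {g_bar:.0e}  g_obs = {g_obs(g_bar):.2e}  "
          f"ratio = {g_obs(g_bar) / g_bar:.2f}")
```

A single acceleration scale does the work that would otherwise require a halo’s worth of fit parameters per galaxy.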
One issue I’d like clarified is gravitational lensing. Doesn’t gravitational lensing establish dark matter, especially in regions of space with little or no baryonic matter, and can MOND reproduce it? Or do MOND theorists posit black holes as the source of gravitational lensing?
Gravitational lensing establishes a mass discrepancy, like all the other data. One has to be careful about claims to see this effect where there is no baryonic matter, as the opportunity for false detections in this game is enormous. Some may be credible, but even there one has to be careful about what you’d really expect to see in a theory like MOND. See, e.g., https://arxiv.org/abs/0709.2561 https://arxiv.org/abs/0710.4935 https://arxiv.org/abs/0707.0790
“The concept of DM is not falsifiable. So the issue is, how do we tell if it is an epicycle?”
Well, it sure has the symptoms of being an epicycle based model. Every time a problem is found a solution is devised by adding a new parameter.
As a mathematician I wonder if there is some way to correlate the number of open parameters to the complexity of a problem that can be solved with the model. So you could say that a problem of complexity Order N could be solved with any model that has X open parameters. The trivial case would probably be to say Order N complexity can be solved with any model of N open parameters. I would expect that there is some sort of power relationship between number of parameters and order of complexity, e.g. any X open parameter model is sufficient to solve an Order X^2 complexity problem.
If such a relationship could be proven then an additional condition could be added to Occam’s razor: the model with the least open parameters is preferred, AND A MODEL MUST HAVE FEWER open parameters than the square root (or whatever relationship applies) of the complexity.
Assuming all that is possible, starting with some non-arbitrary way of describing complexity, I would not be surprised if some current well regarded theories could be shown to have way too many open parameters to be meaningful.
I would not be surprised if someone has already done something like this, I just haven’t heard about it.
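The trivial case mentioned above – that a model with N free parameters can fit any N data points exactly – can be made concrete with polynomial interpolation. A minimal sketch, with arbitrary made-up data:

```python
def lagrange_fit(points):
    """Return a function that passes exactly through the given (x, y) points.
    A degree-(N-1) polynomial has N free coefficients, so it can fit any N
    points with distinct x values -- matching the data perfectly while
    explaining nothing."""
    def p(x):
        total = 0.0
        for i, (xi, yi) in enumerate(points):
            term = yi
            for j, (xj, _) in enumerate(points):
                if i != j:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return p

# Four arbitrary "observations": four parameters suffice to fit them exactly.
data = [(0, 1.0), (1, 3.0), (2, -2.0), (5, 7.0)]
p = lagrange_fit(data)
print([round(p(x), 6) for x, _ in data])  # recovers [1.0, 3.0, -2.0, 7.0]
```

Formal versions of this intuition do exist in model selection statistics, where criteria such as AIC and BIC penalize a model’s likelihood by its number of free parameters.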
There are too many ways to do this, none of them fair. Finding a straight-up comparison is a challenge since the ideas are so different. But the problem is deeper. We’ve been tweaking DM models for so long, doing so is considered “natural,” and only the latest tweak is “new.” For rotation curves, MOND fits the bulk of the data well with an absolute minimum of parameters. DM theories that try to emulate this have, at a minimum, two additional free parameters. But those sit on top of a stack of tweaks so deep, the students of the students of the students of the people who wrote the original codes seem to be unaware of how many layers of fudges there are. They now say with a straight face that what they spent the last 20 years denying, ignoring, and pouring scorn on is now “natural” in DM. They knew it all along.
I’m being a little pernickety here, but while Hubble’s redshift-distance relation was the convincing proof of other galaxies as being like the Milky Way, it was the last step. Slipher had measured Doppler shifts of galaxies over a decade and a half earlier and the Shapley-Curtis debate in 1920 had identified the critical measurements. See https://apod.nasa.gov/diamond_jubilee/debate_1920.html
Please don’t be persnickety, especially when you’re conflating the redshift-distance relation with the nature of spiral nebulae as external galaxies coequal with the Milky Way. Those are different things. It was Hubble’s distance measurements that established the latter. That could have been true in a static, non-expanding universe.
I entirely agree that Slipher’s work on measuring the redshifts was both essential and has been unfairly overlooked. Recently I voted against the IAU’s renaming of the Hubble law to the Hubble-Lemaitre law, because I thought really we ought, if anything, to rename it the Slipher-Hubble law. The redshift-distance relation requires measurements of both redshift and distance, and Hubble only did the latter. Redshifts are relatively easy today, but they were incredibly difficult in Slipher’s time. He had to expose the same photographic plate for multiple nights to get one redshift. The technical challenges of relocating the target night after night without accidentally ruining the exposure are enormous. Hubble was very lucky Slipher had already made these measurements when he got around to measuring distances, or perhaps the discovery of the expanding universe would have been credited the other way around.
That’s not relevant here. This post was already too long, so I deleted a section about this, and also the Great Debate. All I was saying is that there are galaxies external to the Milky Way. This was established by Hubble’s distance measurement to Andromeda by detecting Cepheid variables. Full stop.
Came this way because it was mentioned in P. Woit’s “Not Even Wrong” blog. Also loved how you studied some of the largest objects (galaxies) and also some of the smallest objects (xenon and electrons); think about the order-of-magnitude distance scale between those two; that is great! Your sociological observations are very interesting as well, thank you.
Nicely stated, though I feel compelled to point out that you failed to mention a key difference between astronomy and physics/astrophysics: astronomy has its own muse. (https://en.wikipedia.org/wiki/Urania) (Take that, physics!)
there was clearly a sufficient need to establish a separate journal focused on the physics of how things in the sky worked to launch the Astrophysical Journal in 1895
Although if you read the initial editorial in the first issue of ApJ (http://adsabs.harvard.edu/abs/1895ApJ.....1...80H), there was a clear emphasis on instrumentation and observation, part of the idea being to discuss laboratory experiments and techniques (“spectroscopic, bolometric, radiometric, photographic and photometric researches conducted in the laboratory”) with astronomical implications and applications. Spectroscopy is mentioned again and again as an especially important subject. (There’s even this sentence — “It must not be supposed that The Astrophysical Journal will deal only with the astronomical applications of the spectroscope” — just to keep people from worrying that it really would be all about spectroscopy!)
The impression I get is not that ApJ was founded to discuss the theoretical physics of how things in the sky work, but rather to bring together new methods and observations from laboratory physics — especially spectroscopy — and astronomy.
I didn’t mean to suggest the ApJ was invented for theory. The difference between it and the AJ is obscure even to practitioners; I was mostly trying to parse the modern distinction between the two. Nineteenth and early twentieth century spectroscopy certainly was a driving force in helping to turn astronomy into a physical science. Gotta interpret those spectra as well as measure them.
I didn’t mean to suggest the ApJ was invented for theory.
OK, but “there was clearly a sufficient need to establish a separate journal focused on the physics of how things in the sky worked to launch the Astrophysical Journal in 1895” did kind of suggest exactly that.
(Personally, I would say the balance of articles is more weighted towards observations in AJ and towards theory in ApJ, though there are plenty of observations in the latter and a certain amount of theory in the former.)
You make a good point. I am surrounded by DM “experts,” most of whom have never heard of the Tully-Fisher relation.
I have lost count of the number of physics seminars that I’ve attended where the speaker starts with a single slide of astronomical evidence before launching into his personal favorite idea for dark matter. This one slide inevitably contains a flat rotation curve plus either a picture of the bullet cluster or the cosmic power spectrum (or both). On those occasions, I’m pretty sure I’m the only person in the room who has written refereed papers on all three topics. Yet the attitude is “These things tell us we need dark matter; us with the BigBrains will take it from here.” There is, as you say, little awareness that there is more to the story than flat rotation curves, and even if they’ve heard of Tully-Fisher, they rarely grasp its relevance, let alone importance.
Once Rocky Kolb referred to the Tully-Fisher relation (in response to a question) as the “Fishy Tully” relation.
… as the “Fishy Tully” relation
Are you sure that wasn’t just a mis-heard “Fisher-Tully relation”? That used to be (and very occasionally still is) an alternate name for the relation.
From the sound of things Princeton physics undergrads 60 years ago weren’t this arrogant (although there were a few). Classmate Heinz Pagels was quiet, although later described as flamboyant. There were a few self-proclaimed geniuses who lapsed into obscurity with the passage of time. The real star (whom I didn’t know) was Jim Hartle. The valedictorian worked for Wheeler (who taught freshman physics for pre-meds and engineers — not honors physics — he brought in Niels Bohr to talk to us one day). He had a double major in philosophy. When asked what the difference was, he said there wasn’t any. He became a psychoanalyst and died far too young.
My experience was strictly with physics grad students in the mid-80s. The undergrads were a distinct population. So much so that they might as well have been aliens.
On second thought, in retirement I audited a course taught by David A. Cox using his book Ideals, Varieties and Algorithms. He got his math PhD from Princeton in the 70s, and a less egotistical (although clearly brilliant) individual cannot be imagined.
“So much so that they might as well have been aliens.”
I’m in a mathematics department (doing computer-sciency stuff), but your description of astronomers and physicists is very reminiscent of the interaction between various fields of computer science that I’ve observed in our computer science department. Which seems to imply (unfortunately) that some aspects of the dysfunctionality of the CS department are unfixable. I don’t know how consistent the mindset of these fields of CS is between various universities, but I suspect that it’s fairly consistent, like it is in physics/astronomy.
Yeah – once these attitudes become ingrained, it is hard to repair. I have a colleague in our math department who summarizes such sociology with “It only takes a little arsenic to poison the well.”
Our math department, luckily, is pretty good about things like this.
When I started on the faculty at the University of Maryland in 1998, there was no graduate course in the subject. This seemed to me an obvious gap to fill, so I developed one. Some of the senior astronomy faculty expressed concern as to whether this could be a rigorous 3 credit graduate course.
That’s… odd, given that cosmology was part of the standard graduate course curriculum in the Astronomy Dept. at the University of Wisconsin (Madison) back in 1992, when I started as a grad student. I’m not sure whether that means Wisconsin or Maryland was atypical, though I’d be inclined to guess the latter.
Maryland was atypical in a number of ways. Planetary science was a big part of the astronomy department; often that is a separate department. But what I found most odd was the relative dearth of extragalactic astronomers. In that way, I think Wisconsin was and probably still is more typical.
I’d be curious when and where cosmology was broken out as a separate subject worthy of a full grad level course. It came up enough at Princeton to get a heavy dose of SCDM indoctrination, but there was no course dedicated to it at the time. At Michigan in the late ’80s cosmology was part of the extragalactic course, which was mostly about galaxies, though we did have separate seminars dedicated to cosmology topics (there was a 1 credit seminar on measuring the Hubble constant and related issues – the age of the oldest stars, and the cosmic mass density). I guess that was a sign that there was getting to be more than could be wedged into one extragalactic course.
To me, galaxies and cosmology go hand in hand. That used to be true in physics as well as astronomy, as I remember Peebles emphasizing that galaxies were the building blocks of the universe, and the characteristically ambitious yet naive application of galaxies to cosmology by Loh & Spillar (https://ui.adsabs.harvard.edu/abs/1986ApJ...307L...1L/abstract). You look out there, and galaxies are what you see as the primary constituent of the universe. Yet the attitude in physics has evolved to see galaxies as “small” things, not interesting in themselves, useful only as tracers of the large scale structure. In recent years, I’ve heard Peebles himself make the case that local galaxies do not look like what you’d expect in [L]CDM: settled disks, small bulges, few satellites, very isolated spirals much like those in groups. The audience… failed to engage with his points. Or grasp them, one suspects.
“The concept of DM is not falsifiable. So the issue is, how do we tell if it is an epicycle?”
Sorry if this is a naive question, but… If the External Field Effect, for instance, were established observationally (not saying it isn’t, just established with high accuracy), or if the amount of Dark Matter measured from the rotation curves of galaxies was significantly higher than what we measure from the CMB, wouldn’t it make the Dark Matter hypothesis very unlikely, regardless of its nature?
Yes. These things have already happened.
The DM fraction required for galaxies is much higher than the cosmic fraction indicated by the CMB. This mismatch starts small for large galaxies but grows steadily to lower masses, quickly exceeding an order of magnitude. This is casually dismissed as the result of baryons getting blown out of low mass halos.
The EFE was essential to the successful prediction of the velocity dispersions of Crater 2, And XIX, NGC1052-DF2, and a number of other dwarf galaxies. That could be considered a detection. I don’t think it is a 5 sigma detection, yet, but it is also a prediction MOND makes, correctly, that is impossible to make in LCDM. The response is to ignore what MOND gets right and make up some story about how what is impossible to predict in LCDM is somehow completely natural. For the dwarfs, I’ve heard “galactic winds!” and “tidal disruption!” and “la la la la la!”.
So – DM hit “very unlikely” a long time ago. But that’s a relative standard. It isn’t falsifiable, so it remains forever more likely than any alternative.
“The EFE was essential to the successful prediction of the velocity dispersions of Crater 2, And XIX, NGC1052-DF2, and a number of other dwarf galaxies.”
In the news, NGC1052-DF2 has been claimed to have an incorrect distance measurement which, when corrected for, makes it just a regular “dark matter” galaxy. How does the EFE explanation change with this correction?
If the data for DF2 are wrong, then any conclusions drawn from those data will also be wrong – in any theory. If DF2 is closer as suggested by Trujillo et al., then it is no longer a satellite of NGC1052 and not affected by its EFE. If there are no other big galaxies nearby, then the isolated MOND case applies, and the velocity dispersion should follow from whatever the luminosity is at the new distance. MOND’s prediction was within the uncertainties when last I checked, but the numbers keep changing, so see the first sentence.
In principle it is a good idea to look for galaxies that lack dark matter. We did this ourselves in https://arxiv.org/abs/1509.05404. We even found the same result – the galaxies in question seem to lack dark matter. Unfortunately, the data do not sustain that conclusion any better than those for DF2. The only difference is that we recognized that from the start, so didn’t make a huge fuss about it.
One could arguably slightly amend one sentence of the post to say instead: “To date, the ***positive empirical*** evidence for dark matter is 100% astronomical in nature.”
This is because while there is no positive empirical evidence for dark matter from any source other than observational evidence from astronomy, there are two other important means by which dark matter phenomena are better understood.
One is computational work (both analytical and N-body) that looks at existing theories and select modifications of them to see what those theories predict and whether those theories are internally consistent and consistent with other laws of physics that are believed to be true.
The second is ***negative empirical evidence*** from laboratory-type experiments, such as particle collider experiments. Empirical evidence that rules out a possible explanation of something is still important empirical evidence, even though it can’t provide us with an answer all by itself. Efforts to understand dark matter phenomena benefit greatly from negative empirical evidence that rules out a wide swath of dark matter particle theories including most of the parameter space for what was initially the most popular dark matter particle candidate: the supersymmetric WIMP.
Now, in fairness to the original language, negative empirical evidence is strictly speaking evidence “against dark matter”, rather than “for it”, even though it is still important evidence in conducting the overall scientific inquiry. And, arguably, the computational work is something you do with evidence, rather than evidence itself. But the output of an analytic calculation or an N-body simulation is used in a manner very similar to that of observational evidence from astronomy and laboratory work, so maybe it is a distinction without a difference.
Right – the LHC, for example, excludes a lot of what shoulda been.
One has to be careful with simulations. They are not evidence, but they can certainly be a helpful tool for interpreting complex phenomena. But they only help to the extent that they help us understand physically what is going on. That standard is rarely met by the simulations relevant to this discussion. All too often, they illustrate the old saw: garbage in, garbage out.
“Jim Peebles is impossibly intelligent, sane, and even nice (perhaps he is an alien, or at least a Canadian)”
N.B. Under U.S. law, Canadians are a subset of the larger set of all aliens. And the word “aliens” appears throughout the United States Code and corresponding regulations, while the word “Canadians” appears only rarely. 😉
Thanks for getting that joke.
Undoubtedly, bright people considered cold stars and black holes as candidates for dark matter. Why does it not work?
Very faint stars and brown dwarfs were one of the possibilities considered early on. These immediately run into problems with Big Bang Nucleosynthesis: we need more mass than allowed in order to explain the abundances of the light elements. Even ignoring that, people looked for them – even hard-to-see things we can detect if we work at it. Major campaigns to detect microlensing were conducted in the ’90s. These detect dark objects as they drift in front of background stars, temporarily brightening the background star by gravitationally focusing its light. These events have been detected, but the rates are too low to add up to the dark matter. They are, in fact, consistent with what we expected for stars all along. So we need a form of dark matter that won’t produce a microlensing signal – that means something small (less than Pluto) or rare (larger than about ten suns). There does remain a narrow window for massive (~30 solar mass) black holes to be the dark matter, but how do you make them? They need to be in place before nucleosynthesis, or again they mess up the abundances of the light elements. That happens in the first three minutes, so God has to snap his fingers and make a bunch of giant black holes in the first seconds after the big bang. Aside from divine intervention, no plausible way is known for this to occur.
I’ve always considered explanations like cold stars, fleeting planets or black holes with a lot of skepticism. Such objects even if they start distributed in a sphere would quickly form a disk as flat as the galaxy they surround. The effect of gravity that tends to do that is much stronger than all others, so you need some kind of interaction between these objects to maintain the spherical distribution, or they won’t be the Dark Matter you need them for in the first place. Black holes are the most extreme example of this problem, since it is absolutely impossible to repel a black hole, by any means.
When I raised this issue on another blog (a long time ago), I was told “you need to learn more about galaxy formation”. That wasn’t terribly convincing, so if someone who understands what I’m talking about could give more details, or point me to a paper that addresses the issue, that’d be very appreciated.
I remember watching a session about primordial black holes where the speaker explained why they can’t be detected, with one of the arguments going like “there might be only one of them in the solar system”. Nobody in the audience told him that 99% of the solar system mass ended up within the Sun when it formed, so yeah, it’s hard to detect but… is it really what you mean?
Such objects even if they start distributed in a sphere would quickly form a disk as flat as the galaxy they surround.
No, they wouldn’t. Our own galaxy has a nearly spherical halo of old stars and globular clusters — why hasn’t that formed a disk? Globular clusters themselves are excellent examples: they’re among the oldest objects in the universe, and they’re still approximately spherical. Massive elliptical galaxies are often close to spherical. And so on.
I think you’re confusing this with what happens to gas clouds, which do form disks if they collapse; that’s what gravitationally bound systems of particles with extra, non-gravitational forms of interaction (“dissipation”) do. Systems made up of purely gravitationally interacting “particles” — stars, black holes, putative non-self-interacting DM — don’t behave that way.
Thank you for answering. I think this is going off-topic, but I would like to add some comments. First of all, the effect I’m talking about (I couldn’t find its name) has additional requirements, and what I wrote was too generic. I took a shortcut, when I shouldn’t have.
Second of all, you are right about globular clusters demonstrating the opposite, and it cleared my mind about what the problem really is. If we take for example our galaxy, I think we can consider it approximately flat? It has a halo indeed, but if I’m to trust Wikipedia, it contains only a few percent of the baryonic content.
So, I want to just talk about systems where the flattening effect happened. Let’s forget I started from a spherical distribution of Dark Matter, it was a mistake. The real issue I have is: why, if the visible matter has settled into a configuration that is almost flat, did the cold star/planetoid dark matter not do the same? Why did these primordial black holes not end up inside stars like 99% of the normal matter?
Indeed, “you need to learn more about galaxy formation” is not a helpful argument, especially since galaxy formation theory has all the solidity of oaths written in water. Very broadly, gas can dissipate and settle to form a disk – that is presumably the origin of spiral disks: the gas settles gradually into a plane, and stars form within the settled disk. Stars, and black holes, once formed, are tiny compared to the space that they’re in and have practically no cross-section for interactions that would alter their orbits over many Hubble times. So, once formed, most stars stay on pretty much the same orbit they formed on. Not exactly, of course, but if you manage to make a lot of black holes early on, they would behave as cold (dynamically slow moving) dark matter and form quasi-spherical halos without settling into a disk. So the bigger challenge, it seems to me, is to make the darn things in the first place.
“Very faint stars and brown dwarfs were one of the possibilities considered early on. These immediately run into problems with Big Bang Nucleosynthesis: we need more mass than allowed in order to explain the abundances of the light elements.”
In a MOND and no dark matter universe, how does MOND then deal with Big Bang Nucleosynthesis?
I am not a physicist but I would guess that MOND does not affect nucleosynthesis and therefore the current model can be used, which assumes a lowish baryon density. “Dark matter” composed of faint stars and black holes would increase the baryon density and therefore affect the results of the nucleosynthesis, moving the calculations away from the observed results.
Yes. This is exactly correct. Baryon physics, including BBN, is all normal in a MOND universe. The larger Omega_matter that we infer on large scales is, in this context, simply the same over-statement of the dynamical mass that one obtains in individual galaxies.
These issues have been addressed by myself and others many times in the literature. I think it was Bob Sanders who first showed that all the usual early universe results carry over in MOND (https://arxiv.org/abs/astro-ph/9710335). See https://arxiv.org/abs/1404.7525 for a review. Arguably, BBN makes *more* sense in MOND than in LCDM because one doesn’t have to ignore Lithium. See, e.g., https://arxiv.org/abs/0707.3795
So why are we seeing so many of these with LIGO?
What is the difference between a string theorist and a metaphysicist? They are interchangeable and synonymous.
Particle physicists aren’t the smartest people in the world. Mathematicians are. Hilbert famously said “Physics is too hard for physicists.” Are not mathematicians the ones who invent new math that physicists later learn and use? They view physicists as slow learners and unoriginal thinkers.
I am a mathematician. The smartest person I ever knew had a PhD in physics, but was chair of the Math department where I studied. Not sure what that implies, but it is interesting.
I think the problem with physicists compared to mathematicians, at least in part, is that there are many unspoken restraints on a physicist’s career, i.e. there are systemic pressures to conform in order to get published and advance in your career. For a mathematician just the opposite is usually the case. Mathematicians advance in their career by doing things no one has done before, by trying out new, wacky, ideas.
“Hilbert famously said “Physics is too hard for physicists.””
And the upshot of that self-aggrandizing mathematicist’s viewpoint is precisely the reason for the so-called “crisis in physics”. Hilbert had it exactly backwards: physics, with its irreducibly complex and empirical nature, is the last place that mathematicians should have been allowed to run amok with their reductionist proclivity for over-simplifications of mathematical convenience.
The end results, the two standard models of quantum theory/particle physics and LCDM, present accounts of physical reality that are either inconsistent with observations, or simply incoherent about the fundamental nature of physical reality, or both. The mathematicism doctrine, that math somehow fundamentally underpins reality, has produced nothing but scientific gibberish over the last 40 plus years.
Exactly! And Hilbert himself failed in his axiomatic program with Gödel’s incompleteness theorems. Extensions of Gödel’s results by Chaitin and others show that incompleteness is pervasive, much as irrational numbers are pervasive in the set of real numbers; even more, these results show that complexity is a source of incompleteness. The narrow reductionist approach pervading theoretical physics ignores these results and implicitly assumes that its models of Reality are “complete”. Reality’s complexity implies that irreducible/strongly emergent properties are pervasive, and nothing can replace the constant observation and testing of Reality; truly new insights will always come from those observations and tests, beyond the always narrow preconceptions of theoreticians and any existing model or paradigm.
The String Theory fiasco of more than 40 years and the pursuit of ghost dark matter particles for a similar period of time show that scientists as a group are not different from any other group of humans: they tend to follow the flow/herd uncritically and will shun/belittle anybody who tries to move away from the herd. Clearly there is a systemic problem in mainstream Science rooted in its departure from objectivity: models of Reality take precedence over empirical evidence.
This again shows one more time that we should not take anything on faith from anybody; including scientists obviously. Accepting uncritically claims from any “authority”, including the authority of “consensus” is a clear expression of complacency.
As a mathematician, I would say the problem is not so much a takeover by mathematicians, as a takeover by the wrong sort of mathematicians. They have not been guilty of over-simplifications, but rather the opposite. To me, the problem is over-elaborate geometrisation, when the underlying algebraic problems that are obvious in the standard model of particle physics are simply not being addressed. Your description of the end results, however, is spot on. The mathematics in the standard model is just wrong, there is no other word for it. Try saying that to a theoretical physicist, and watch their reaction! It tells you a lot about the problems when you see them get incandescent with rage. Experimental physicists, in my experience, are much more open-minded.
Robert, I’m usually better at distinguishing between math/mathematicians and mathematicism/mathematicists. It is the latter subgroup, that has hijacked theoretical physics and turned it into a math-based fantasy land. As to the complexity of the standard models, your point is well taken, but the reason for the overly complex models seems, to me, the axiomatic oversimplifications that lie at their root.
I’m intrigued by your comment “the underlying algebraic problems that are obvious in the standard model of particle physics”. Could you elaborate? Thanks.
Ah, sorry, I failed to take notice of your careful distinction between mathematicians and mathematicists, with which I concur. You may be right about axiomatic oversimplification. The role of a mathematician here should be to take the experimental data, and find the best mathematical theory to fit, whereas the mathematicists seem to take their favourite bit of mathematics and try to massage the data to fit.
The underlying algebraic problems in the standard model are of this type, whereby the experiments are supposed to fit the preconceived ideas about the symmetry groups, rather than finding the right groups to describe what experiments actually see. As a group theorist, it is obvious to me that the group theory in the standard model just doesn’t do what it is supposed to do.
Stacy, great to have you back blogging. I’ve got a Will Happer story, but it only goes to show how small the atomic physics community is (was?). So here is a bit of an ‘off the wall’ question: are there any people/groups thinking about ‘local’ measurement of gravity at very small accelerations (a ~ 10^-10 m/s^2, the MOND range)?
There is this paper,
https://arxiv.org/abs/astro-ph/0602266
Which is about the low-g saddle points moving around in the solar system. I was trying to work out how to get a probe to fly through one of these points… and my simple ideas did not look promising (a satellite orbiting the sun out near Jupiter). And then there is the brute force approach: send a huge corner cube reflector way out there (0.1 light years) and see how it moves. The corner cube is like a 100 to 1,000 year mission.
Are there any better ideas?
Testing motions in special places is a good idea, as in the paper you cite. But many of those tests are theory specific. Benoit Famaey and I spent some time working with the LISA group to see if it was worthwhile to send it through one of the points suggested by Bekenstein & Magueijo. The initial answer was yes but the final answer was no: solar system data are already so good that all the effects LISA might be sensitive to were already excluded. There might be better luck further out, with some of the smaller moons of Saturn, for example. But the predicted signals are small and non-detections are not all that informative (though a positive detection would be revolutionary).
The brute force approach I have not pursued for the obvious reasons. Perhaps at this point it would be good motivation for novel propulsion systems.
There are other ideas, but I wouldn’t say they’re better.
It isn’t at all clear to me what the signal looks like when you fly through a saddle point.
(assuming that whatever causes MOND/dark matter happens fast enough to follow the saddle point around the sun.) I thought maybe send a whole bunch of ‘probes’ through the area… But it’s a tiny effect, as you say. The brute force approach is equally depressing. Phil Hobbs helped me spitball some numbers. I’d have to go back and check, but IIRC with a 1 km corner cube at 0.1 light year, a terawatt laser would give one photon per second reflected back to earth.
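For what it’s worth, the idealized diffraction-limited link budget is easy to sketch. This is only a hedged back-of-envelope: the wavelength, apertures, and light-year conversion below are my own assumptions, not the numbers Phil Hobbs actually used.

```python
# Idealized link budget for laser ranging to a distant corner-cube reflector.
# All parameter values are illustrative assumptions, not figures from the thread.

h, c = 6.626e-34, 3.0e8        # Planck constant [J s], speed of light [m/s]

P_laser = 1e12                 # transmitted power: 1 TW, as in the comment
wavelength = 1.0e-6            # assumed near-infrared laser [m]
R = 0.1 * 9.46e15              # 0.1 light year [m]
d_cube = 1.0e3                 # corner-cube diameter: 1 km, as in the comment
D_tx = D_rx = 10.0             # assumed transmit/receive telescope apertures [m]

photon_rate = P_laser / (h * c / wavelength)   # emitted photons per second

# Outbound: a diffraction-limited beam spreads to roughly (lambda/D_tx)*R.
spot_out = (wavelength / D_tx) * R
frac_on_cube = (d_cube / spot_out) ** 2        # fraction intercepted by the cube

# Return: the cube re-collimates the light with divergence ~ lambda/d_cube.
spot_back = (wavelength / d_cube) * R
frac_received = (D_rx / spot_back) ** 2        # fraction caught back at Earth

received = photon_rate * frac_on_cube * frac_received
print(f"received ~ {received:.1e} photons/s (idealized upper bound)")
```

This idealized bound comes out far more optimistic than one photon per second, which is itself the point: the gap is eaten by real-world losses the sketch ignores. In particular, a transverse velocity v displaces a retroreflected beam by an angle of about 2v/c, which for Earth’s ~30 km/s orbital motion is ~2e-4 rad, vastly larger than the nanoradian return divergence of a kilometer-scale cube.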
Complex systems may exhibit strong emergent properties that can’t be reduced/explained only by the properties of its elementary components; it is well known that these strong emergent properties can be seen as a “rigidity” of the system.
Obviously galaxies are complex systems and the “dark matter” observational effects can be an emergent property of the galaxy as a complex system.
Many physicists are unable to see anything beyond their very narrow reductionist mindset, or to realize that their axiomatic/reductionist approach is intrinsically flawed when facing complex systems.
I’ve come up with a hypothesis that Planck particles may form in SMBH cores (the misnomered singularity) and, as these particles are at the Planck energy, they cannot participate in gravity. So as matter-energy joins the Planck core, the disappearance of mass impacts galaxy rotation curves. Then when the Planck core emits via jet or rupture, galaxy-local inflation occurs, causing anomalous redshift. Lots of ways this mechanism could change our understanding and perhaps solve the big issues like dark matter.
An interesting study of a very early disk galaxy. The study itself is behind Nature’s paywall and I cannot find any preprint on Arxiv, but here is the article about it in Nature
Here’s the arXiv version:
This sounds like another example of the “Impossibly Early Galaxy Problem” – a name that hasn’t caught on, but it does seem to be a real issue. I would note that this is exactly the sort of thing predicted by Bob Sanders 20 years ago: https://arxiv.org/abs/astro-ph/9710335. See also http://astroweb.case.edu/ssm/mond/LSSinMOND.html