The MHONGOOSE survey of atomic gas in and around galaxies


I have been spending a lot of time lately writing up a formal paper on high redshift galaxies, so haven’t had much time to write here. The paper is a lot more involved than “I told you so,” but yeah, I did. Repeatedly. I do have a start on a post on self-interacting dark matter that I hope eventually to get back to. Today, I want to give a quick note about the MHONGOOSE survey. But first, a non-commercial interruption.


Triton Station joins Rogue Scholar

In internet news, Triton Station has joined Rogue Scholar. The blog itself hasn’t moved; Rogue Scholar is a community of science blogs. It provides some important capabilities, including full-text search, long-term archiving, DOIs, and metadata. The DOIs (Digital Object Identifiers) were of particular interest to me, as they have become the standard for identifying unique articles in regular academic journals now that these have mostly (entirely?) gone on-line. I had not envisioned ever citing this blog in a refereed journal, but a DOI makes it possible to do so. Any scientists who find a post useful are welcome to make use of this feature. I’m inclined to follow the example of JCAP and make the volume, page format be year-month, day (YYMM, DD), which comes out as Triton Station (2022), 2201, 03 in the standard astronomy journal format. I do not anticipate continuing to publish into the twenty-second century, so no need for YYYYMM, Y2K experience notwithstanding.

For everyone interested in science, Rogue Scholar is a great place to find new blogs.


MHONGOOSE

In science news, the MHONGOOSE collaboration has released its big survey summary paper. Many survey science papers are in the pipeline. Congratulations to all involved, especially PI Erwin de Blok.

Erwin was an early collaborator of mine who played a pivotal role in measuring the atomic gas properties of low surface brightness galaxies, establishing the cusp-core problem, and showing that low surface brightness galaxies are dark matter dominated (or at least evince large mass discrepancies, as predicted by MOND). He has done a lot more since then, among other things playing a leading role in THINGS, the large VLA survey of nearby galaxies. In astronomy we’re always looking forward to the next big survey – it’s a big universe; there’s always more out there. So after THINGS he conceived and began work on MHONGOOSE. It has been a long road tied to the construction of the MeerKAT array of radio telescopes – a major endeavor on the road to the ambitious Square Kilometer Array.

I was involved in the early phases of the MHONGOOSE project, helping to select the sample of target galaxies (it is really important to cover the full dynamic range of galaxy properties, dwarf to giant) and define the aspirational target sensitivity. HI observations often taper off below a column density of 10^20 hydrogen atoms per cm^2 (about 1 solar mass per square parsec). With work, one can get down to a few times 10^19 cm^-2. We want to go much deeper to see how much farther out the atomic gas extends. It was already known to go farther out than the stars, but how far? Is there a hard edge, or just a continuous fall-off?
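
To make the quoted conversion concrete, here is a minimal sketch of the arithmetic with standard physical constants; it is my own illustration, not anything from the survey paper:

```python
# Back-of-the-envelope conversion of an HI column density to a mass surface density.
M_H = 1.6736e-24       # mass of a hydrogen atom (grams)
G_PER_MSUN = 1.989e33  # grams per solar mass
CM_PER_PC = 3.086e18   # centimeters per parsec

def hi_surface_density(N_HI):
    """Convert an HI column density (atoms per cm^2) to M_sun per pc^2."""
    return N_HI * M_H / G_PER_MSUN * CM_PER_PC**2

print(hi_surface_density(1e20))  # ~0.8 M_sun/pc^2, i.e., roughly 1
print(hi_surface_density(3e19))  # ~0.24 M_sun/pc^2
```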

We also hope to detect new dwarf galaxies that are low surface brightness in HI. There could, in theory, be zillions of such things lurking in all the dark matter subhalos that are predicted to exist around big galaxies. Irrespective of theory, are there HI gas-rich galaxies that are entirely devoid of stars? Do such things exist? People have been looking for them a long time, and there are now many examples of galaxies that are well over 95% gas, but there always seem to be at least a few stars associated with them. Is this always true? If we have cases that are 98, 99% gas, why not 100%? Do galaxies with gas always manage to turn at least a little of it into stars? They do have a Hubble time to work on it, so it is also a question why there is so much gas still around in these cases.

And… a lot of other things, but I don’t want to be here all day. So just a few quick highlights from the main survey paper. First, the obligatory sensitivity diagram. This shows how deep the survey reaches (lower column density) as a function of resolution (beam size). You want to see deeply and you want to resolve what you see, so ideally both of these numbers would be small. MHONGOOSE undercuts existing surveys, and is unlikely to be bettered until the full SKA comes on-line, which is still a long way off.

Sensitivity versus resolution in HI surveys.

And here are a couple of individual galaxy observations:

Optical images and the HI moment zero, one, and two maps. The moment zero map of the intensity of 21 cm radiation tells us where the atomic gas is, and how much of it there is. The moment one map is the velocity field from which we can construct a rotation curve. The second moment measures the velocity dispersion of the gas.
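
For anyone wondering what those moments actually are, here is a minimal sketch of how moment 0, 1, and 2 maps can be computed from a spectral-line data cube. It is schematic (uniform channel spacing, no source masking or beam corrections), not the MHONGOOSE pipeline:

```python
import numpy as np

def moment_maps(cube, velocities):
    """Compute moment 0, 1, and 2 maps from a data cube.

    cube       : array of shape (nchan, ny, nx), intensity in each channel
    velocities : array of shape (nchan,), channel velocities in km/s
    """
    dv = np.abs(velocities[1] - velocities[0])        # channel width
    mom0 = cube.sum(axis=0) * dv                      # integrated intensity: where the gas is
    weights = np.clip(cube, 0, None)                  # crude guard against negative noise
    total = weights.sum(axis=0) + 1e-30               # avoid division by zero in empty pixels
    v = velocities[:, None, None]
    mom1 = (weights * v).sum(axis=0) / total          # intensity-weighted mean velocity
    mom2 = np.sqrt((weights * (v - mom1) ** 2).sum(axis=0) / total)  # velocity dispersion
    return mom0, mom1, mom2
```

Real reductions apply source masks and beam corrections before taking these sums, but the definitions are just these intensity-weighted velocity moments.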

These are beautiful data. The spiral arms appear in the HI as well as in starlight, and continue in HI to larger radii. The outer edge of the HI disk is pretty hard; there doesn’t seem to be a lot of extra gas at low column densities extending indefinitely into the great beyond. I’m particularly struck by the velocity dispersion of NGC 1566 tracking the spiral structure: this means the spiral arms have mass, and any stirring caused by star formation is localized to the spirals where much of the star formation goes on. That’s natural, but the surroundings seem relatively unperturbed: feedback is happening locally, but not globally. The velocity field of NGC 5068 has a big twist in the zero velocity contour (the thick line dividing the red receding side from the blue approaching side); this is a signature of non-circular motion, probably caused in this case by the visible bar. These are two-dimensional examples of Renzo’s rule (Sancisi’s Law), in which features in the visible mass distribution correspond to features in the kinematics.

I’ll end with a quick peek at the environments around some MHONGOOSE target galaxies:

Fields where additional galaxies (in blue) are present around the central target.

This is nifty on many levels. First, some (presumptively satellite) dwarf galaxies are detected. That in itself is a treat to me: once upon a time, Renzo Sancisi asked me to smooth the bejeepers out of the LSB galaxy data cubes to look for satellites. After much work, we found nada. Nothing. Zilch. It turns out that LSB galaxies are among the most isolated galaxy types in the universe. So that we detect some things here is gratifying, even in targets that are not LSBs.

Second, there are not a lot of new detections. The halos of big galaxies are not swimming in heretofore unseen swarms of low column density gas clouds. There can always be more at sensitivities yet unreached, but the data sure don’t encourage that perspective. MHONGOOSE is sensitive to very low mass gas clouds. The exact limit is distance-dependent, but a million solar masses of atomic gas should be readily visible. That’s a tiny amount by extragalactic standards, about one globular cluster’s worth of material. There’s just not a lot there.
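
That mass estimate follows from the standard relation between HI mass and integrated 21 cm flux for optically thin gas; here is a minimal sketch with illustrative numbers, not the survey’s actual noise limits:

```python
def hi_mass(distance_mpc, flux_jy_kms):
    """HI mass in solar masses from the integrated 21 cm flux (optically thin gas)."""
    return 2.36e5 * distance_mpc**2 * flux_jy_kms

def flux_for_mass(distance_mpc, mass_msun=1e6):
    """Integrated flux (Jy km/s) corresponding to a given HI mass at a given distance."""
    return mass_msun / (2.36e5 * distance_mpc**2)

# A million solar masses of HI at 10 Mpc corresponds to only ~0.04 Jy km/s.
print(flux_for_mass(10.0))
```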

Disappointing as the absence of zillions of new detections may be discovery-wise, it does teach us some important lessons. Empirically, galaxies look like island universes in gas as well as stars. There may be a few outlying galaxies, but they are not embedded in an obvious cosmic network of ephemeral cold gas. Nor are there thousands of unseen satellites/subhalos suddenly becoming visible – at least not in atomic gas. Theorists can of course imagine other things, but we observers can only measure one thing at a time, as instrumentation and telescope availability allows. This is a big step forward.

The Eclipse Experience


We will return to our usual programming shortly. But first, a few words on the eclipse experience last Monday. It. Was. Awesome.

That’s a few words, so Mission Accomplished.


That’s really all I had planned to say. However, I find I am still giddy from this momentous event, so will share my experience of the day, such as words can humbly convey.

Prelude

We had good weather here in Cleveland, with the temperature reaching the upper 60s Fahrenheit. It was cloudy in the morning and many people were concerned about the prospects to see the eclipse. I was not. Having spent a lifetime as an observer with many nights at mountaintop observatories wishing for clouds to go away, and obsessively refreshing satellite maps to try to judge when they might do so, I knew there was no point in fretting about it at this juncture. Either it would clear, or clear not.

To indulge in a little superstition, I was more concerned that the date of the eclipse coincided with the home opener for the Cleveland Guardians. Opening day is always a happy, celebratory time, with people jamming the ballpark to enjoy the return of baseball and mark the coming halcyon days of summer. The weather here on opening day inevitably seems to repay that optimism with cold, clouds, and various forms of precipitation. Opening day weather is always miserable. I cannot think of a single home opener in the past quarter century for which I wanted to be in the stadium, and quite a few for which I was grateful not to have been. In local experience, a few inches or more of snow is as likely as a nice day.

A typical April in Cleveland. This was April 21, 2021.

This is a recurring theme.

April 7, 2017, seven years and a day before the eclipse.

I could go on – I have lots of epic snow pictures from Aprils past. We got a full foot of snow one Easter. But April also brings daffodils, so not all hope is extinguished.

Daffodil Hill in Lakeview Cemetery, April 27, 2020. This is also typical.

Given this sterling record of opaque skies and outright blizzards, many people traveled to Texas to see the eclipse. The climate statistics for clear skies are better there than here, and indeed, better than most other places along the path of totality, making it the destination of choice for serious eclipse chasers. But weather is notoriously fickle: climate is what you expect; weather is what you get. The day of the eclipse, it was cloudy in Texas. You make your bets and you roll the dice.

The Build Up

Here in Cleveland, the early clouds had cleared to a brilliant blue by noon. At this time there was a special lunch followed by a panel discussion that I served on, together with Prof. Paul Iverson, an expert on ancient culture and the Antikythera Mechanism, a remarkably advanced analog computer for accurately predicting planetary motions including eclipses, Prof. Aviva Rothman, an historian of the Scientific Revolution and Kepler in particular, and Prof. Chris Zorman, an engineer involved in conducting radio experiments on the reaction of the ionosphere to the passage of the moon’s shadow. I opened by describing what was happening physically, and closed with a description of what to expect to see. There were good questions from the audience; perhaps my favorite being whether we could extend the eclipse experience by chasing it in a plane. Yes, but the shadow sweeps past at over a thousand miles per hour, so to keep up you’d have to go supersonic, and totality could only be extended for as long as your fuel lasted. Attendance was great but limited* to the largest ballroom in the Tinkham Veale University Center; I’m told it filled within minutes of registration being opened. Duh – I had been trying to impress the enormity of this event on the powers that be on campus for years without success. Most people seem incapable of thinking that far ahead. Still, they did eventually get on it, and did a good job organizing everything, albeit at a predictably desperate clip for the last few weeks. The campus event turned out well, if for a rather smaller audience than demand might have had it.

People gather on Freiberger Field to witness the eclipse. The little picket fence for the VIPs can be seen, with the Tinkham Veale University Center behind.

By design, the panel ended right as the partial eclipse started. Freiberger Field next door to Tinkham Veale had been designated for eclipse viewing, complete with portapotties and, bemusingly, a little fenced-off area$ for us VIPs who had been at the lunch. I preferred to watch it with my colleagues and students who had set up a small telescope with a projection screen nearby.

Students with one of the department’s portable telescopes. The partial eclipse has progressed far enough to begin to give things a sepia tone. Note the high-tech cardboard aperture reducer that Dr. Bill Janesh rigged for persistent observation of the sun, which can overheat optical elements.

Many of our students are wearing the t-shirt I designed to commemorate the occasion.

Commemorative eclipse t-shirt. The location and time are given in words and again as latitude, longitude, and Julian date. Trivia points for those who can identify the inspiration of the font in the middle lines. (Not the Case Astronomy part – I did that freehand.)

First, though, I walked back to my office to drop off the sports coat I had donned for the panel, as it had become downright warm in the sun. It would take a bit over an hour to reach totality, so I had plenty of time to walk across campus and back. It also gave me something to do besides mill about in anticipation: I looked up occasionally on the walk over, checking the progress of the moon through eclipse glasses; it was devouring the sun one bite at a time as I casually crossed campus.

I grabbed a bundle of eclipse glasses from my office. There were plenty at Freiberger, but we had started stocking up on them before that had been arranged, and what else were we going to use them for? As soon as I stepped outside, I encountered a couple of students who needed them. Immediately after that, a visitor from Pittsburgh who was originally from Armenia asked where he could get paper supplies to make a pinhole camera. I handed him a pair of eclipse glasses and pointed him towards the nearby FedEx, with directions on to the campus bookstore should that prove helpful. By this time, I could tell that the light was starting to dim.

On the way back, the weather worsened. Some murk started to roll in, and for a bit it looked like it might become completely opaque. But the clouds remained a thin veil of cirrus, which provided a rainbow halo completely encircling the sun. A few commercial jetliners left long, fat contrails+ whose shadows could be seen cast on the cirrus at lower elevation.

Rainbow halo around the sun in cirrus clouds. A few contrails cast shadows on the clouds. The sun is more than half obscured by the moon at this point, but my phone’s camera can only see that it is still really bright.

At this point, it started to cool off. One could viscerally feel the effect of the shade cast by the moon. The temperature dropped 7°F, then rebounded some afterwards. I did not regret having abandoned my jacket – it was still a pleasant spring day – but probably would have put it back on for a bit if I still had it with me. I could hear some mild grumbles in the crowd from people wishing they had one. You could definitely feel the difference as a mild breeze picked up.

The partial eclipse as totality nears, as seen projected by the small telescope seen above.

Partial Eclipses Past

Partial eclipses are just that: partial. In 2014 my younger daughter and I went to the roof of a parking structure to watch one that reached about 30% coverage. And indeed, it looked like a clean bite had been taken out of the sun. But if you didn’t know when to look (and have appropriate eye protection), you wouldn’t notice. The sun was still plenty bright, and there was no perceptible change in the environment. Indeed, we noticed that people didn’t notice. We had come prepared with welder’s glass, and offered a glimpse to passers-by. No one took us up on it. Indeed, every single one gave us a wide berth, as if we were obviously crazy people.

On 21 August 2017, there was a major eclipse for which the path of totality passed several hundred miles to our south. We saw about 80% coverage in Cleveland on that occasion. I figured that people who were serious about it would have left town to see it. However, there had been a lot of hype about this eclipse, so I expected that, come the day of, a lot of folks would be calling up us astronomers asking what’s up.

In anticipation, we (the CWRU Department of Astronomy) had stocked up on eclipse glasses. The day of, we sallied forth to entertain those who found a sudden interest in astronomical events. This included many people from both on and off campus, but especially new freshmen – it happened during Discovery Week, which is freshman orientation here. On that occasion, I had difficulty persuading the people running orientation that they needed to account for the eclipse in their scheduling. They were having none of it: they had a very busy schedule, it was important that the freshmen attend all orientation events, and if we wanted to host an astronomical event, we should schedule it for some other time. Eventually I had to appeal to the provost, pointing out that students would have heard of the eclipse and were likely to walk out of whatever orientation event was running at the time, so it would be better to embrace the event than pretend it wasn’t happening. So we did.

Dr. Paul Harding (left) and Prof. Chris Mihos (right) help the gathered crowd witness the partial eclipse of 2017. Not pictured: Charley Knox, who had opened the 9″ refracting telescope bequeathed to us by Warner & Swasey to visitors. The telescope is perched atop a campus building; a line to use it promptly formed down five flights of stairs and out onto the quad.

The weather in August 2017 was clear and hot. While my colleagues operated the telescopes, I ran around handing out eclipse glasses and playing carnival barker. This included announcing the time of maximum coverage, which this time was enough to cause a perceptible dimming. It was weird – it wasn’t like a cloud blocking the sun; indeed, the sky was completely clear. Everything just seemed… tuned down. Nature stilled. The light gave everything a sepia tone; sort of a golden hour from above rather than from the horizon.

At this point, anything with a small hole acted as a pinhole camera to project an image of the partial eclipse. A colander works quite well for this. Heck, even the leaves of the trees got into it.

Images of the 2017 partial eclipse cast by the leaves of the trees acting as pinhole cameras.

We were lucky with the weather. We were hot and dehydrated, but we got through it all. After we had packed up but before I could even walk back to the department, storm clouds gathered and the heavens opened with torrential rain: a classic summer thunderstorm. I was happy to wait it out in Tinkham Veale, quite exhausted. I realized then that we couldn’t pull off a similar event for a full eclipse, which would have exponentially more interest. The partial eclipse was all we could manage, and the department is half the size now that it was then#.

Totality Approaches

The light level at the maximum of the 2017 partial eclipse is where we rejoin the 2024 eclipse. We had reached a point that was uncharted territory for me. With some help from a filter, the phone camera could now kinda sorta make out that something was happening.

As totality neared, even a phone camera could discern that the sun was no longer round.

The light obtained the same weird, bright-yet-dim sepia tone I recalled from 2017. It continued to darken, and began to look like sunset on the horizon, only all 360° around. Then the umbral shadow swept in, the cirrus clouds above marking its path. We were in a giant dark shadow, with daylight perceptible at a distance all around us. But for us, it got dark.

I watched the last limb of the sun disappear behind the moon through the eclipse shades, the thin horns of light contracting rapidly. It broke into segments, atmospheric seeing warring with lunar topography. When I could see no more, I took them off just in time to catch the diamond ring effect as the sun disappeared entirely. The total eclipse had arrived.

Totality

I’m pretty jaded. I’ve worked at major observatories in Arizona and New Mexico, in the Chilean Andes, on La Palma in the Canary Islands. I’ve traveled the world debating deep matters of cosmology and philosophy with renowned scientists from all over, each brilliant in their own way, some the most admirable people you could hope to meet; others, not so much. I’ve seen partial lunar eclipses, total lunar eclipses, the 2004 and 2012 transits^ of Venus, and partial solar eclipses. At this point, I’m very hard to impress. But I had never seen a total solar eclipse.

I was gobsmacked.

Totality in Cleveland lasted for 3 minutes and 49 seconds. Venus is visible below the sun/moon. Jupiter was also visible on the opposite side from the sun; some people spotted Saturn near the horizon. But the true star of the show was the solar corona.

Words really can’t do justice to a full eclipse. Totality is just stunning. The disk of the moon completely covers the body of the sun, and lurks there for a few minutes. This natural occultation experiment reveals the solar corona. Always there but never otherwise seen, the corona shimmers like a phantasm of white silk around the dark circle of the moon; it is so mesmerizing you’d think it overdone if it were a special effect. I was enthused to make out the small, pink glow of a solar prominence near the bottom of the disk from our perspective: a band of plasma entrained in the magnetic field of sunspots like cosmic iron filings that glow in the pinkish Balmer line of hydrogen. Venus and Jupiter were easily visible; some folks saw Saturn as well. Saturn was over towards the horizon; I didn’t look that far aside for 3 minutes and 49 seconds. Pictures really don’t do it justice. They seem ill-suited to illustrate the extent of the corona without overexposing the prominences.

You literally had to see it to appreciate it.

Fade Out

People cheered as totality started, and again as it came to an end. Daylight returned, albeit the weird dim sepia light of the partial eclipse. What had seemed stunning in its own right a few short minutes before now seemed almost mundane. We talked and milled about and shared a general sense of well-being stemming from bearing common witness to a remarkable event that is both phenomenally rare and stunningly beautiful, a shared feeling that reminds me of Melville’s words:

Oh! my dear fellow beings, why should we longer cherish any social acerbities, or know the slightest ill-humor or envy!

Melville, in Moby Dick

As the light trended back to normal, we decided to pay a visit to the rooftop telescope, where Bill and Charley had watched the eclipse. We were joined by roving groups of astronomy students and alumni, and found Charley in the dome of the 9″ with a projector in place, the brass of the eyepiece warm to the touch.

Charley Knox in his element. The moon is receding, but still blocking a portion of the sun.

There was a communal feeling of satisfaction and general well-being that I can describe no better than totality itself. Classes had been cancelled for the day, and rightly so – nothing could be more educational, nothing could match this experience, and there was no going back inside afterwards.

As the moon receded, one could see sunspots in the projection from the 9″. That was true during the 2017 partial eclipse as well; I share an image from that time as it shows the sunspots most clearly:

The sun with sunspots, regions of magnetic disturbance that appear dark against the surface of the sun by virtue of being slightly less hot than the surrounding surface. The moon recedes at lower right.

For perspective, recall the spectacular coincidences that make eclipse observations possible. The sun is vastly larger than the moon, but also farther away. Yet they appear very nearly the same angular size in the sky, with the greater distance to the sun relative to the moon almost exactly right to balance the greater size of the sun relative to the moon. It didn’t have to be that way. Indeed, it seems phenomenally unlikely that it should be so. That it is so makes total eclipses extraordinarily rare, as the point of the conical shadow of the moon only just reaches the surface of the earth, so only a small spot is in eclipse at a given time. Indeed, the slight eccentricity of the moon’s orbit means that sometimes the point of the umbra doesn’t even reach the surface, and we get an annular eclipse in which the sun is mostly but not quite fully covered. We were lucky to get nearly four minutes of totality, but the small size of the shadow cast by the moon on the Earth by itself guarantees that eclipses are rare. Add in that the moon’s orbit is tipped about 5 degrees to the plane of the ecliptic (the orbit of the Earth around the sun) and that none of the relevant periods (day, lunar month, year) are integer multiples of one another, and the perfect alignment (syzygy) required for an eclipse rarely repeats over the same spot. But it does happen, and we humans noticed it. By the time of the ancient Babylonians, the lengthy periods on which eclipses were likely to repeat were known. They lacked sufficiently accurate data to predict exactly when and where an eclipse would occur, but they knew when it was eclipse season – a sort of astronomical weather forecast: scattered clouds with a chance of eclipses. These events made a big impression on us; it would have taken careful observations conveyed over many generations to work this out.
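
The angular-size coincidence itself is easy to check with a couple of lines of arithmetic; the radii and distances below are mean values, and it is the variation about those means that decides whether a given eclipse is total or annular:

```python
import math

def angular_diameter_deg(radius_km, distance_km):
    """Apparent angular diameter (degrees) of a sphere of given radius at a given distance."""
    return 2 * math.degrees(math.atan(radius_km / distance_km))

# Mean radii and distances; both vary because neither orbit is perfectly circular.
sun = angular_diameter_deg(696_000, 149.6e6)  # ~0.53 degrees
moon = angular_diameter_deg(1_737, 384_400)   # ~0.52 degrees
print(sun, moon)
```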

Eclipses on planet Earth are quite remarkable. We could have had a bigger moon or a smaller moon or lots of moons or no moon at all. But we got a moon that is exactly the right size at exactly the right distance to almost exactly cover the disk of the sun, and reveal to the human eye the corona that is otherwise lost in the glare of the solar photosphere. This coincidence in space is remarkable enough, but it is also a coincidence in time. The moon helps raise the tides on the Earth, and the tides pull back against the moon. The net effect is a slow transfer of angular momentum from the spin of the Earth to the orbit of the moon. As a consequence, the moon is slowly getting farther away (a few cm/year) and the length of the day is gradually getting longer, having been about 22 hours a mere 600 million years ago, around the time of the Cambrian explosion when multicellular life proliferated. Consequently, the moon would have been a bit closer and appeared somewhat larger on the sky for early land animals; dinosaurs would have seen somewhat more frequent eclipses of longer duration, but would have had a worse view of the corona and prominences, as the larger moon would have blocked more of the emission from near the surface of the sun.

The coincidences that make our current eclipse experience are rather special in both time and space. Make of that what you will.


*There had been so many preparatory emails that the precise location of the discussion panel was lost in the hectic babble. I remembered it was in Tinkham Veale, which is big, but not so big that I was worried about finding the right room. That would surely be easier than finding it in the enormous email thread. When I arrived, I figured the most likely location was the ballroom on the second floor, and indeed, I found the stairs blocked by a sign 2ND FLOOR CLOSED FOR PRIVATE EVENT. Bypassing this, I was greeted by enhanced campus security and a person who asked my name. Scrolling a handheld device, she got that concerned look officious people get when you’re not on the list. She politely checked the spelling of my name, checked again, then apologized that I wasn’t on the list. As this was going on, I realized this must be a list for people who registered to hear the panel, so I said “I’m the astronomer ON the panel.” Her eyes got big. “Oh!” she said. “Come right in…”

So, the moral of that story is that you can always talk your way into an exclusive event by claiming to be an astronomer – provided, of course, that it is a very specialized subset of exclusive events about astronomy.

$I found it bemusing because it was just a tiny picket fence set up in the midst of a much larger field. There was nothing special or meritorious about the location, so it was just exclusionary, which is a thing I’m generally against.

+Contrails like this are usually a bad sign for observational astronomy, being a harbinger of bad seeing as well as high humidity. In this case, it was just part of the show – and a very small part at that. Mostly I pitied the fools who had paid to confine themselves inside a metal tube at 10 km altitude while the most amazing of celestial events was going on.

#In 2017, the academic staff of the Department of Astronomy consisted of five faculty and one research scientist. By 2019 attrition had reduced us to three faculty. That no hires have been made since then is a long story of administrative incompetence and malfeasance.

^I almost missed the 2004 transit, which was conveniently observable in Europe but which was nearly over by the time the sun rose in the U.S. Not only did one have to get up at the literal crack of dawn, but that meant the sun was on the eastern horizon. The only way I could find to witness it was to hold a pair of binoculars at a window in our attic and project the image onto the wall.

The 2012 transit was more friendly to observers in North America, occurring mid-day. I set up a small telescope in front of my house; the neighbors took turns holding the projection screen for all to see. Many stayed for hours to follow the gradual progress of Venus against the face of the sun.

Venus appears as the small, dark circle at the top left of the disk of the sun during its 2012 transit.

I hope you caught one of these transits yourself. The next one is in December 2117.

Eclipse Day: 8 April 2024


The day of doom approaches, and the moon is cleft in half!

Ayah al-Qamar 54:1

Perhaps the most compelling astronomical phenomenon accessible to a naked-eye observer is a total eclipse of the sun. These rare events have always fascinated us, and often terrified us. It is abnormal and disturbing for the sun to be blotted from the sky!

A solar eclipse will occur on Monday, 8 April 2024. A partial eclipse will be visible from nearly every part of North America. The path of totality will sweep from Mexico through Texas, the Midwest, New England, and across the maritime provinces of Canada. If you are anywhere where this event is visible, go out, don a pair of eclipse glasses, and look up. This is especially true in the path of totality. Partial eclipses are cool. Total eclipses are so much more: they have inspired science, art, and literature, with descriptions frequently evincing the deep emotion of profound religious experience*.

The American Astronomical Society has posted lots of useful information, including a map of the path of totality and advice about proper eclipse glasses. These are super-cheap, but that doesn’t preclude bad actors from selling ineffective versions. Simple rule of thumb: don’t look straight at the sun. A proper pair of eclipse glasses enables you to do so comfortably. If it hurts, stop+: close your eyes and look away. Listen to the messages from your pain receptors.

If you can get to the path of totality, it is worth doing so. Expect crowds and plan accordingly. This is a draw of epic proportions, and for many will be the only practical opportunity of their lifetime. Totality is brief, only a few minutes, so be sure to be in the right place at the right time$.

The AAS provides a good list of the phenomena to expect. Most of the action is around and during totality. The partial eclipse is a long (hour+) build up to the brief main show (a few minutes of totality). In addition to seeing the corona, the diamond ring and Baily’s Beads effects, this should be a good time to see solar prominences as the sun is nearing the maximum in its eleven year sunspot cycle. What we will see is unknown, as this is the solar analog of weather phenomena. The forecast calls for a high chance of prominences, but that doesn’t guarantee they’ll show.

One last thing I’ll note is that all the planets are relatively close to the sun on the sky at present, and some might be visible during the eclipse. Venus and Jupiter will be most prominent and easy to spot. Uranus and Neptune, not so much. The others maybe. Also present is Comet 12P/Pons-Brooks (aka the devil comet) in the vicinity of Jupiter. It is quite a temporal coincidence for this comet with a 71 year period to be in the inner solar system during this eclipse. It is unlikely to put on much of a show: comets are notoriously fickle, and the odds are that it will be invisible to the naked eye. But it is there, so keep a weather eye out, just in case.

All the planets and even a comet will be in the sky during the eclipse.

Now go forth this Monday and witness one of nature’s greatest marvels.


*There are many myths and monsters associated with eclipses. Until the light pollution of recent times, the motions of the sky were very much in our faces. People cared deeply about these things. They were well aware of more than the daily rising and setting of the sun. The phases of the moon, the patterns in the stars, and the wandering of the planets were obvious to everyone who looked up. People learned long ago to keep close track of these events, even those as rare as eclipses. Some of the earliest tablets unearthed from ancient Babylon are elaborate tables of eclipse seasons recognizing lengthy periods like the roughly 18-year Saros cycle. One doesn’t just up and write down this sort of knowledge on a whim one day, as it requires centuries of careful observation and record keeping to recognize the recurrence of events with such long periods, especially for solar eclipses that do not visit exactly the same spot every exeligmos cycle. I suspect there was a strong oral tradition of astronomical record keeping for long ages before we learned to write. Astronomy is the oldest science: this was important knowledge to acquire, preserve, and pass on.

The ancients managed to deduce cycles of eclipse seasons, so they could forecast the chance for eclipses, but only with the same precision as a weather forecast: there is a chance of rain, but we can’t be sure exactly when and where. Now we have measured planetary motions accurately enough and understand the geometry of what is going on so we can forecast exactly when and where eclipses will occur. This is a staggering achievement of human intellect and communal effort.

+There are a lot of misconceptions about the dangers of eclipse viewing. Looking straight at the sun is uncomfortable and dangerous at any time. The only thing special about a total eclipse is that it becomes truly dark for a few minutes, and your pupils start to expand to adapt to the darkness. Consequently, the most dangerous moment is at the end of totality, when your eyes have grown wide and the sun suddenly reappears. Be sure to don your eclipse glasses or look away right before the sun reappears; you don’t want to look straight into the sun at that moment.

Time and Date is a great resource for getting the timing of the eclipse for your specific location, accurate to within a few seconds.

$As with any astronomical observation, no guarantee is made that the skies will be clear of clouds. I have spent many a night at observatories wishing for the sky to clear and obsessively refreshing the satellite maps to discern when it might do so. It doesn’t help – it’s almost as if nature doesn’t care that we want to witness one of its greatest displays. So my advice is to go where you can and don’t sweat the weather forecast. Either the sky cooperates or it doesn’t.

I’ve agreed to serve on a discussion panel about the eclipse on campus, so I’ll be here in Cleveland. We are right in the path of totality, but the weather statistics here are… not good. To make matters worse for the superstitious, April 8 is also the home opener for the Cleveland Guardians. Opening day is always a joyous time with a packed stadium, but the weather is inevitably miserable. Nevertheless, all we need is a brief opening in the clouds at just the right time. At an observatory we would call that a sucker hole: a gap in the clouds just big enough to get the inexperienced observer to run around prepping the instrument and the telescope – an intense amount of work – and open up to observe just in time for the clouds to cover the sky again. Come Monday, I’ll happily accept a well-timed sucker hole.

It is not linear


I just got back from a visit to the Carnegie Institution of Washington where I gave a talk and saw some old friends. I was a postdoc at the Department of Terrestrial Magnetism (DTM) in the ’90s. DTM is so-named because in their early days they literally traveled the world mapping the magnetic field. When I was there, DTM+ had a small extragalactic astronomy group including Vera Rubin*, Francois Schweizer, and John Graham. Working there as a Carnegie Fellow gave me great latitude to pursue whatever science I wanted, with the benefit of discussions with these great astronomers. After my initial work on low surface brightness galaxies had brought MOND to my attention, much of the follow-up work checking all (and I do mean all) the other constraints was done there, ultimately resulting in the triptych of papers showing that the bulk of the evidence available at that time favored MOND over the dark matter interpretation.

When I joined the faculty at the University of Maryland in 1998, I saw the need to develop a graduate course on cosmology, which did not exist there at that time. I began to consider how cosmic structure might form in MOND, but was taken aback when Simon White asked me to referee a paper on the subject by Bob Sanders. He had found much of what I was finding: that there was no way to avoid an early burst of speedy galaxy formation. I had been scooped!

It has taken a quarter century to test our predictions, so any concern about who said what first seems silly now. Indeed, the bigger problem is informing people that these predictions were made at all. I had a huge eye roll last month when Physics Magazine came out with

February 12, 2024
NEWS FEATURE
JWST Sees More Galaxies than Expected
February 9, 2024

The new JWST observatory is revealing far more bright galaxies in the early Universe than anyone predicted, and astrophysicists have more than one explanation for the puzzle.

Physics Magazine

Far more bright galaxies in the early Universe than anyone predicted! Who could have predicted it? I guess I am anyone.

Joking aside, this is a great illustration of the inefficiency of scientific communication. I wrote a series of papers on the subject. I wasn’t alone; so did others. I gave talks about it. I’ve emphasized it in scientific reviews. My papers are frequently cited, ranking in the top 2% among the top 2% across all sciences. They’re cited by prominent cosmologists. Heck, I’ve even blogged about it. And yet, it comes as such a surprise that it couldn’t have possibly happened, to the extent that no one bothers to check what is in the literature. (There was a similar sociology around the prediction of the CMB second peak. It didn’t happen if we don’t look.)

So what did the Physics Magazine article talk about? More than one explanation, most of which are the conventionalist approaches we’ve talked about before – make star formation more efficient, or adjust the IMF (the mass spectrum with which stars form) to squeeze more UV photons out of fewer baryons. But there is also a paper by Sabti et al. that basically asserts “this can’t be happening!” which is exactly the point.

Sabti et al. ask whether one can boost the amplitude of structure formation in a way that satisfies both the new JWST observations and previous Hubble data. The answer is no:

We consider beyond-ΛCDM power-spectrum enhancements and show that any departure large enough to reproduce the abundance of ultramassive JWST candidates is in conflict with the HST data.

Sabti et al.

At first, this struck me as some form of reality denial, like an assertion that the luminosity density could not possibly exceed LCDM predictions, even though that is exactly what it is observed to do:

The integrated UV luminosity density as a function of redshift from Adams et al. (2023). The data exceed the expectation for z > 10, even with the goal posts in motion.

On a closer read, I realized my initial impression was wrong; they are making a much better argument. The star formation rate is what is really constrained by the UV luminosity, but if that is attributed to stellar mass, you can’t get there from here – even with some jiggering of structure formation. That appears to be correct, within the framework of their considerations. Yet an alteration of structure formation is exactly what led to the now-corroborated prediction of Sanders (1998), so something still seemed odd. Just how were they altering it?

It took a close read, but the issue is in their equation 3. They allow for more structure formation by increasing the amplitude. However, they maintain the usual linear growth rate. In effect, they boost the amplitude of the linear dashed line in the left panel below, while maintaining its shape:

The growth rate of structure in CDM (linear, at left) and MOND (nonlinear, at right).

This is strongly constrained at both higher and lower redshifts, so only a little boost in amplitude is possible, assuming linear growth. So what they’ve correctly shown is that the usual linear growth rate of LCDM cannot do what needs to be done. That just emphasizes my point: to get the rapid growth we observe in the narrow time range available above redshift ten, the rate of growth needs to be nonlinear.

It’s not linear from Star Trek DS9.

Nonlinearity is unavoidable in MOND – hence the prediction of big galaxies at high redshift. Nonlinearity is a bear to calculate, which is part of the reason nobody wants to go there. Tough noogies. They teach us in grad school that the early universe is simple. It is a mantra to many who work in the field. I’m sorry, did God promise this? I understand the reasons why the early universe should be simple in standard FLRW cosmology, but what if the universe we live in isn’t that? No one has standing to promise that the early universe is as simple as expected. That’s just a fairy tale cosmologists tell their young so they can sleep at night.


+DTM has since been merged with the Geophysical Laboratory to become the Earth and Planets Laboratory. These departments shared the Broad Branch Road campus but maintained a friendly rivalry in the soccer Mud Cup, so named because the first Mud Cup was played on a field that was such a quagmire that we all became completely covered in mud. It was great fun.

*Vera was always adamant that she was not a physicist, and yet a search returns a thumbnail that labels her as one, even though the Wikipedia article itself does not (at present) make this spurious “and physicist” assertion.

The evolution of the luminosity density


The results from the high redshift universe keep pouring in from JWST. It is a full time job, and then some, just to keep track. One intriguing aspect is the luminosity density of the universe at z > 10. I had not thought this to be problematic for LCDM, as it only depends on the overall number density of stars, not whether they’re in big or small galaxies. I checked this a couple of years ago, and it was fine. At that point we were limited to z < 10, so what about higher redshift?

It helps to have in mind the contrasting predictions of distinct hypotheses, so a quick reminder. LCDM predicts a gradual build-up of the dark matter halo mass function that should presumably be tracked by the galaxies within these halos. MOND predicts that galaxies of a wide range of masses form abruptly, including the biggest ones. The big distinction I’ve focused on is the formation epoch of the most massive galaxies. These take a long time to build up in LCDM: it typically takes half a Hubble time (~7 billion years; z < 1) for a giant elliptical to assemble half its final stellar mass. Baryonic mass assembly is considerably more rapid in MOND, so this benchmark can be attained much earlier, even within the first billion years after the Big Bang (z > 5).

In both theories, astrophysics plays a role. How does gas condense into galaxies, and then form into stars? Gravity just tells us when we can assemble the mass, not how it becomes luminous. So the critical question is whether the high redshift galaxies JWST sees are indeed massive. They’re much brighter than had been predicted by LCDM, and in line with the simplest evolutionary models one can build in MOND, so the latter is the more natural interpretation. However, it is much harder to predict how many galaxies form in MOND; it is straightforward to show that they should form fast but much harder to figure out how many do so – i.e., how many baryons get incorporated into collapsed objects, and how many get left behind, stranded in the intergalactic medium? Consequently, the luminosity density – the total number of stars, regardless of what size galaxies they’re in – did not seem like a straight-up test the way the masses of individual galaxies are.

It is not difficult to produce lots of stars at high redshift in LCDM. But those stars should be in many protogalactic fragments, not individually massive galaxies. As a reminder, here is the merger tree for a galaxy that becomes a bright elliptical at low redshift:

Merger tree from De Lucia & Blaizot 2007 showing the hierarchical build-up of massive galaxies from many protogalactic fragments.

At large lookback times, i.e., high redshift, galaxies are small protogalactic fragments that have not yet assembled into a large island universe. This happens much faster in MOND, so we expect that for many (not necessarily all) galaxies, this process is basically complete after a mere billion years or so, often less. In both theories, your mileage will vary: each galaxy will have its own unique formation history. Nevertheless, that’s the basic difference: big galaxies form quickly in MOND while they should still be little chunks at high z in LCDM.

The hierarchical formation of structure is a fundamental prediction of LCDM, so this is in principle a place it can break. That is why many people are following the usual script of blaming astrophysics, i.e., how stars form, not how mass assembles. The latter is fundamental while the former is fungible.

Gradual mass assembly is so fundamental that its failure would break LCDM. Indeed, it is so deeply embedded in the mental framework of people working on it that it doesn’t seem to occur to most of them to consider the possibility that it could work any other way. It simply has to work that way; we were taught so in grad school!

Here is a sketch of how structures grow over time under the influence of cold dark matter (left, from Schramm 1992) and MOND (right, from Sanders & McGaugh 2002; see also this further discussion). The slow linear growth of CDM (long-dashed line, left panel) is replaced by a rapid, nonlinear growth in MOND (solid lines at right; numbers correspond to different scales). Nonlinear growth moderates after cosmic expansion begins to accelerate (dashed vertical line in right panel).

A principal result in perturbation theory applied to density fluctuations in an expanding universe governed by General Relativity is that the growth rate of these proto-objects is proportional to the expansion rate of the universe – hence the linear long-dashed line in the left diagram. The baryons cannot match the observations by themselves because the universe has “only” expanded by a factor of a thousand since recombination while structure has grown by a factor of a hundred thousand. This was one of the primary motivations for inventing cold dark matter in the first place: it can grow at the theory-specified rate without obliterating the observed isotropy% of the microwave background. The skeletal structure of the cosmic web grows in cold dark matter first; the baryons fall in afterwards (short-dashed line in left panel).
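
To spell out that arithmetic (a sketch of the standard textbook scaling, with the ~10^-5 amplitude being the observed level of the temperature fluctuations at recombination):

```latex
\delta \propto a = \frac{1}{1+z}
\quad\Longrightarrow\quad
\frac{\delta_{\rm now}}{\delta_{\rm rec}} \simeq 1 + z_{\rm rec} \approx 1100 ,
\qquad\text{whereas}\qquad
\frac{\delta_{\rm now}}{\delta_{\rm rec}} \gtrsim \frac{1}{10^{-5}} = 10^{5}
\ \text{is required to reach } \delta \sim 1 .
```

Baryons growing linearly fall short by roughly a factor of a hundred; that is the gap cold dark matter was invented to fill.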

That’s how it works. Without dark matter, structure cannot form, so we needn’t consider MOND nor speak of it ever again forever and ever, amen.

Except, of course, that isn’t necessarily how structure formation works in MOND. Like every other inference of dark matter, the slow growth of perturbations assumes that gravity is normal. If we consider a different force law, then we have to revisit this basic result. Exactly how structure formation works in MOND is not a settled subject, but the panel at right illustrates how I think it might work. One seemingly unavoidable aspect is that MOND is nonlinear, so the growth rate becomes nonlinear at some point, which is rather early on if Milgrom’s constant a0 does not evolve. Rather than needing dark matter to achieve a growth factory of 105, the boost to the force law enables baryons do it on their own. That, in a nutshell, is why MOND predicts the early formation of big galaxies.

The same nonlinearity that makes structure grow fast in MOND also makes it very hard to predict the mass function. My nominal expectation is that the present-day galaxy baryonic mass function is established early and galaxies mostly evolve as closed boxes after that. Not exclusively; mergers still occasionally happen, as might continued gas accretion. In addition to the big galaxies that form their stars rapidly and eventually become giant elliptical galaxies, there will also be a population for which gas accretion is gradual^ enough to settle into a preferred plane and evolve into a spiral galaxy. But that is all gas physics and hand waving; for the mass function I simply don’t know how to extract a prediction from a nonlinear version of the Press-Schechter formalism. Somebody smarter than me should try that.

We do know how to do it for LCDM, at least for the dark matter halos, so there is a testable prediction there. The observable test depends on the messy astrophysics of forming stars and the shape of the mass function. The total luminosity density integrates over the shape, so is a rather forgiving test, as it doesn’t distinguish between stars in lots of tiny galaxies or the same number in a few big ones. Consequently, I hadn’t put much stock in it. But it is also a more robustly measured quantity, so perhaps it is more interesting than I gave it credit for, at least once we get to such high redshift that there should be hardly any stars.

Here is a plot of the ultraviolet (UV) luminosity density from Adams et al. (2023):

Fig. 8 from Adams et al. (2023) showing the integrated UV luminosity density as a function of redshift. UV light is produced by short-lived, massive stars, so makes a good proxy for the star formation rate (right axis).
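
The star formation rates on the right axis come from a conversion of UV luminosity; a commonly used calibration is that of Kennicutt (1998) for a Salpeter IMF and continuous star formation, sketched below for illustration (the figure’s authors may adopt a somewhat different calibration):

```python
def sfr_from_uv(L_nu_uv):
    """Star formation rate (M_sun/yr) from UV continuum luminosity (erg/s/Hz),
    using the Kennicutt (1998) calibration for a Salpeter IMF."""
    return 1.4e-28 * L_nu_uv

# A galaxy with L_nu ~ 1e29 erg/s/Hz in the UV is forming stars at ~14 M_sun/yr.
print(sfr_from_uv(1e29))
```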

The lower line is one+ a priori prediction of LCDM. I checked this back when JWST was launched, and saw no issues up to z=10, which remains true. However, the data now available at higher redshift are systematically higher than the prediction. The reason for this is simple, and the same as we’ve discussed before: dark matter halos are just beginning to get big; they don’t have enough baryons in them to make that many stars – at least not for the usual assumptions, or even just from extrapolating what we know quasi-empirically. (I say “quasi” because the extrapolation requires a theory-dependent rate of mass growth.)

The dashed line is what I consider to be a reasonable adjustment of the a priori prediction. Putting on an LCDM hat, it is actually closer to what I would have predicted myself because it has a constant star formation efficiency which is one of the knobs I prefer to fix empirically and then not touch. With that, everything is good up to z=10.5, maybe even to z=12 if we only believe* the data with uncertainties. But the bulk of the high redshift data sit well above the plausible expectation of LCDM, so grasping at the dangling ends of the biggest error bars seems unlikely to save us from a fall.

Ignoring the model lines, the data flatten out at z > 10, which is another way of saying that the UV luminosity function isn’t evolving when it should be. This redshift range does not correspond to much cosmic time, only a few hundred million years, so it makes the empiricist in me uncomfortable to invoke astrophysical causes. We have to imagine that the physical conditions change rapidly in the first sliver of cosmic time at just the right fine-tuned rate to make it look like there is no evolution at all, then settle down into a star formation efficiency that remains constant in perpetuity thereafter.

Harikane et al. (2023) also come to the conclusion that there is too much star formation going on at high redshift (their Fig. 18 is like that of Adams above, but extending all the way to z=0). Like many, they appear to be unaware that the early onset of structure formation had been predicted, so discuss three conventional astrophysical solutions as if these were the only possibilities. Translating from their section 6, the astrophysical options are:

  • Star formation was more efficient early on
  • Active Galactic Nuclei (AGN)
  • A top heavy IMF

This is a pretty broad view of the things that are being considered currently, though I’m sure people will add to this list as time goes forward and entropy increases.

Taking these in reverse order, the idea of a top heavy IMF is that preferentially more massive stars form early on. These produce more light per unit mass, so one gets brighter galaxies than predicted with a normal IMF. This is an idea that recurs every so often; see, e.g., section 3.1.1 of McGaugh (2004) where I discuss it in the related context of trying to get LCDM models to reionize the universe early enough. Supermassive Population III stars were all the rage back then. Changing the mass spectrum& with which stars form is one of those uber-free parameters that good modelers refrain from twiddling because it gives too much freedom. It is not a single knob so much as a Pandora’s box full of knobs that invoke a thousand Salpeter’s demons to do nearly anything at the price of understanding nothing.

As it happens, the option of a grossly variable IMF is already disfavored by the existence of quenched galaxies at z~3 that formed a normal stellar population at much higher redshift (z~11). These galaxies are composed of stars that have the spectral signatures appropriate for a population that formed with a normal IMF and evolved as stars do. This is exactly what we expect for galaxies that form early and evolve passively. Adjusting the IMF to explain the obvious makes a mockery of Occam’s razor.

AGN is a catchall term for objects like quasars that are powered by supermassive black holes at the centers of galaxies. This is a light source that is non-stellar, so we’ll overestimate the stellar mass if we mistake some light from AGN# as being from stars. In addition, we know that AGN were more prolific in the early universe. That in itself is also a problem: just as forming galaxies early is hard, so too is it hard to form enough supermassive black holes that early. So this just becomes the same problem in a different guise. Besides, the resolution of JWST is good enough to see where the light is coming from, and it ain’t all from unresolved AGN. Harikane et al. estimate that the AGN contribution is only ~10%.

That leaves the star formation efficiency, which is certainly another knob to twiddle. On the one hand, this is a reasonable thing to do, since we don’t really know what the star formation efficiency in the early universe was. On the other, we expected the opposite: star formation should, if anything, be less efficient at high redshift when the metallicity was low so there were few ways for gas to cool, which is widely considered to be a prerequisite for initiating star formation. Indeed, inefficient cooling was an argument in favor of a top-heavy IMF (perhaps stars need to be more massive to overcome higher temperatures in the gas from which they form), so these two possibilities contradict one another: we can have one but not both.

To me, the star formation efficiency is the most obvious knob to twiddle, but it has to be rather fine-tuned. There isn’t much cosmic time over which the variation must occur, and yet it has to change rapidly and in such a way as to precisely balance the non-evolving UV luminosity function against a rapidly evolving dark matter halo mass function. Once again, we’re in the position of having to invoke astrophysics that we don’t understand to make up for a manifest deficit in the behavior of dark matter. Funny how those messy baryons always cover up for that clean, pure, simple dark matter.

I could go on about these possibilities at great length (and did in the 2004 paper cited above). I decline to do so any further: we keep digging this hole just to fill it again. These ideas only seem reasonable as knobs to turn if one doesn’t see any other way out, which is what happens if one has absolute faith in structure formation theory and is blissfully unaware of the predictions of MOND. So I can already see the community tromping down the familiar path of persuading ourselves that the unreasonable is reasonable, that what was not predicted is what we should have expected all along, that everything is fine with cosmology when it is anything but. We’ve done it so many times before.


Initially I had the cat-stuffed-back-in-the-bag image here, but that was really for a theoretical paper that I didn’t quite make it to in this post. You’ll see it again soon. The observations discussed here are by observers doing their best in the context they know, so that image doesn’t seem appropriate here.


%We were convinced of the need for non-baryonic dark matter before any fluctuations in the microwave background were detected; their absence at the level of one part in a thousand sufficed.

^The assembly of baryonic mass can and in most cases should be rapid. It is the settling of gas into a rotationally supported structure that takes time – this is influenced by gas physics, not just gravity. Regardless of gravity theory, gas needs to settle gently into a rotating disk in order for spiral galaxies to exist.

+There are other predictions that differ in detail, but this is a reasonable representative of the basic expectation.

*This is not necessarily unreasonable, as there is some proclivity to underestimate the uncertainties. That’s a general statement about the field; I have made no attempt to assess how reasonable these particular error bars are.

&Top-heavy refers to there being more than the usual complement of bright but short-lived (tens of millions of years) stars. These stars are individually high mass (bigger than the sun), while long-lived stars are low mass. Though individually low in mass, these faint stars are very numerous. When one integrates over the population, one finds that most of the total stellar mass resides in the faint, low mass stars while much of the light is produced by the high mass stars. So a top heavy IMF explains high redshift galaxies by making them out of the brightest stars that require little mass to build. However, these stars will explode and go away on a short time scale, leaving little behind. If we don’t outright truncate the mass function (so many knobs here!), there could be some longer-lived stars leftover, but they must be few enough for the whole galaxy to fade to invisibility or we haven’t gained anything. So it is surprising, from this perspective, to see massive galaxies that appear to have evolved normally without any of these knobs getting twiddled.

#Excess AGN were one possibility Jay Franck considered in his thesis as the explanation for what we then considered to be hyperluminous galaxies, but the known luminosity function of AGN up to z = 4 couldn’t explain the entire excess. With the clarity of hindsight, we were just seeing the same sorts of bright, early galaxies that JWST has brought into sharper focus.

Clusters of galaxies ruin everything

Clusters of galaxies ruin everything

A common refrain I hear is that MOND works well in galaxies, but not in clusters of galaxies. The oft-unspoken but absolutely intended implication is that we can therefore dismiss MOND and never speak of it again. That’s silly.

Even if MOND is wrong, that it works as well as it does is surely telling us something. I would like to know why that is. Perhaps it has something to do with the nature of dark matter, but we need to engage with it to make sense of it. We will never make progress if we ignore it.

Like the seventeenth century cleric Paul Gerhardt, I’m a stickler for intellectual honesty:

“When a man lies, he murders some part of the world.”

Paul Gerhardt

I would extend this to ignoring facts. One should not only be truthful, but also as complete as possible. It does not suffice to be truthful about things that support a particular position while eliding unpleasant or unpopular facts* that point in another direction. By ignoring the successes of MOND, we murder a part of the world.

Clusters of galaxies are problematic in different ways for different paradigms. Here I’ll recap three ways in which they point in different directions.

1. Cluster baryon fractions

An unpleasant fact for MOND is that it does not suffice to explain the mass discrepancy in clusters of galaxies. When we apply Milgrom’s formula to galaxies, it explains the discrepancy that is conventionally attributed to dark matter. When we apply MOND to clusters, it comes up short. This has been known for a long time; here is a figure from the review Sanders & McGaugh (2002):

Figure 10 from Sanders & McGaugh (2002): (Left) the Newtonian dynamical mass of clusters of galaxies within an observed cutoff radius (rout) vs. the total observable mass in 93 X-ray-emitting clusters of galaxies (White et al. 1997). The solid line corresponds to Mdyn = Mobs (no discrepancy). (Right) the MOND dynamical mass within rout vs. the total observable mass for the same X-ray-emitting clusters. From Sanders (1999).

The Newtonian dynamical mass exceeds what is seen in baryons (left). There is a missing mass problem in clusters. The inference is that the difference is made up by dark matter – presumably the same non-baryonic cold dark matter that we need in cosmology.

When we apply MOND, the data do not fall on the line of equality as they should (right panel). There is still excess mass. MOND suffers a missing baryon problem in clusters.
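
To see roughly where the right-hand panel comes from, here is a minimal toy sketch (my illustration, not the analysis of Sanders & McGaugh or White et al.): given an observed acceleration at the outermost measured radius, compute the Newtonian dynamical mass and the mass MOND requires. The value adopted for a0, the choice of the “simple” interpolation function, and the example numbers are all placeholder assumptions.

```python
# Toy illustration: Newtonian vs MOND dynamical mass for a cluster, given an
# observed acceleration g at the outermost measured radius r_out.
# All numbers are made-up placeholders representative of a rich cluster.

G  = 4.301e-6   # Newton's constant in kpc (km/s)^2 / Msun
a0 = 3.7e3      # Milgrom's constant (~1.2e-10 m/s^2) in (km/s)^2 / kpc

def newtonian_mass(g, r_kpc):
    """Dynamical mass if Newton holds: M = g r^2 / G."""
    return g * r_kpc**2 / G

def mond_mass(g, r_kpc):
    """Mass MOND requires for the same observed acceleration,
    using g_N = mu(g/a0) * g with the 'simple' interpolation function."""
    mu = (g / a0) / (1.0 + g / a0)
    return mu * g * r_kpc**2 / G

r_out = 1000.0      # kpc; outermost measured radius (placeholder)
g_obs = 2.0 * a0    # clusters sit at a few times a0, not deep in the MOND regime

print(f"Newtonian dynamical mass: {newtonian_mass(g_obs, r_out):.1e} Msun")
print(f"MOND dynamical mass:      {mond_mass(g_obs, r_out):.1e} Msun")
print(f"MOND/Newtonian ratio:     {mond_mass(g_obs, r_out) / newtonian_mass(g_obs, r_out):.2f}")
```

Because clusters only reach accelerations of a few times a0, the MOND correction is modest – a factor of order unity rather than the order-of-magnitude corrections seen in low surface brightness galaxies – which is why the right panel still shows a shortfall relative to the observed baryons.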

The common line of reasoning is that MOND still needs dark matter in clusters, so why consider it further? The whole point of MOND is to do away with the need of dark matter, so it is terrible if we need both! Why not just have dark matter?

This attitude was reinforced by the discovery of the Bullet Cluster. You can “see” the dark matter.

An artistic rendition of data for the Bullet Cluster. Pink represents hot X-ray emitting gas, blue the mass concentration inferred through gravitational lensing, and the optical image shows many galaxies. There are two clumps of galaxies that collided and passed through one another, getting ahead of the gas which shocked on impact and lags behind as a result. The gas of the smaller “bullet” subcluster shows a distinctive shock wave.

Of course, we can’t really see the dark matter. What we see is that the mass required by gravitational lensing observations exceeds what we see in normal matter: this is the same discrepancy that Zwicky first noticed in the 1930s. The important thing about the Bullet Cluster is that the mass is associated with the location of the galaxies, not with the gas.

The baryons that we know about in clusters are mostly in the gas, which outweighs the stars by roughly an order of magnitude. So we might expect, in a modified gravity theory like MOND, that the lensing signal would peak up on the gas, not the stars. That would be true, if the gas we see were indeed the majority of the baryons. We already knew from the first plot above that this is not the case.

I use the term missing baryons above intentionally. If one already believes in dark matter, then it is perfectly reasonable to infer that the unseen mass in clusters is the non-baryonic cold dark matter. But there is nothing about the data for clusters that requires this. There is also no reason to expect every baryon to be detected. So the unseen mass in clusters could just be ordinary matter that does not happen to be in a form we can readily detect.

I do not like the missing baryon hypothesis for clusters in MOND. I struggle to imagine how we could hide the required amount of baryonic mass, which is comparable to or exceeds the gas mass. But we know from the first figure that such a component is indicated. Indeed, the Bullet Cluster falls at the top end of the plots above, being one of the most massive objects known. From that perspective, it is perfectly ordinary: it shows the same discrepancy every other cluster shows. So the discovery of the Bullet was neither here nor there to me; it was just another example of the same problem. Indeed, it would have been weird if it hadn’t shown the same discrepancy that every other cluster showed. That it does so in a nifty visual is, well, nifty, but so what? I’m more concerned that the entire population of clusters shows a discrepancy than that this one nifty case does so.

The one new thing that the Bullet Cluster did teach us is that whatever the missing mass is, it is collisionless. The gas shocked when it collided, and lags behind the galaxies. Whatever the unseen mass is, it passed through unscathed, just like the galaxies. Anything with mass separated by lots of space will do that: stars, galaxies, cold dark matter particles, hard-to-see baryonic objects like brown dwarfs or black holes, or even massive [potentially sterile] neutrinos. All of those are logical possibilities, though none of them make a heck of a lot of sense.

As much as I dislike the possibility of unseen baryons, it is important to keep the history of the subject in mind. When Zwicky discovered the need for dark matter in clusters, the discrepancy was huge: a factor of a thousand. Some of that was due to having the distance scale wrong, but most of it was due to seeing only stars. It wasn’t until 40 some years later that we started to recognize that there was intracluster gas, and that it outweighed the stars. So for a long time, the mass ratio of dark to luminous mass was around 70:1 (using a modern distance scale), and we didn’t worry much about the absurd size of this number; mostly we just cited it as evidence that there had to be something massive and non-baryonic out there.

Really there were two missing mass problems in clusters: a baryonic missing mass problem, and a dynamical missing mass problem. Most of the baryons turned out to be in the form of intracluster gas, not stars. So the 70:1 ratio changed to 7:1. That’s a big change! It brings the ratio down from a silly number to something that is temptingly close to the universal baryon fraction of cosmology. Consequently, it becomes reasonable to believe that clusters are fair samples of the universe. All the baryons have been detected, and the remaining discrepancy is entirely due to non-baryonic cold dark matter.

That’s a relatively recent realization. For decades, we didn’t recognize that most of the normal matter in clusters was in an as-yet unseen form. There had been two distinct missing mass problems. Could it happen again? Have we really detected all the baryons, or are there still more lurking there to be discovered? I think it unlikely, but fifty years ago I would also have thought it unlikely that there would have been more mass in intracluster gas than in stars in galaxies. I was ten years old then, but it is clear from the literature that no one else was seriously worried about this at the time. Heck, when I first read Milgrom’s original paper on clusters, I thought he was engaging in wishful thinking to invoke the X-ray gas as possibly containing a lot of the mass. Turns out he was right; it just isn’t quite enough.

All that said, I nevertheless think the residual missing baryon problem MOND suffers in clusters is a serious one. I do not see a reasonable solution. Unfortunately, as I’ve discussed before, LCDM suffers an analogous missing baryon problem in galaxies, so pick your poison.

It is reasonable to imagine in LCDM that some of the missing baryons on galaxy scales are present in the form of warm/hot circum-galactic gas. We’ve been looking for that for a while, and have had some success – at least for bright galaxies where the discrepancy is modest. But the problem gets progressively worse for lower mass galaxies, so it is a bold presumption that the check-sum will work out. There is no indication (beyond faith) that it will, and the fact that it gets progressively worse for lower masses is a direct consequence of the data for galaxies looking like MOND rather than LCDM.

Consequently, both paradigms suffer a residual missing baryon problem. One is seen as fatal while the other is barely seen.

2. Cluster collision speeds

A novel thing the Bullet Cluster provides is a way to estimate the speed at which its subclusters collided. You can see the shock front in the X-ray gas in the picture above. The morphology of this feature is sensitive to the speed and other details of the collision. In order to reproduce it, the two subclusters had to collide head-on, in the plane of the sky (practically all the motion is transverse), and fast. I mean, really fast: nominally 4700 km/s. That is more than the virial speed of either cluster, and more than you would expect from dropping one object onto the other. How likely is this to happen?

There is now an enormous literature on this subject, which I won’t attempt to review. It was recognized early on that the high apparent collision speed was unlikely in LCDM. The chances of observing the Bullet Cluster even once in an LCDM universe range from merely unlikely (~10%) to completely absurd (< 3 × 10⁻⁹). Answers this varied follow from what aspects of both observation and theory are considered, and the annoying fact that the distribution of collision speed probabilities plummets like a stone so that slightly different estimates of the “true” collision speed make a big difference to the inferred probability. What the “true” gravitationally induced collision speed is remains somewhat uncertain because the hydrodynamics of the gas plays a role in shaping the shock morphology. There is a long debate about this which bores me; it boils down to it being easy to explain a few hundred extra km/s but hard to get up to the extra 1000 km/s that is needed.

At its simplest, we can imagine the two subclusters forming in the early universe, initially expanding apart along with the Hubble flow like everything else. At some point, their mutual attraction overcomes the expansion, and the two start to fall together. How fast can they get going in the time allotted?

The Bullet Cluster is one of the most massive systems in the universe, so there is lots of dark mass to accelerate the subclusters towards each other. The object is less massive in MOND, even spotting it some unseen baryons, but the long-range force is stronger. Which effect wins?
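
To get a rough feel for the competition, here is a toy sketch (mine, not any published calculation) comparing the mutual acceleration of the two subclusters at large separation: Newtonian gravity acting on an LCDM-style total mass versus MOND acting on a baryons-only mass roughly ten times smaller. The masses, separations, a0, the point-mass treatment, and the “simple” interpolation function are all illustrative assumptions.

```python
# Toy comparison of the long-range pull in the two pictures. Point masses
# and all numbers below are illustrative placeholders.
import numpy as np

G  = 4.301e-6   # kpc (km/s)^2 / Msun
a0 = 3.7e3      # Milgrom's a0 (~1.2e-10 m/s^2) in (km/s)^2 / kpc

def nu(y):
    """'Simple' interpolation function: g = nu(g_N / a0) * g_N."""
    return 0.5 + np.sqrt(0.25 + 1.0 / y)

M_lcdm = 1.5e15   # placeholder total (mostly dark) mass in Msun
M_bary = 1.5e14   # placeholder baryons-only mass in Msun

for d_mpc in (3.0, 5.0, 10.0):
    d = d_mpc * 1000.0                       # separation in kpc
    g_newton = G * M_lcdm / d**2             # Newton acting on mass + dark matter
    g_n_bary = G * M_bary / d**2             # Newtonian pull of the baryons alone
    g_mond   = nu(g_n_bary / a0) * g_n_bary  # MOND boost applied to the baryons
    print(f"d = {d_mpc:4.1f} Mpc:  Newton+DM = {g_newton:6.1f}   "
          f"MOND+baryons = {g_mond:6.1f}   [(km/s)^2/kpc]")
```

With these placeholder numbers the MOND pull overtakes the Newtonian-plus-dark-matter pull beyond a few Mpc, which is where an infalling pair spends most of its time. Turning that into an actual collision speed requires following extended mass distributions from cosmological initial conditions, which is what the calculation described next does.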

Gary Angus wrote a code to address this simple question both conventionally and in MOND. Turns out, the longer range force wins this race. MOND is good at making things go fast. While the collision speed of the Bullet Cluster is problematic for LCDM, it is rather natural in MOND. Here is a comparison:

A reasonable answer falls out of MOND with no fuss and no muss. There is room for some hydrodynamical+ high jinks, but it isn’t needed, and the amount that is reasonable makes an already reasonable result more reasonable, boosting the collision speed from the edge of the observed band to pretty much smack in the middle. This is the sort of thing that keeps me puzzled: much as I’d like to go with the flow and just accept that it has to be dark matter that’s correct, it seems like every time there is a big surprise in LCDM, MOND just does it. Why? This must be telling us something.

3. Cluster formation times

Structure is predicted to form earlier in MOND than in LCDM. This is true for both galaxies and clusters of galaxies. In his thesis, Jay Franck found lots of candidate clusters at redshifts higher than expected. Even groups of clusters:

Figure 7 from Franck & McGaugh (2016). A group of four protocluster candidates at z = 3.5 that are proximate in space. The left panel is the sky association of the candidates, while the right panel shows their galaxy distribution along the LOS. The ellipses/boxes show the search volume boundaries (Rsearch = 20 cMpc, Δz ± 20 cMpc). Three of these (CCPC-z34-005, CCPC-z34-006, CCPC-z35-003) exist in a chain along the LOS stretching ≤120 cMpc. This may become a supercluster-sized structure at z = 0.

The cluster candidates at high redshift that Jay found are more common in the real universe than seen with mock observations made using the same techniques within the Millennium simulation. Their velocity dispersions are also larger than those of comparable simulated objects. This implies that the amount of mass that has assembled is larger than expected at that time in LCDM, or that speeds are boosted by something like MOND, or that nothing has settled into anything like equilibrium yet. The last option seems most likely to me, but that doesn’t reconcile matters with LCDM, as we don’t see the same effect in the simulation.

MOND also predicts the early emergence of the cosmic web, which would explain the early appearance of very extended structures like the “big ring.” While some of these very large scale structures are probably not real, too many such things are being noted for all of them to be an illusion. The knee-jerk denials of all such structures remind me of the shock cosmologists expressed at seeing quasars at redshifts as high as 4 (even 4.9! how can it be so?), or clusters at redshift 2, or the original CfA stickman, which surprised the bejeepers out of everybody in 1987. So many times I’ve been told that a thing can’t be true because it violates theoreticians’ preconceptions, only for it to prove to be true and, ultimately, to be recast as something the theorists expected all along.

Well, which is it?

So, as the title says, clusters ruin everything. The residual missing baryon problem that MOND suffers in clusters is both pernicious and persistent. It isn’t the outright falsification that many people presume it to be, but it sure don’t sit right. On the other hand, both the collision speeds of clusters (there are more examples now than just the Bullet Cluster) and the early appearance of clusters at high redshift are considerably more natural in MOND than in LCDM. So the data for clusters cuts both ways. Taking the most obvious interpretation of the Bullet Cluster data, this one object falsifies both LCDM and MOND.

As always, the conclusion one draws depends on how one weighs the different lines of evidence. This is always an invitation to the bane of cognitive dissonance, accepting that which supports our pre-existing world view and rejecting the validity of evidence that calls it into question. That’s why we have the scientific method. It was application of the scientific method that caused me to change my mind: maybe I was wrong to be so sure of the existence of cold dark matter? Maybe I’m wrong now to take MOND seriously? That’s why I’ve set criteria by which I would change my mind. What are yours?


*In the discussion associated with a debate held at KITP in 2018, one particle physicist said “We should just stop talking about rotation curves.” Straight-up said it out loud! No notes, no irony, no recognition that the dark matter paradigm faces problems beyond rotation curves.

+There are now multiple examples of colliding cluster systems known. They’re a mess (Abell 520 is also called “the train wreck cluster“), so I won’t attempt to describe them all. In Angus & McGaugh (2008) we did note that MOND predicted that high collision speeds would be more frequent than in LCDM, and I have seen nothing to make me doubt that. Indeed, Xavier Hernandez pointed out to me that supersonic shocks like that of the Bullet Cluster are often observed, but basically never occur in cosmological simulations.

Quantifying the excess masses of high redshift galaxies

Quantifying the excess masses of high redshift galaxies

As predicted, JWST has been seeing big galaxies at high redshift. There are now many papers on the subject, ranging in tone from “this is a huge problem for LCDM” to “this is not a problem for LCDM at all” – a dichotomy that persists. So – which is it?

It will take some time to sort out. There are several important aspects to the problem, one of which is agreeing on what LCDM actually predicts. It is fairly robust at predicting the number density of dark matter halos as a function of mass. To convert that into something observable requires understanding how baryons find their way into dark matter halos at early times, how those baryons condense into regions dense enough to form stars, what kinds of stars form there (thus determining observables like luminosity and spectral shape), and what happens in the immediate aftermath of early star formation (does feedback shut off star formation quickly or does it persist or is there some distribution over all possibilities). This is what simulators attempt to do. It is hard work, and they are a long way from agreeing with each other. Many of them appear to be a long way from agreeing with themselves, as their answers continue to evolve – sometimes because of genuine progress in the simulations, but sometimes in response to unanticipated* observations.

Observationally, we can hope to measure at least two distinct things: the masses of individual galaxies, and their number density – how many galaxies of a particular mass exist in a specified volume. I have mostly been worried about the first issue, as it appears that individual galaxies got too big too fast. In the hierarchical galaxy formation picture of LCDM, the massive galaxies of today were assembled from many smaller protogalaxies over an extended period of time, so big galaxies don’t emerge until comparatively late: it takes about seven billion years for a typical bright galaxy to assemble half its stellar mass. (The same hierarchical process is accelerated in MOND so galaxies can already be massive at z ≈ 10.) That there are examples of individual galaxies that are already massive in the early universe is a big issue.

How common should massive galaxies be? There are always early adopters: objects that grew faster than average for their mass. We’ll always see the brightest things first, so is what we’re seeing with JWST typical? Or is it just the bright tip of an iceberg that is perfectly reasonable in LCDM? This is what the luminosity function helps quantify: just how many galaxies of each mass are there? If we can quantify that, then we can quantify how many we should be able to see with a given survey of specified depth and sky coverage.
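
To make that bookkeeping concrete, here is a minimal sketch using the standard Schechter parameterization of the luminosity function; the parameter values, detection limit, and survey volume below are made-up placeholders, not numbers taken from any survey discussed here.

```python
# Sketch: integrate a Schechter luminosity function above a detection limit
# and multiply by a survey volume to estimate how many galaxies should be
# seen. All parameter values are illustrative placeholders.
import numpy as np
from scipy.integrate import quad

def schechter(L, phi_star, L_star, alpha):
    """Schechter form: dN/(dV dL) = (phi*/L*) (L/L*)^alpha exp(-L/L*)."""
    x = L / L_star
    return (phi_star / L_star) * x**alpha * np.exp(-x)

phi_star = 1e-4   # normalization in Mpc^-3 (placeholder)
L_star   = 1.0    # characteristic luminosity; we work in units of L*
alpha    = -1.8   # faint-end slope (placeholder)
L_min    = 0.5    # detection limit: galaxies brighter than 0.5 L*
V_survey = 1e6    # comoving volume probed in the redshift bin, Mpc^3

n_per_Mpc3, _ = quad(schechter, L_min, np.inf, args=(phi_star, L_star, alpha))
print(f"Number density above the limit: {n_per_Mpc3:.2e} per Mpc^3")
print(f"Expected detections in the survey volume: {n_per_Mpc3 * V_survey:.0f}")
```

Change the assumed normalization, characteristic luminosity, or faint-end slope and the expected counts change dramatically, which is why pinning down the luminosity function matters so much.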

Astronomers have been measuring the galaxy luminosity function for a long time. Doing so at high redshift has always been an ambition, so JWST is hardly the first telescope to contribute to the subject. It is the newest and best, opening a regime where we had hoped to see protogalactic fragments directly. Instead, the first thing we see are galaxies bigger than we expected (in LCDM). This has been building for some time, so let’s take a step back to provide some context.

Steinhardt et al. (2016) pointed out what they call “the impossibly early galaxy problem.” They quantified this by comparing the observed luminosity function in various redshift bins to that predicted by LCDM. We’ve discussed their Fig. 1 before, so let’s look now at their Fig. 4:

Figure 4 from Steinhardt et al. (2016). Colors correspond to redshift, with z = 4, 5, 6, 7, 8, 9, 10 being represented by blue, green, yellow, orange, red, pink, and black: there are fewer objects at high redshift where they’ve had less time to form. (a) Expected halo mass to monochromatic UV luminosity ratio, along with the required evolution to reconcile observation with theory, and (b) resulting corrected halo-mass functions derived as in Figure 1 with Mhalo/LUV evolving due to a stellar population starting at low metallicity at z = 12 and aging along the star-forming main sequence, as described in Section 4.1.1. Such a model would be reasonable given observational constraints, but cannot produce agreement between measured UV luminosity functions and simulated halo-mass functions.

In a perfect model, the points (data) would match the lines (theory) of the same color (redshift). This is not the case – observed galaxies are persistently brighter than predicted. Making that prediction is subject to all the conversions from dark matter mass to stellar mass to observed luminosity we mentioned above, so they also show what they expect and what it would take to match the data. These are the different lines in the top panel. There is a lot of discussion of this in their paper that boils down to these lines are different, and we cannot plausibly make them the same.

The word “plausibly” is doing a lot of work in that last sentence. Just because one set of authors finds something to be impossible (despite their best efforts) doesn’t mean anyone else accepts that. We usually don’t, even when we should**.

It occurs to me that not every reader may appreciate how redshift corresponds to cosmic time. So here is a graph for vanilla LCDM parameters:

The age-redshift relation for the vanilla LCDM cosmology. Everything at z > 3 is in the early universe, i.e., the first two billion years after the Big Bang. Everything at z > 10 is in the very early universe, the first half billion years when there has not yet been time to form big galaxies hierarchically.

Things don’t change much if we adopt slightly different cosmologies: this aspect of LCDM is well established. We used to think it would take at least a couple of billion years to form a big galaxy, so anything at z > 3 is surprising from that perspective. That’s not wrong, as there is an inverse relation between age and redshift, with increasing redshifts crammed into an ever smaller window of time. So while z = 5 and 10 sound very different, there is only about 700 Myr between them. That sounds like a long time to you and me, but the sun will only complete 3 orbits around the Galaxy in that time. This is why it is hard to imagine an object as large as the Milky Way starting from the near-homogeneity of the very early universe then having time to expand, decouple, recollapse, and form into something coherent so “quickly.” There is a much larger distance for material to travel than the current circumference of the solar circle, and not much time in which to do it. If we want to get it done by z = 10, there is less than 500 Myr available – about two orbits of the sun. We just can’t get there fast enough.
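
For anyone who wants to reproduce these numbers, here is a minimal sketch using astropy, with placeholder vanilla parameters (H0 = 70 km/s/Mpc, Ωm = 0.3) and a round 230 Myr for the sun’s Galactic orbital period:

```python
# Cosmic time at a few redshifts for a vanilla flat LCDM cosmology.
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)   # placeholder "vanilla" parameters

for z in (3, 5, 10, 16):
    age_myr = cosmo.age(z).to('Myr').value
    print(f"z = {z:2d}: age of the universe ~ {age_myr:6.0f} Myr")

# Interval between z = 10 and z = 5, in units of the sun's ~230 Myr orbit.
dt = (cosmo.age(5) - cosmo.age(10)).to('Myr').value
print(f"From z = 10 to z = 5: ~{dt:.0f} Myr, about {dt / 230:.1f} solar orbits")
```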

We’ve quickly become jaded to the absurdly high redshifts revealed by JWST, but there’s not much difference in cosmic time between these seemingly ever higher redshifts. Very early epochs were already being probed before JWST; JWST just brings them into excruciating focus. To provide some historical perspective about what “high redshift” means, here is a quote from Schramm (1992). The full text is behind a paywall, so I’ll just quote a relevant paragraph:

Pushing the opposite direction from the “zone of mystery” epoch [the dark ages] between the background radiation and the existence of objects at high redshift is the discovery of objects at higher and higher redshift. The higher the redshift of objects found, the harder it is to have the slow growth of Figure 5 [SCDM] explain their existence. Some high redshift objects can be dismissed as statistical fluctuations if the bulk of objects still formed late. In the last year, the number of quasars with redshifts > 4 has gone to 30, with one having a redshift as large as 4.9… While such constraints are not yet a serious problem for linear growth models, eventually they might be.

David Schramm, 1992

Here we have a cosmologist already concerned 30 years ago that objects exist at z > 4. Crazy, that! Back then, the standard model was SCDM; one of the reasons to switch to LCDM was to address exactly this problem. That only buys us a couple of billion years, so now we’re smack up against the same problem all over again, just shifted to higher redshift. Some people are even invoking statistical fluctuations: same as it ever was.

Consequently, a critical question is how common these massive galaxies are. Sure, massive galaxies exist before we expected them. But are they just statistical fluctuations? This is a question we can address with the luminosity function.

Here is the situation just before JWST was launched. Yung et al. (2019) made a good faith effort to establish a prior: they made predictions for what JWST would see. This is how science is supposed to work. In the figure below, I compare that to what was known (Stefanon et al. 2021) from the Spitzer Space Telescope, in many ways the predecessor to JWST:

Figure 4 from McGaugh (2024). The number density Φ of galaxies as a function of their stellar mass 𝑀∗, color coded by redshift with 𝑧=6, 7, 8, 9, 10 in dark blue, light blue, green, orange, and red, respectively. The left panel shows predicted stellar mass functions [lines] with the corresponding data [circles]. The right panel shows the ratio of the observed-to-predicted density of galaxies. There is a clear excess of massive galaxies at high redshifts.

If you just look at the mass functions in the left panel, things look pretty good. This is one of the dangers of the logarithmic plots necessary to illustrate the large dynamic range of astronomical data: large differences may look small in log-log space. So I also plot the ratio of densities at right. There one can see a clear excess in the number density of high mass galaxies. There are nearly an order of magnitude more 10¹⁰ M☉ galaxies than expected at z ≈ 8!

For technical reasons I don’t care to delve into, it is difficult to get the volume estimate right when constructing the luminosity function. So I can imagine there might be some systematic effects to scale the ratio up or down. That wouldn’t do anything to explain the bump at high masses, and it is rather harder to get the shape wrong, especially at the bright end. The faint end of the luminosity function is the hard part!

The Spitzer data already probes the early universe, before JWST reported results. As those have come in, it has started to be possible to construct luminosity functions at very high redshift. Here are some measurements from Harikane et al. (2023), Finkelstein et al. (2023), and Robertson et al. (2023) together with revised predictions from Yung et al. (2024).

Figure 5 from McGaugh (2024). The number density of galaxies as a function of their rest-frame ultraviolet absolute magnitude observed by JWST, a proxy for stellar mass at high redshift. The left panel shows predicted luminosity functions [lines], color coded by redshift: blue, green, orange, red for 𝑧=9, 11, 12, 14, respectively. Data in the corresponding redshift bins are shown as squares, circles, and triangles. The right panel shows the ratio of the observed-to-predicted density of galaxies. The observed luminosity function barely evolves, in contrast to the prediction of substantial evolution as the first dark matter halos assemble. There is a large excess of bright galaxies at the highest redshifts observed.

Again, we see that there is an excess of bright galaxies at the highest redshifts.

As we look to progressively higher redshift, the light we observe shifts from familiar optical bands to the ultraviolet. This was a huge part of the motivation to build JWST: it is optimized for the infrared, so we can observe the redshifted optical light as our eyes would see it. Astronomers always push to the edge of what a telescope can do, so we start to run into this problem again at the highest redshifts. The mapping of ultraviolet light to stellar mass is one of the harder tasks in stellar population work, much less mapping that to a dark matter halo mass. So one promising conventional idea is “the up-scattering in UV luminosity of small, abundant halos due to stochastic, high efficiency star formation during the initial phases of galaxy formation (unregulated star formation)” discussed$ by Finkelstein et al. (2023). I like this because, yeah, we expect lots of little halos, star formation is messy and star formation during the first phases of galaxy formation should be especially messy, so it is easy to imagine little halos stochastically lighting up in the UV. But can this be enough?

It remains to be seen if the observations can be explained by this or any of the usual tweaks to star formation. It seems like a big gap to overcome. I mean, just look at the left panel of the final figure above. The observed UV luminosity function is barely evolving while the prediction of LCDM is dropping like a rock. Indeed, the mass functions get jagged, which may be an indication that there are so few dark matter halos in the simulation volume at the redshift in question that they do not suffice to define a smooth mass function. Indeed, Harikane et al. estimate a luminosity density of ∼7 × 10⁻⁶ mag⁻¹ Mpc⁻³ at 𝑧≈16. This point is omitted from the figure above because the corresponding prediction is NAN (not a number): there just isn’t anything big enough in the simulation to be so bright that early.

There is good reason to be skeptical of the data at 𝑧≈16. There is also good reason to be skeptical of the simulations. These have yet to converge, and even the predictions of the same group continue to evolve. Yung et al. (2019) did the right thing to establish a prior before JWST’s launch, but they haven’t stuck by it. The density of rare, massive galaxies has gone up by a factor of 2 to 2.5 in Yung et al. (2024). They attribute this to the use of higher resolution simulations, which may very well be correct: in order to track the formation of the earliest structures, you have to resolve them. But it doesn’t exactly inspire confidence that we actually know what LCDM predicts, and it feels like the same sort of moving of the goalposts that I’ve witnessed over and over and over and over and over again.

It always seems to come down to special pleading:

Please don’t falsify LCDM! I ran out of computer time. I had a disk crash. I didn’t have a grant for supercomputer time. My simulation data didn’t come back from the processing center. A senior colleague insisted on a rewrite. Someone stole my laptop. There was an earthquake, a terrible flood, locusts! It wasn’t my fault! I swear to God!

And the community loves LCDM, so we fall for it every time.

Oh, LCDM. LCDM, honey.

*There is always a danger in turning knobs to fit the data, and there are plenty of knobs to turn. So what LCDM predicts is a very serious matter – a theory is only as good as its prior, and we should be skeptical if theorists keep adjusting what that is in response to observations they failed to predict. This is true even in the absence of the existential threat of MOND which implies that the entire field of cosmological simulations is betrayed by its most fundamental assumptions, reducing it to “garbage in, garbage out.”

**When I first found that MOND had predicted our observations of low surface brightness galaxies where dark matter had not, despite my best efforts to make it work out, Ortwin Gerhard asked me if he “had to believe it.” My instant reaction was “this is astronomy, we don’t have to believe anything.” More seriously, this question applies on many levels: do we believe the data? do we believe the interpretation? is this the only possible conclusion? At the time, I had already tried very hard to fix it, and had failed. Still, I was willing to imagine there might be some way out, and maybe someone could figure out something I had not. Since that time, lots of other people have tried and also failed. This has not kept some of them from claiming that they have succeeded, but they never seem to address the underlying problem, and most of these models are mere variations on things I tried and dismissed as obviously unworkable.

Now, as then, what we are obliged to believe is the data, to the limits of their accuracy. The data have improved substantially, and at this point it is clear that the radial acceleration relation exists+ and has remarkably small intrinsic scatter. What we can always argue about is the interpretation: sure, it looks exactly like MOND, and MOND was the only theory that predicted it in advance, and we haven’t been able to come up with a reasonable explanation in terms of dark matter, but perhaps one can be found in some dark matter model that does not yet exist.

+Of course, there will always be some people behind the times and in a state of denial, as this subject seems to defeat rationalism in the hearts and minds of particle physicists in the same way Darwin still enrages some of the more religiously inclined.

$I directly quote Finkelstein’s coauthor Mauro Giavalisco from an email exchange.

Discussion of Dark Matter and Modified Gravity

To start the new year, I provide a link to a discussion I had with Simon White on Phil Halper’s YouTube channel:

In this post I’ll say little that we don’t talk about, but will add some background and mildly amusing anecdotes. I’ll also try addressing the one point of factual disagreement. For the most part, Simon & I entirely agree about the relevant facts; what we’re discussing is the interpretation of those facts. It was a perfectly civil conversation, and I hope it can provide an example for how it is possible to have a positive discussion about a controversial topic+ without personal animus.

First, I’ll comment on the title, in particular the “vs.” This is not really Simon vs. me. This is a discussion between two scientists who are trying to understand how the universe works (no small ask!). We’ve been asked to advocate for different viewpoints, so one might call it “Dark Matter vs. MOND.” I expect Simon and I could swap sides and have an equally interesting discussion. One needs to be able to do that in order to not simply be a partisan hack. It’s not like MOND is my theory – I falsified my own hypothesis long ago, and got dragged reluctantly into this business for honestly reporting that Milgrom got right what I got wrong.

For those who don’t know, Simon White is one of the preeminent scholars working on cosmological computer simulations, having done important work on galaxy formation and structure formation, the baryon fraction in clusters, and the structure of dark matter halos (Simon is the W in NFW halos). He was a Reader at the Institute of Astronomy at the University of Cambridge where we overlapped (it was my first postdoc) before he moved on to become the director of the Max Planck Institute for Astrophysics where he was mentor to many people now working in the field.

That’s a very short summary of a long and distinguished career; Simon has done lots of other things. I highlight these works because they came up at some point in our discussion. Davis, Efstathiou, Frenk, & White are the “gang of four” that was mentioned; around Cambridge I also occasionally heard them referred to as the Cold Dark Mafia. The baryon fraction of clusters was one of the key observations that led from SCDM to LCDM.

The subject of galaxy formation runs throughout our discussion. It is always a fraught issue how things form in astronomy. It is one thing to understand how stars evolve, once made; making them in the first place is another matter. Hard as that is to do in simulations, galaxy formation involves the extra element of dark matter in an expanding universe. Understanding how galaxies come to be is essential to predicting anything about what they are now, at least in the context of LCDM*. Both Simon and I have worked on this subject our entire careers, in very much the same framework if from different perspectives – by which I mean he is a theorist who does some observational work while I’m an observer who does some theory, not LCDM vs. MOND.

When Simon moved to Max Planck, the center of galaxy formation work moved as well – it seemed like he took half of Cambridge astronomy with him. This included my then-office mate, Houjun Mo. At one point I refer to the paper Mo & I wrote on the clustering of low surface brightness galaxies and how I expected them to reside in late-forming dark matter halos**. I often cite Mo, Mao, & White as a touchstone of galaxy formation theory in LCDM; they subsequently wrote an entire textbook about it. (I was already warning them then that I didn’t think their explanations of the Tully-Fisher relation were viable, at least not when combined with the effect we have subsequently named the diversity of rotation curve shapes.)

When I first began to worry that we were barking up the wrong tree with dark matter, I asked myself what could falsify it. It was hard to come up with good answers, and I worried it wasn’t falsifiable. So I started asking other people what would falsify cold dark matter. Most did not answer. They often had a shocked look like they’d never thought about it, and would rather not***. It’s a bind: no one wants it to be false, but most everyone accepts that for it to qualify as physical science it should be falsifiable. So it was a question that always provoked a record-scratch moment in which most scientists simply freeze up.

Simon was one of the first to give a straight answer to this question without hesitation, circa 1999. At that point it was clear that dark matter halos formed central density cusps in simulations; so those “cusps had to exist” in the centers of galaxies. At that point, we believed that to mean all galaxies. The question was complicated by the large dynamical contribution of stars in high surface brightness galaxies, but low surface brightness galaxies were dark matter dominated down to small radii. So we thought these were the ideal place to test the cusp hypothesis.

We no longer believe that. After many attempts at evasion, cold dark matter failed this test; feedback was invoked, and the goalposts started to move. There is now a consensus among simulators that feedback in intermediate mass galaxies can alter the inner mass distribution of dark matter halos. Exactly how this happens depends on who you ask, but it is at least possible to explain the absence of the predicted cusps. This goes in the right direction to explain some data, but by itself does not suffice to address the thornier question of why the distribution of baryons is predictive of the kinematics even when the mass is dominated by dark matter. This is why the discussion focused on the lowest mass galaxies where there hasn’t been enough star formation to drive the feedback necessary to alter cusps. Some of these galaxies can be described as having cusps, but probably not all. Thinking only in those terms elides the fact that MOND has a better record of predictive success. I want to know why this happens; it must surely be telling us something important about how the universe works.

The one point of factual disagreement we encountered had to do with the mass profile of galaxies at large radii as traced by gravitational lensing. It is always necessary to agree on the facts before debating their interpretation, so we didn’t press this far. Afterwards, Simon sent a citation to what he was talking about: this paper by Wang et al. (2016). In particular, look at their Fig. 4:

Fig. 4 of Wang et al. (2016). The excess surface density inferred from gravitational lensing for galaxies in different mass bins (data points) compared to mock observations of the same quantity made from within a simulation (lines). Looks like excellent agreement.

This plot quantifies the mass distribution around isolated galaxies to very large scales. There is good agreement between the lensing observations and the mock observations made within a simulation. Indeed, one can see an initial downward bend corresponding to the outer part of an NFW halo (the “one-halo term”), then an inflection to different behavior due to the presence of surrounding dark matter halos (the “two-halo term”). This is what Simon was talking about when he said gravitational lensing was in good agreement with LCDM.

I was thinking of a different, closely related result. I had in mind the work of Brouwer et al. (2021), which I discussed previously. Very recently, Dr. Tobias Mistele has made a revised analysis of these data. That’s worthy of its own post, so I’ll leave out the details, which can be found in this preprint. The bottom line is in Fig. 2, which shows the radial acceleration relation derived from gravitational lensing around isolated galaxies:

The radial acceleration relation from weak gravitational lensing (colored points) extending existing kinematic data (grey points) to lower acceleration corresponding to very large radii (~ 1 Mpc). The dashed line is the prediction of MOND. Looks like excellent agreement.

This plot quantifies the radial acceleration due to the gravitational potential of isolated galaxies to very low accelerations. There is good agreement between the lensing observations and the extrapolation of the radial acceleration relation predicted by MOND. There are no features until extremely low acceleration where there may be a hint of the external field effect. This is what I was talking about when I said gravitational lensing was in good agreement with MOND, and that the data indicated a single halo with an r⁻² density profile that extends far out where we ought to see the r⁻³ behavior of NFW.
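
For reference, the curve usually used to describe this relation is the fitting function of McGaugh, Lelli & Schombert (2016), which goes over to the Newtonian expectation at high acceleration and to the square-root (deep-MOND) behavior at low acceleration. Here is a minimal sketch of that function and its limits; it is the reference curve, not the Mistele et al. lensing analysis itself.

```python
# The radial acceleration relation fitting function:
# g_obs = g_bar / (1 - exp(-sqrt(g_bar / g_dagger))), g_dagger ~ 1.2e-10 m/s^2.
import numpy as np

g_dagger = 1.2e-10  # m/s^2

def rar(g_bar):
    """Observed acceleration as a function of the baryonic (Newtonian) one."""
    return g_bar / (1.0 - np.exp(-np.sqrt(g_bar / g_dagger)))

for g_bar in (1e-8, 1e-10, 1e-12, 1e-14):
    print(f"g_bar = {g_bar:.0e} m/s^2:  g_obs = {rar(g_bar):.2e}   "
          f"deep-MOND limit sqrt(g_bar*g_dagger) = {np.sqrt(g_bar * g_dagger):.2e}")
```

The lensing data in the figure reach accelerations well below g†, deep into the regime where the relation bends over to the square-root behavior.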

The two plots above use the same method applied to the same kind of data. They should be consistent, yet they seem to tell a different story. This is the point of factual disagreement Simon and I had, so we let it be. No point in arguing about the interpretation when you can’t agree on the facts.

I do not know why these results differ, and I’m not going to attempt to solve it here. I suspect it has something to do with sample selection. Both studies rely on isolated galaxies, but how do we define that? How well do we achieve the goal of identifying isolated galaxies? No galaxy is an island; at some level, there is always a neighbor. But is it massive enough to perturb the lensing signal, or can we successfully define samples of galaxies that are effectively isolated, so that we’re only looking at the gravitational potential of that galaxy and not that of it plus some neighbors? Looks like there is some work left to do to sort this out.

Stepping back from that, we agreed on pretty much everything else. MOND as a fundamental theory remains incomplete. LCDM requires us to believe that 95% of the mass-energy content of the universe is something unknown and perhaps unknowable. Dark matter has become familiar as a term but remains a mystery so long as it goes undetected in the laboratory. Perhaps it exists and cannot be detected – this is a logical possibility – but that would be the least satisfactory result possible: we might as well resume counting angels on the head of a pin.

The community has been working on these issues for a long time. I have been working on this for a long time. It is a big problem. There is lots left to do.


+I get a lot of “kill the messenger” from people who are not capable of discussing controversial topics without personal animus. A lot of it comes, inevitably, from people who assume they know more about the subject than I do but actually know much less. It is really amazing how many scientists equate me as a person with MOND as a theory without bothering to do any fact-checking. This is logical fallacy 101.

*The predictions of MOND are insensitive to the details of galaxy formation. Though galaxy formation is of course an interesting question, we don’t need to understand it in order to make predictions. All we need is the mass distribution that the kinematics respond to – we don’t need to know how it got that way. This is like the solar system, where it suffices to know Newton’s laws to compute orbits; we don’t need to know how the sun and planets formed. In contrast, one needs to know how a galaxy was assembled in LCDM to have any hope of predicting what its distribution of dark matter is and then using that to predict kinematics.

**The ideas Mo & I discussed thirty years ago have reappeared in the literature under the designation “assembly bias.”

***It was often accompanied by “why would you even ask that?” followed by a pained, constipated expression when they realized that every physical theory has to answer that question.

Holiday Concordance

Holiday Concordance

Screw the Earth and its smoking habit. The end of 2023 approaches, so let’s talk about the whole universe, which is its own special kind of mess.

As I’ve related before, our current cosmology, LCDM, was established over the course of the 1990s through a steady drip, drip, drip of results in observational cosmology – what Peebles calls the classic cosmological tests. There were many contributory results; I’m not going to attempt to go through them all. Important among them were the age problem, the realization that the mass density was lower than expected, and that there was more structure on large scales+ than predicted. These established LCDM in the mid-1990s as the “concordance model” – the most probable flavor of FLRW universe. Here is the key figure from Ostriker & Steinhardt depicting the then-allowed region of the density parameter and Hubble constant:

The addition of the cosmological constant to the standard model – replacing SCDM with LCDM – was a brain-wrenching ordeal. Lambda had long been anathema, and there was a region in which an open universe was possible, even reasonable (stripes over shade in the figure above). Moreover, this strange new LCDM made the seemingly inconceivable prediction that not only was the universe expanding [itself the older mind-bender brought to us by Hubble (and Slipher and Lemaître)], the expansion rate should be accelerating. This sounded like crazy talk at the time, so it was greeted with great rejoicing when corroborated by observations of Type Ia supernovae.

A further prediction that could distinguish LCDM from then-viable open models was the geometry of the universe. Open models have a negative curvature (k < 0, in which initially parallel light beams diverge) while the geometry in LCDM should be uniquely flat (Ωk = 0, in which initially parallel light beams remain parallel forever). Uniqueness is important, as it makes for a strong prediction, such as the location of the first peak of the acoustic power spectrum of the cosmic microwave background. In LCDM, this location was predicted to be ℓ ≈ 200 with little flexibility. For viable open models, it was more like ℓ ≈ 800 with a great deal of flexibility. The interpretation of the supernova data relied heavily on the assumption of a flat geometry, so I recall breathing a sigh of relief* when ℓ ≈ 200 was clearly observed.

Where are we now? I decided to reconstruct the Ostriker & Steinhardt plot with modern data. Here it is, with the axes swapped for reasons unrelated to this post. Deal with it.

The concordance region (white space) in the mass density-expansion rate space where the allowed regions (colored bands) of many constraints intersect. Illustrated constraints include a direct measurement of the Hubble constant, the age of the universe, the cluster baryon fraction, and large scale structure. Also shown are the best-fit values from CMB fits labeled by their date of publication (WMAP in orange; Planck in yellow). These follow the green line of constant ΩmH0³; combinations of parameters along the line are tolerable but regions away from it are strongly excluded.

There is lots to be said here. First, note the scale. As the accuracy of the data has improved, it has become possible to zoom in. My version of the figure is a wee postage stamp on that of Ostriker & Steinhardt. Nevertheless, the concordance region is in pretty much the same spot. Not exactly, of course; the biggest thing that has changed is that the age constraint is now completely incompatible with an open universe, so I haven’t bothered depicting it. Indeed, for the illustrated Hubble constant, the Hubble time (the age of a completely empty, “coasting” universe) is 13.4 Gyr. This is consistent with the illustrated age (13.80 ± 0.75 Gyr) only for Ωm ≈ 0, which is far off the left edge of the plot.
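
The arithmetic behind that statement, as a quick sketch using astropy (H0 = 73 km/s/Mpc is my placeholder for the illustrated value, chosen because it reproduces the quoted 13.4 Gyr Hubble time):

```python
# Hubble time 1/H0 and the age of an open, matter-only (no Lambda) universe
# for a few values of Omega_m. H0 = 73 km/s/Mpc is a placeholder.
from astropy.cosmology import LambdaCDM

H0 = 73.0
for om in (0.3, 0.1, 0.01):
    open_cosmo = LambdaCDM(H0=H0, Om0=om, Ode0=0.0)   # open geometry, no Lambda
    print(f"Omega_m = {om:4.2f}: age = {open_cosmo.age(0).to('Gyr').value:5.2f} Gyr")

hubble_time = LambdaCDM(H0=H0, Om0=0.3, Ode0=0.0).hubble_time.to('Gyr').value
print(f"Hubble time 1/H0 = {hubble_time:.1f} Gyr")
```

The age of an open, no-Lambda universe only approaches the 13.8 Gyr target as Ωm goes to zero, which is the point being made above.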

Second, the CMB best-fit values follow a line of constant ΩmH0³. This is a deep trench in χ² space. The region outside this trench is strongly excluded – it’s kinda the grand canyon of cosmology. Even a little off, and you’re standing on the rim looking a long way down, knowing that a much better fit is only a short step away. Once you’re in the valley of χ², you must hunt along its bottom to find the true minimum. In the mid-’00s, a decade after Ostriker & Steinhardt, the best fit fell smack in the middle of the concordance region defined by completely independent data. It was this additional concordance that impressed me most, more than the detailed CMB fits themselves. This convinced the vast majority of scientists practicing in the field that it had to be LCDM and could only be LCDM and nothing but LCDM.
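
To sketch what the trench looks like in the plane of the plot, one can hold Ωm·H0³ fixed through a reference point; the reference values below are placeholders in the neighborhood of recent CMB fits, not an actual fit.

```python
# Trace the degeneracy line Omega_m * H0^3 = constant through a reference
# point. The reference values are placeholders, not a fit.
import numpy as np

Om_ref, H0_ref = 0.31, 67.5        # placeholder reference point on the trench
const = Om_ref * H0_ref**3         # the combination held fixed

for H0 in np.arange(65.0, 76.0, 2.5):
    print(f"H0 = {H0:4.1f} km/s/Mpc  ->  Omega_m = {const / H0**3:.3f}")
```

Read off this way, pushing H0 up toward the locally measured ~73 km/s/Mpc forces Ωm well below 0.3 if one insists on staying in the trench, which is one way to see the tension.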

Since that time, the best-fit CMB value has wandered down the trench, away from the concordance region. These are the results that changed, not everything else. This temporal variation suggests a systematic in the interpretation of the CMB data rather than in the local distance scale.

I recall being at a conference (the Bright & Dark Universe in Naples in 2017) when the latest Planck results were announced. There was a palpable sense in the audience of having been whacked by a blunt object, like walking into a closed door you thought was open. We’d been doing precision cosmology for a long time and had settled on an answer informed by lots of independent lines of evidence, but they were telling us the One True answer was off over there. Not crazy far, but not consistent with the concordance we had come to expect. Worse, they had these crazy tiny error bars – not only were they getting an answer outside the concordance region, it was in tension with pretty much everything else. Not strong tension, but enough to make us all uncomfortable if not outright object. Indeed, there was a definite vibe that people were afraid to object. Not terrified, but nervous. Worried about being on the wrong side of the community. I get it. I know a lot about that.

People are remarkably talented at refashioning the past. Over the past five years, the Planck best-fit parameters have come to be synonymous with LCDM: all else is moot. Young scientists can be forgiven for not realizing it was ever otherwise, just as they might have been taught that cosmic acceleration was discovered by the supernova experiments totally out of the blue. These are convenient oversimplifications that elide so many pertinent events as to be tantamount to gaslighting. We refashion the past until there was never a serious controversy, then it seems strange that some of us think there still is. Sorry, not so fast, there definitely is: if you use the Planck value of the Hubble constant to estimate distances to local galaxies, you will get it wrong%, along with all distance-dependent quantities.

I’m old enough to remember a time when there was a factor of two uncertainty in the Hubble constant (50 vs. 100) and the age constraint was the most accurate one in this plot. Thanks to genuine progress, the Hubble constant is now the more precise. Consequently, of all the data one could plot above, this is the choice that matters most to where the concordance region falls. If I adopt our own estimate (H0 = 75.1 ± 2.3 km/s/Mpc), then the concordance band gets wider and slides up a little but is basically the same as above. If instead I adopt the lowest highly accurate value, H0 = 69.8 ± 0.8 km/s/Mpc, the window slides down, but not enough to be consistent with the Planck results. Indeed, it stays to the left of the CMB constraint, becoming inconsistent with the mass density as well as the expansion rate.

Dang it, now I want to make that plot. Processing… OK, here it is:

As above, but with a lower measurement of H0. Only the range of statistical uncertainty is illustrated as a systematic uncertainty corresponds to a calibration error that slides H0 up and down – i.e., the exact situation being illustrated relative to the figure above. These two plots illustrate the range of outcomes that are possible from slightly discordant direct modern measurements of the Hubble constant; it is hard to go lower. Doing so doesn’t really help as it would just shift the tension from H0 to Ωm.

Yes, as I expected: the allowed range slides down but remains to the left of the green line. It is less inconsistent with the Planck H0, but that isn’t the only thing that matters. It is also inconsistent with the matter density. Indeed, it misses the CMB-allowed trench entirely. There is no allowed FLRW universe here.

These are only two parameters. Though they are arguably the most important, there are others, all of which matter to CMB fits. These are difficult to visualize simultaneously. We could, for starters, plot the baryon density as a third axis. If we did so, the concordance region would become a 3D object. It would also get squeezed, depending on what we think the baryon density actually is. Even restricting ourselves to the above-plotted constraints, there is some tension between the cluster baryon fraction and the large scale structure constraint along the new third axis. I’m sure I could find more or less consistent values in the literature; that way lies the madness of cherry-picking.

There are many other constraints that could be added here. I’ve tried to stay consistent with the spirit of the original plot without making it illegible by overburdening it with lots and lots of data that all say pretty much the same thing. Nor do I wish to engage in cherry-picking. There are so many results out there that I’m sure one could find some combination that slides the allowed box this way or that – but only a little.

Whenever I’ve taught cosmology, I’ve made it a class exercise$ to investigate diagrams like this, with each student choosing an observational constraint to explore and champion. As a result, I’ve seen many variations on the above plots over the years, but since I first taught it in 1999 they’ve always been consistent with pretty much the same concordance region. It often happens that there is no concordance region at all: there are so many constraints that when you put them all together, nothing is left. We then debate which results to believe, or not, a process that has always been part of the practice of cosmology.

We have painted ourselves into a corner. The usual interpretation is that we have painted ourselves into the correct corner: we live in this strange LCDM universe. It is also possible that there really is nothing left, the concordance window is closed, and we’ve falsified FLRW cosmology. That is a fate most fear to contemplate, and it seems less likely than mistakes in some discordant results, so we inevitably go down the path of cognitive dissonance, giving more credence to results that are consistent with our favorite set of LCDM parameters and less to those that are not. This is widely done without contemplating the possibility that the weird FLRW parameters we’ve ended up with are weird because they are just an approximation to some deeper theory.

So, as 2023 winds to an end, we [still] know pretty well what the parameters of cosmology are. While the tension between H0 = 67 and 73 km/s/Mpc is real, it seems like small beans compared to the successful isolation of a narrow concordance window. Sure beats arguing between 50 and 100! Even deciding which concordance window is right seems like a small matter compared to the deeper issues raised by LCDM: what is the cold dark matter? Does it really exist, or is it just a mythical entity we’ve invented for the convenient calculation of cosmic quantities? What the heck do we even mean by Lambda? Does the whole picture hang together so well that it must be correct? Or can it be falsified? Has it already been? How do we decide?

I’m sure we’ll be arguing over these questions for a long time to come.


+Structure formation is often depicted as a great success of cosmology, but it was the failure of the previous standard model, SCDM, to predict enough structure on large scales that led to its demise and its replacement by LCDM, which now faces a similar problem. The observer’s experience has consistently been that there is more structure in place earlier and on larger scales than had been anticipated before its observation.

*I believe in giving theories credit where credit is due. Putting on a cosmologist’s hat, the location of the first peak was a great success of LCDM. It was the amplitude of the second peak that came as a great surprise – unless you can take off the cosmology hat and don a MOND hat – then it was predicted. What is surprising from that perspective is the amplitude of the third peak, which makes more sense in LCDM. It seems impossible to some people that I can wear both hats without my head exploding, so they simply assume I don’t think about it from their perspective, when in reality it is the other way around.

%As adjudicated by galaxies with distances known from direct measurements provided by Cepheids or the tip of the red giant branch or surface brightness fluctuations or geometric methods, etc., etc., etc.

$This is a great exercise, but only works if CMB results are excluded. There has to be some narrative suspense: will the various disparate lines of evidence indeed line up? Since CMB fits constrain all parameters simultaneously, and brook no dissent, they suck the joy away from everything else in the sky and drain all interest in the debate.

Global climate basics

Last time, I expressed extreme disappointment that fossil fuel executives had any role in leading the climate meeting COP28. This is a classic example of putting the fox in charge of the hen house. The issue is easily summed up:

It’s difficult to get a man to understand something when his salary depends on not understanding it.

Upton Sinclair

Setting aside economic self-interest and other human foibles, it is clear from the comments that the science is not as clear to everyone as it is to me. That’s fair; I’ve followed this subject for half a lifetime, and it is closely related to my own field.

Stars are fusion reactors surrounded by big balls of gas; understanding how they work was a major triumph of 20th century astrophysics. We understand these things. Planetary atmospheres are also balls of gas; there is some rich physics there, but the problem is in many ways simpler when they aren’t acting as the container for a giant fusion reactor. We understand these things. The atmospheres of Venus and Mars come up when teaching Astronomy 101 because these planets represent opposite extremes of climate change run amok. From that perspective, Earth is a nice problem to have. We understand these things.

It is easy to get distracted by irrelevant details. No climate model is ever perfect, but that doesn’t mean we don’t understand what’s going on. The issue is basic physics, which has been understood for well over a century. Not only is the physics incredibly clear; so too is the need to take collective action to ameliorate the effects of climate change. The latter has itself been clear since 1990+ at least.

The temperature of a planet is the balance between heating by the sun during the day and re-radiation of that heat at night. The effectiveness of both depends on the properties of the planet. What is the albedo? That is, how much of the incident radiation is reflected into space without heating? Once heated, how efficiently can the surface cool by radiating energy to space?

If a planet has no atmosphere, it is a straightforward calculation to find the balance point. If the Earth had no atmosphere, the average temperature would be much colder than it is, about -18 C. Thankfully, we have an atmosphere. There is a natural greenhouse effect – nothing to do with human activity – that makes the actual average temperature more like +15 C. I, for one, am grateful for this. It also means that changing the composition of the atmosphere will change the balance point.
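For anyone who wants to see just how straightforward that airless balance point is, here is a minimal sketch of the calculation (the solar constant and the nominal albedo of 0.3 are my inputs, not numbers from the post):

SOLAR_CONSTANT = 1361.0   # W/m2 of sunlight arriving at Earth's distance from the sun
ALBEDO = 0.3              # nominal fraction of sunlight reflected straight back to space
SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W/m2/K^4

# Absorbed sunlight averaged over the whole sphere (the factor of 4 is the ratio of the
# planet's surface area to the cross-section it presents to the sun).
absorbed = SOLAR_CONSTANT * (1.0 - ALBEDO) / 4.0

# Balance: absorbed sunlight = sigma * T^4 radiated back to space.
T_eq = (absorbed / SIGMA) ** 0.25
print(f"Airless equilibrium temperature: {T_eq:.1f} K ({T_eq - 273.15:.1f} C)")

This returns about 255 K, i.e. the -18 C quoted above.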

The bulk of Earth’s atmosphere is nitrogen and oxygen. These gases are transparent to the incoming optical radiation from the sun that heats the surface. They are also transparent to the outgoing infrared radiation that cools the surface. Despite composing the bulk of the atmosphere, they play basically zero role in the greenhouse effect. As far as climate goes, having only these gases in the atmosphere returns the same answer as the zero atmosphere case.

The natural greenhouse effect is entirely due to trace gases like water vapor and carbon dioxide. I note this because one reasonable-sounding falsehood that gets repeated a lot is that CO2 is a trace gas, so it can’t possibly make a difference. That’s like saying adding a small dash of poison to a beverage isn’t dangerous. Or that it makes no difference to draw a shade over a window. The shade may be much thinner than the glass of the window, but unlike the transparent glass, the shade is opaque. That’s the property greenhouse gases provide, even in trace quantities: they are opaque to the infrared radiation that is trying to cool the surface by escaping to space.

If we looked down on the Earth with eyes that saw in the infrared part of the spectrum where greenhouse gases trap heat, we wouldn’t see the surface of the planet. Instead, we’d see a hazy ball: the effective altitude in the atmosphere from which infrared radiation can escape to space. This isn’t a solid surface any more than the edge of a cloud is – to you and me. To the photons seeking escape, it is an effective barrier. Some don’t make it out.

The greenhouse gases are like a fog bank that has to be traversed before the heat carried by the infrared radiation can escape into space. If we add greenhouse gases to the atmosphere, it makes the fog bank thicker, effectively trapping more heat. At a basic level, the issue is that simple. The science is entirely settled; no one seriously* debates this. It has been known for over a century.
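To see how even a thin, infrared-opaque layer warms the surface, here is a minimal single-slab greenhouse sketch – a textbook toy model, not a climate model, with the emissivity simply tuned by me to land near the observed value:

T_EQ = 255.0      # airless equilibrium temperature in K (about -18 C, from above)
EPSILON = 0.78    # assumed effective infrared absorptivity of the atmospheric fog bank

# One-layer greenhouse: the atmosphere absorbs a fraction EPSILON of the surface's
# infrared radiation and re-emits half of it back downward, so the surface must run
# hotter than the airless value to push the same power out the top.
T_surface = T_EQ * (2.0 / (2.0 - EPSILON)) ** 0.25
print(f"Surface temperature: {T_surface:.1f} K ({T_surface - 273.15:.1f} C)")

That recovers roughly the +15 C average we actually enjoy; adding more greenhouse gas corresponds to nudging EPSILON upward, which raises the balance point.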

Article published in 1912 in the Braidwood Dispatch and Mining Journal, via the National Library of Australia

The leading greenhouse gas in Earth’s atmosphere is water vapor. You don’t need a fancy scientific instrument to detect this effect, just your own senses. High humidity leads to hot, sultry nights while low humidity allows rapid cooling. To feel this, visit a humid place like New Orleans and an arid one like the desert of the US west. These places feel very different at night even when their daytime temperatures are similar. The humid place cannot cool effectively because of the greenhouse effect provided by water vapor, and nighttime temperatures can remain unpleasantly high. In the dry desert, the temperature drops like a rock as soon as the sun sets, and it can get rather chilly even if it was baking hot all day long. I’ve personally experienced both conditions many times; the difference is stark and obvious.

The amount of water vapor the atmosphere can hold is a function of temperature, but in bulk it is always less than half a percent. That trace gas is nevertheless 100% of what you care about in the morning weather forecast, as it leads to rain, snow, sleet, hail, cloud cover, and all the other weather phenomena that make life near the triple point of water interesting. Indeed, clouds increase the albedo of the planet, reflecting some of the incoming solar radiation, so water in the atmosphere prevents some heating as well as helping to retain warmth once heated. This is pretty much in balance, as the limit on how much water vapor the atmosphere can hold means that equilibrium is achieved on a short time scale: too much humidity, and it rains. The sources and sinks of H2O in the atmosphere balance out on short timescales readily perceptible to humans. It’s what we call weather.
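The temperature dependence of that holding capacity is steep. Here is a minimal sketch using the Magnus approximation for the saturation vapor pressure of water (a standard empirical fit, included only to show the roughly doubling-per-10-C behavior):

import math

def saturation_vapor_pressure(T_celsius):
    # Magnus approximation for the saturation vapor pressure of water, in hPa.
    return 6.112 * math.exp(17.62 * T_celsius / (243.12 + T_celsius))

for T in (0, 10, 20, 30):
    print(f"{T:3d} C: {saturation_vapor_pressure(T):5.1f} hPa")

The capacity roughly doubles for every 10 C of warming, which is why a hot, humid night has so much more greenhouse-active water vapor overhead than a cold desert one.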

The next most important greenhouse gas is CO2. That too has a natural level with sources and sinks. The issue that induces human-caused climate change is the extra CO2 we put in the atmosphere by burning coal, oil, etc. for the energy it provides. This does not balance out on a short timescale, so there is a cumulative effect on the climate, as anticipated in 1912.

Producing energy is a good thing; no one here is advocating that we stop doing this and return to living like cavemen. Heck, even cavemen had an environmental impact: they burned enough wood to blacken many a cave roof. Human activity has always left a mark; the problem today is that there are 8+ billion of us doing a lot more than making campfires. That adds up to a measurable change in the composition of the atmosphere.

The natural pre-industrial level of CO2 was about 277 parts per million (ppm). Here is a graph of the CO2 content of the atmosphere over the past few centuries, extending back to before the onset of the industrial revolution, when our collective experiment in atmospheric physics got going. We know how much carbon we’ve burned (that’s economic activity, with profits and receipts; we know this number quite well) and we can measure how much CO2 is in the atmosphere directly. They ramp up together.

Mass of CO2 in the atmosphere (in gigatonnes) since 1700. Modern measurements (blue line) come from the Mauna Loa observatory courtesy of the NOAA Global Monitoring Laboratory; older measurements (black line) come from the Law Dome Antarctic ice core data. The red line is the cumulative CO2 added to the atmosphere by human activity. I’ve added the pre-industrial value to this in the upper (thin) red line to show how it compares with the measured CO2 content.

There is lots that can be said about this plot. Just some basic points: the amount of CO2 in the atmosphere has gone up as we have burned coal and oil to generate energy. We have measurably changed the composition of the atmosphere we all breathe. The current CO2 content of the atmosphere is 424 ppm, which is much larger than the pre-industrial level of 277 ppm. That by itself ought to give one pause: we are conducting an uncontrolled experiment in atmospheric physics on a global scale. That seems like a bad idea, even if we didn’t understand heat propagation in the atmosphere, which we do.
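As a rough cross-check on the plot above, the ppm increase can be converted into a mass of CO2 using the total mass of the atmosphere and the molar masses involved (my own back-of-the-envelope arithmetic, not a figure from the post):

ATM_MASS = 5.15e18            # total mass of Earth's atmosphere in kg
M_AIR, M_CO2 = 28.97, 44.01   # mean molar mass of air and of CO2, in g/mol

# One ppm of CO2 (by volume, i.e. by mole) corresponds to this many gigatonnes of CO2:
gt_per_ppm = ATM_MASS * 1e-6 * (M_CO2 / M_AIR) / 1e12
print(f"1 ppm of CO2 is about {gt_per_ppm:.1f} Gt")

# Net increase quoted in the text: 277 ppm -> 424 ppm
print(f"Net increase: about {(424 - 277) * gt_per_ppm:.0f} Gt of CO2 now in the atmosphere")

That works out to roughly 8 Gt of CO2 per ppm, or on the order of a thousand gigatonnes of extra CO2 retained so far – less than the cumulative red line in the figure because, as noted below, some of what we emit is absorbed by the ocean.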

Not only has the amount of CO2 in the atmosphere increased as we’ve burned things, it is accumulating. There are natural sinks, which is why the extra amount of CO2 in the atmosphere is less than what we’ve added: not all of it sticks around. Much of it has been absorbed by the ocean, which is acidifying as a result. But lots of CO2 persists in the atmosphere: the timescale for it to “rain out” is much longer than for water. It will take many decades and probably centuries to restore anything resembling equilibrium. We aren’t just adding CO2 to the atmosphere, we’re making a long-term investment in having it there. Future generations will have to contend with the consequences of what we’ve already done.

What have we already done? I’ve outlined the basic physics; let’s now check the predictions of one of the earliest forecasts. This is from a 1982 report generated by Exxon scientists:

Forecast CO2 content of the atmosphere (upper line; left axis) and the corresponding temperature change (lower line; right axis). I’ve added current values for both CO2 (green) and the temperature anomaly (orange). Looks like they pretty much nailed it. Note also that the null hypothesis of no climate change, i.e., a constant temperature with increasing CO2 content, is strongly rejected.

The study made over forty years ago accurately forecast where we are today. These predictions have repeatedly been corroborated. Some models may miss minor details here and there, but the basic picture is crystal clear. Anyone who tells you otherwise has some fossil fuels to sell.

Enough has been written on this subject; I won’t suggest solutions nor delve into likely impacts. But there is absolutely no doubt that climate change is real and that we caused it. None. That this simple, plain fact is not obvious to everyone at this point is a credit to the power of disinformation and propaganda. The best course forward from here is debatable. Pretending like it isn’t a problem is straight-up reality denial.


+It has become a trope of wingnut politics in the U.S. that scientists only say climate change is real so they can get research grants. That’s ridiculous on many levels. One reason that such grants exist is that right wing politicians asked for more research. This was a delaying tactic employed in the early 1990s by then-president and oil magnate George H. W. Bush.

Fresh off the success of regulatory repair to the ozone hole problem in the late 1980s, it was reasonable to hope that we could start tackling the threat of climate change. This was a much bigger problem encompassing a broader range of human activity, but the basic science is far simpler than the atmospheric chemistry that threatened ozone. Industries that didn’t want to be regulated whined about that as usual, but no one seriously questioned the science. After the usual wailing and gnashing of teeth, appropriate regulatory action was taken, and it worked.

When it came to doing the same thing with the oil industry when an oil baron was president, well, harrumph harrumph, more research was needed. The first Bush was a Republican, but he wasn’t a backwards science-denying goon, so he offered to fund more research. It was an obvious delaying tactic, but the argument in favor of it was to make the case more convincing. So the science community was like, sure, the basic answer is already clear, but there are things that we could understand better, so we’ll do more research if that helps you to also understand the problem. But it hasn’t helped people who don’t want to understand to do so, and it never will, because the problem is with them, not with the science. So now, thirty years on from Bush I, the same political party that demanded more research be done routinely attacks scientists for doing the research it asked them to do.

Sorry, not sorry: just because you don’t like the answer science gives doesn’t make it wrong. It is well past time for climate denying snowflakes to stop having emotional meltdowns and grow up already.

*Sometimes it is asserted that the opacity of CO2 is already saturated, so adding more doesn’t matter. Yes to the first part, no to the second. Even at saturation we can still make the fog bank thicker by adding more CO2 – just ask Venus. Indeed, we’re dang lucky that the CO2 bands are already saturated; if not for that, the response of the climate to adding as much CO2 as we have would be much stronger. If these features were not saturated the response would be linear instead of logarithmic, so the temperature would have already increased by about an extra 7 C, not the mere 1 C we’ve so far** accomplished.
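The standard way to express that saturated, logarithmic behavior is the simplified forcing formula ΔF ≈ 5.35 ln(C/C0) W/m2. A minimal sketch follows, with a nominal climate sensitivity of my own choosing just to indicate the scale of the effect:

import math

C0, C = 277.0, 424.0   # pre-industrial and current CO2 in ppm, as quoted above
SENSITIVITY = 0.8      # assumed warming in K per W/m2 of forcing (nominal value)

# Because the CO2 bands are largely saturated, the forcing grows only
# logarithmically with concentration (the widely used Myhre et al. 1998 fit):
forcing = 5.35 * math.log(C / C0)
print(f"Radiative forcing: {forcing:.1f} W/m2")
print(f"Implied equilibrium warming: about {SENSITIVITY * forcing:.1f} C")

That comes out to about 2.3 W/m2 and an eventual warming near 1.8 C; the roughly 1 C realized so far is smaller because the oceans take time to catch up.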

**Just how much we’ve added depends on how you define “before.” Modern studies often seem to adopt the average temperature measured between 1980 and 2000, presumably because the data with which to do so are very good. This gives an increase since then around 1.1 C, which is a remarkable amount of growth in just a few decades: we’ve tipped the climate system out of anything resembling equilibrium hard and fast. Of course, the impact of human activity was already palpable before 1980, so the total change since the industrial revolution is closer to 1.5 C. We’re not quite to that arbitrary threshold yet, but I see no way to avoid blowing past it. Talk of doing so is predicated on giving us a half degree mulligan by defining “average” during a period that is not average. So if you think portrayals of the problem are exaggerated, it is actually already worse than generally depicted.