Holiday Concordance

Screw the Earth and its smoking habit. The end of 2023 approaches, so let’s talk about the whole universe, which is its own special kind of mess.

As I’ve related before, our current cosmology, LCDM, was established over the course of the 1990s through a steady drip, drip, drip of results in observational cosmology – what Peebles calls the classic cosmological tests. There were many contributory results; I’m not going to attempt to go through them all. Important among them were the age problem, the realization that the mass density was lower than expected, and that there was more structure on large scales+ than predicted. These established LCDM in the mid-1990s as the “concordance model” – the most probable flavor of FLRW universe. Here is the key figure from Ostriker & Steinhardt depicting the then-allowed region of the density parameter and Hubble constant:

The addition of the cosmological constant to the standard model – replacing SCDM with LCDM – was a brain-wrenching ordeal. Lambda had long been anathema, and there was a region in which an open universe was possible, even reasonable (stripes over shade in the figure above). Moreover, this strange new LCDM made the seemingly inconceivable prediction that not only was the universe expanding [itself the older mind-bender brought to us by Hubble (and Slipher and Lemaître)], the expansion rate should be accelerating. This sounded like crazy talk at the time, so it was greeted with great rejoicing when corroborated by observations of Type Ia supernovae.

A further prediction that could distinguish LCDM from then-viable open models was the geometry of the universe. Open models have a negative curvature (k < 0, in which initially parallel light beams diverge) while the geometry in LCDM should be uniquely flat (Ωk = 0, in which initially parallel light beams remain parallel forever). Uniqueness is important, as it makes for a strong prediction, such as the location of the first peak of the acoustic power spectrum of the cosmic microwave background. In LCDM, this location was predicted to be ℓ ≈ 200 with little flexibility. For viable open models, it was more like ℓ ≈ 800 with a great deal of flexibility. The interpretation of the supernova data relied heavily on the assumption of a flat geometry, so I recall breathing a sigh of relief* when ℓ ≈ 200 was clearly observed.

Where are we now? I decided to reconstruct the Ostriker & Steinhardt plot with modern data. Here it is, with the axes swapped for reasons unrelated to this post. Deal with it.

The concordance region (white space) in the mass density-expansion rate space where the allowed regions (colored bands) of many constraints intersect. Illustrated constraints include a direct measurement of the Hubble constant, the age of the universe, the cluster baryon fraction, and large scale structure. Also shown are the best-fit values from CMB fits labeled by their date of publication (WMAP in orange; Planck in yellow). These follow the green line of constant ΩmH0³; combinations of parameters along the line are tolerable but regions away from it are strongly excluded.

There is lots to be said here. First, note the scale. As the accuracy of data have improved, it has become possible to zoom in. My version of the figure is a wee postage stamp on that of Ostriker & Steinhardt. Nevertheless, the concordance region is in pretty much the same spot. Not exactly, of course; the biggest thing that has changed is that the age constraint is now completely incompatible with an open universe, so I haven’t bothered depicting it. Indeed, for the illustrated Hubble constant, the Hubble time (the age of a completely empty, “coasting” universe) is 13.4 Gyr. This is consistent with the illustrated age (13.80 ± 0.75 Gyr) only for Ωm ≈ 0, which is far off the left edge of the plot.
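The Hubble-time figure quoted above is easy to check. A few lines of Python suffice (the unit conversions are standard values; the H0 = 73 km/s/Mpc input is my assumption, chosen to match the 13.4 Gyr quoted above):

```python
# Hubble time (age of a completely empty, coasting universe) from H0.
# H0 = 73 km/s/Mpc is assumed here to match the ~13.4 Gyr quoted above.
KM_PER_MPC = 3.0857e19   # kilometers in one megaparsec
SEC_PER_GYR = 3.156e16   # seconds in one gigayear

def hubble_time_gyr(H0_kms_mpc):
    """Convert 1/H0 to gigayears."""
    H0_per_sec = H0_kms_mpc / KM_PER_MPC   # H0 in 1/s
    return 1.0 / H0_per_sec / SEC_PER_GYR

print(f"{hubble_time_gyr(73.0):.1f} Gyr")  # about 13.4 Gyr
```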

Second, the CMB best-fit values follow a line of constant ΩmH0³. This is a deep trench in χ² space. The region outside this trench is strongly excluded – it’s kinda the grand canyon of cosmology. Even a little off, and you’re standing on the rim looking a long way down, knowing that a much better fit is only a short step away. Once you’re in the valley of χ², you have to hunt along its bottom to find the true minimum. In the mid-’00s, a decade after Ostriker & Steinhardt, the best fit fell smack in the middle of the concordance region defined by completely independent data. It was this additional concordance that impressed me most, more than the detailed CMB fits themselves. This convinced the vast majority of scientists practicing in the field that it had to be LCDM and could only be LCDM and nothing but LCDM.
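To see just how flat the bottom of that trench is, one can multiply out ΩmH0³ for representative WMAP-era and Planck-era best fits. The parameter values below are approximate round numbers of my choosing for illustration, not the exact published fits:

```python
# The CMB degeneracy: best fits drift along a line of constant Omega_m * H0^3.
# Parameter values are approximate best fits (an assumption for illustration).
fits = {
    "WMAP (mid-2000s)": (0.27, 71.0),    # (Omega_m, H0 in km/s/Mpc)
    "Planck (2010s)":   (0.315, 67.4),
}

for label, (om, h0) in fits.items():
    print(f"{label}: Omega_m * H0^3 = {om * h0**3:.0f}")
# The two products agree to well under 1%, even though H0 itself
# moved by ~5% -- that's the trench.
```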

Since that time, the best-fit CMB value has wandered down the trench, away from the concordance region. These are the results that changed, not everything else. This temporal variation suggests a systematic in the interpretation of the CMB data rather than in the local distance scale.

I recall being at a conference (the Bright & Dark Universe in Naples in 2017) when the latest Planck results were announced. There was a palpable sense in the audience of having been whacked by a blunt object, like walking into a closed door you thought was open. We’d been doing precision cosmology for a long time and had settled on an answer informed by lots of independent lines of evidence, but they were telling us the One True answer was off over there. Not crazy far, but not consistent with the concordance we had come to expect. Worse, they had these crazy tiny error bars – not only were they getting an answer outside the concordance region, it was in tension with pretty much everything else. Not strong tension, but enough to make us all uncomfortable if not outright object. Indeed, there was a definite vibe that people were afraid to object. Not terrified, but nervous. Worried about being on the wrong side of the community. I get it. I know a lot about that.

People are remarkably talented at refashioning the past. Over the past five years, the Planck best-fit parameters have come to be synonymous with LCDM: all else is moot. Young scientists can be forgiven for not realizing it was ever otherwise, just as they might have been taught that cosmic acceleration was discovered by the supernova experiments totally out of the blue. These are convenient oversimplifications that elide so many pertinent events as to be tantamount to gaslighting. We refashion the past until there was never a serious controversy, then it seems strange that some of us think there still is. Sorry, not so fast, there definitely is: if you use the Planck value of the Hubble constant to estimate distances to local galaxies, you will get it wrong%, along with all distance-dependent quantities.

I’m old enough to remember a time when there was a factor of two uncertainty in the Hubble constant (50 vs. 100) and the age constraint was the most accurate one in this plot. Thanks to genuine progress, the Hubble constant is now the more precise. Consequently, of all the data one could plot above, this is the choice that matters most to where the concordance region falls. If I adopt our own estimate (H0 = 75.1 ± 2.3 km/s/Mpc), then the concordance band gets wider and slides up a little but is basically the same as above. If instead I adopt the lowest highly accurate value, H0 = 69.8 ± 0.8 km/s/Mpc, the window slides down, but not enough to be consistent with the Planck results. Indeed, it stays to the left of the CMB constraint, becoming inconsistent with the mass density as well as the expansion rate.

Dang it, now I want to make that plot. Processing… OK, here it is:

As above, but with a lower measurement of H0. Only the range of statistical uncertainty is illustrated, since the systematic uncertainty corresponds to a calibration error that slides H0 up and down – i.e., the exact situation being illustrated relative to the figure above. These two plots illustrate the range of outcomes that are possible from slightly discordant direct modern measurements of the Hubble constant; it is hard to go lower. Doing so doesn’t really help, as it would just shift the tension from H0 to Ωm.

Yes, as I expected: the allowed range slides down but remains to the left of the green line. It is less inconsistent with the Planck H0, but that isn’t the only thing that matters. It is also inconsistent with the matter density. Indeed, it misses the CMB-allowed trench entirely. There is no allowed FLRW universe here.

These are only two parameters. Though arguably the most important, there are others, all of which matter to CMB fits. These are difficult to visualize simultaneously. We could, for starters, plot the baryon density as a third axis. If we did so, the concordance region would become a 3D object. It would also get squeezed, depending on what we think the baryon density actually is. Even restricting ourselves to the above-plotted constraints, there is some tension between the cluster baryon fraction and large scale structure constraint along the new third axis. I’m sure I could find in the literature more or less consistent values; this way the madness of cherry-picking lies.

There are many other constraints that could be added here. I’ve tried to stay consistent with the spirit of the original plot without making it illegible by overburdening it with lots and lots of data that all say pretty much the same thing. Nor do I wish to engage in cherry-picking. There are so many results out there that I’m sure one could find some combination that slides the allowed box this way or that – but only a little.

Whenever I’ve taught cosmology, I’ve made it a class exercise$ to investigate diagrams like this, with each student choosing an observational constraint to explore and champion. As a result, I’ve seen many variations on the above plots over the years, but since I first taught it in 1999 they’ve always been consistent with pretty much the same concordance region. It often happens that there is no concordance region at all: there are so many constraints that when you put them all together, nothing is left. We then debate which results to believe, or not, a process that has always been part of the practice of cosmology.

We have painted ourselves into a corner. The usual interpretation is that we have painted ourselves into the correct corner: we live in this strange LCDM universe. It is also possible that there really is nothing left, the concordance window is closed, and we’ve falsified FLRW cosmology. That is a fate most fear to contemplate, and it seems less likely than mistakes in some discordant results, so we inevitably go down the path of cognitive dissonance, giving more credence to results that are consistent with our favorite set of LCDM parameters and less to those that are not. This is widely done without contemplating the possibility that the weird FLRW parameters we’ve ended up with are weird because they are just an approximation to some deeper theory.

So, as 2023 winds to an end, we [still] know pretty well what the parameters of cosmology are. While the tension between H0 = 67 and 73 km/s/Mpc is real, it seems like small beans compared to the successful isolation of a narrow concordance window. Sure beats arguing between 50 and 100! Even deciding which concordance window is right seems like a small matter compared to the deeper issues raised by LCDM: what is the cold dark matter? Does it really exist, or is it just a mythical entity we’ve invented for the convenient calculation of cosmic quantities? What the heck do we even mean by Lambda? Does the whole picture hang together so well that it must be correct? Or can it be falsified? Has it already been? How do we decide?

I’m sure we’ll be arguing over these questions for a long time to come.


+Structure formation is often depicted as a great success of cosmology, but it was the failure of the previous standard model, SCDM, to predict enough structure on large scales that led to its demise and its replacement by LCDM, which now faces a similar problem. The observer’s experience has consistently been that there is more structure in place earlier and on larger scales than had been anticipated before its observation.

*I believe in giving theories credit where credit is due. Putting on a cosmologist’s hat, the location of the first peak was a great success of LCDM. It was the amplitude of the second peak that came as a great surprise – unless you can take off the cosmology hat and don a MOND hat – then it was predicted. What is surprising from that perspective is the amplitude of the third peak, which makes more sense in LCDM. It seems impossible to some people that I can wear both hats without my head exploding, so they seem to simply assume I don’t think about it from their perspective when in reality it is the other way around.

%As adjudicated by galaxies with distances known from direct measurements provided by Cepheids or the tip of the red giant branch or surface brightness fluctuations or geometric methods, etc., etc., etc.

$This is a great exercise, but only works if CMB results are excluded. There has to be some narrative suspense: will the various disparate lines of evidence indeed line up? Since CMB fits constrain all parameters simultaneously, and brook no dissent, they suck the joy away from everything else in the sky and drain all interest in the debate.

Global climate basics

Last time, I expressed extreme disappointment that fossil fuel executives had any role in leading the climate meeting COP28. This is a classic example of putting the fox in charge of the hen house. The issue is easily summed up:

It’s difficult to get a man to understand something when his salary depends on not understanding it.

Upton Sinclair

Setting aside economic self-interest and other human foibles, it is clear from the comments that the science is not as clear to everyone as it is to me. That’s fair; I’ve followed this subject for half a lifetime, and it is closely related to my own field.

Stars are fusion reactors surrounded by big balls of gas; understanding how they work was a major triumph of 20th century astrophysics. We understand these things. Planetary atmospheres are also balls of gas; there is some rich physics there, but the problem is in many ways simpler when they aren’t acting as the container for a giant fusion reactor. We understand these things. The atmospheres of Venus and Mars come up when teaching Astronomy 101 because these planets represent opposite extremes of climate change run amok. From that perspective, Earth is a nice problem to have. We understand these things.

It is easy to get distracted by irrelevant details. No climate model is ever perfect, but that doesn’t mean we don’t understand what’s going on. The issue is basic physics, which has been understood for well over a century. Not only is the physics incredibly clear; so too is the need to take collective action to ameliorate the effects of climate change. The latter has itself been clear since 1990+ at least.

The temperature of a planet is the balance between heating by the sun during the day and re-radiation of that heat at night. The effectiveness of both depends on the properties of the planet. What is the albedo? That is, how much of the incident radiation is reflected into space without heating? Once heated, how efficiently can the heated surface cool by radiating energy to space?

If a planet has no atmosphere, it is a straightforward calculation to find the balance point. If the Earth had no atmosphere, the average temperature would be much colder than it is, about -18 C. Thankfully, we have an atmosphere. There is a natural greenhouse effect – nothing to do with human activity – that makes the actual average temperature more like +15 C. I, for one, am grateful for this. It also means that changing the composition of the atmosphere will change the balance point.

The bulk of Earth’s atmosphere is nitrogen and oxygen. These gases are transparent to the incoming optical radiation from the sun that heats the surface. They are also transparent to the outgoing infrared radiation that cools the surface. Despite composing the bulk of the atmosphere, they play basically zero role in the greenhouse effect. As far as climate goes, having only these gases in the atmosphere returns the same answer as the zero atmosphere case.

The natural greenhouse effect is entirely due to trace gases like water vapor and carbon dioxide. I note this because one reasonable-sounding falsehood that gets repeated a lot is that CO2 is a trace gas, so it can’t possibly make a difference. That’s like saying adding a small dash of poison to a beverage isn’t dangerous. Or that it makes no difference to draw a shade over a window. The shade may be much thinner than the glass of the window, but unlike the transparent glass, the shade is opaque. That’s the property greenhouse gases provide, even in trace quantities: they are opaque to the infrared radiation that is trying to cool the surface by escaping to space.

If we looked down on the Earth with eyes that saw in the infrared part of the spectrum where greenhouse gases trap heat, we wouldn’t see the surface of the planet. Instead, we’d see a hazy ball: the effective altitude in the atmosphere from which infrared radiation can escape to space. This isn’t a solid surface any more than the edge of a cloud is – to you and me. To the photons seeking escape, it is an effective barrier. Some don’t make it out.

The greenhouse gases are like a fog bank that has to be traversed before the heat carried by the infrared radiation can escape into space. If we add greenhouse gases to the atmosphere, it makes the fog bank thicker, effectively trapping more heat. At a basic level, the issue is that simple. The science is entirely settled; no one seriously* debates this. It has been known for over a century.

Article published in 1912 in the Braidwood Dispatch and Mining Journal, via the National Library of Australia

The leading greenhouse gas in Earth’s atmosphere is water vapor. You don’t need a fancy scientific instrument to detect this effect, just your own senses. High humidity leads to hot, sultry nights while low humidity allows rapid cooling. To feel this, visit a humid place like New Orleans and an arid one like the desert of the US west. These places feel very different at night even when their daytime temperatures are similar. The humid place cannot cool effectively because of the greenhouse effect provided by water vapor, and nighttime temperatures can remain unpleasantly high. In the dry desert, the temperature drops like a rock as soon as the sun sets, and it can get rather chilly even if it was baking hot all day long. I’ve personally experienced both conditions many times; the difference is stark and obvious.

The amount of water vapor the atmosphere can hold is a function of temperature, but in bulk it is always less than half a percent. That trace gas is nevertheless 100% of what you care about in the morning weather forecast, as it leads to rain, snow, sleet, hail, cloud cover, and all the other weather phenomena that make life near the triple point of water interesting. Indeed, clouds increase the albedo of the planet, reflecting some of the incoming solar radiation, so water in the atmosphere prevents some heating as well as helping to retain warmth once heated. This is pretty much in balance, as the limit on how much water vapor the atmosphere can hold means that equilibrium is achieved on a short time scale: too much humidity, and it rains. The sources and sinks of H2O in the atmosphere balance out on short timescales readily perceptible to humans. It’s what we call weather.
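The temperature dependence of that holding capacity can be sketched with the empirical Magnus approximation for saturation vapor pressure (the coefficients are the commonly quoted ones; treat the formula and values as an assumption for illustration):

```python
import math

# How much water vapor air can hold rises steeply with temperature.
# Magnus approximation for saturation vapor pressure: T in Celsius,
# result in hPa. Coefficients are standard empirical values (assumed).
def saturation_vapor_pressure(t_celsius):
    return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

for t in (0, 10, 20, 30):
    print(f"{t:3d} C: {saturation_vapor_pressure(t):5.1f} hPa")
# The capacity roughly doubles for every 10 C of warming -- part of why
# humid New Orleans nights stay warm while desert nights cool off fast.
```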

The next most important greenhouse gas is CO2. That too has a natural level with sources and sinks. The issue that induces human-caused climate change is the extra CO2 we put in the atmosphere by burning coal, oil, etc. for the energy it provides. This does not balance out on a short timescale, so there is a cumulative effect on the climate, as anticipated in 1912.

Producing energy is a good thing; no one here is advocating that we stop doing this and return to living like cavemen. Heck, even cavemen had an environmental impact: they burned enough wood to blacken many a cave roof. Human activity has always left a mark; the problem today is that there are 8+ billion of us doing a lot more than making campfires. That adds up to a measurable change in the composition of the atmosphere.

The natural pre-industrial level of CO2 was about 277 parts per million (ppm). Here is a graph of the CO2 content of the atmosphere over the past few centuries, extending back to before the onset of the industrial revolution when our collective experiment in atmospheric physics got going. We know how much carbon we’ve burned (that’s economic activity with profits and receipts, so we know this number quite well) and we can measure how much CO2 is in the atmosphere directly. They ramp up together.

Mass of CO2 in the atmosphere (in gigatonnes) since 1700. Modern measurements (blue line) come from the Mauna Loa observatory courtesy of the NOAA Global Monitoring Laboratory; older measurements (black line) come from the Law Dome Antarctic ice core data. The red line is the cumulative CO2 added to the atmosphere by human activity. I’ve added the pre-industrial value to this in the upper (thin) red line to show how it compares with the measured CO2 content.

There is lots that can be said about this plot. Just some basic points: the amount of CO2 in the atmosphere has gone up as we have burned coal and oil to generate energy. We have measurably changed the composition of the atmosphere we all breathe. The current CO2 content of the atmosphere is 424 ppm, which is much larger than the pre-industrial level of 277 ppm. That by itself ought to give one pause: we are conducting an uncontrolled experiment in atmospheric physics on a global scale. That seems like a bad idea, even if we didn’t understand heat propagation in the atmosphere, which we do.

Not only has the amount of CO2 in the atmosphere increased as we’ve burned things, it is accumulating. There are natural sinks, which is why the extra amount of CO2 in the atmosphere is less than what we’ve added: not all of it sticks around. Much of it has been absorbed by the ocean, which is acidifying as a result. But lots of CO2 persists in the atmosphere: the timescale for it to “rain out” is much longer than for water. It will take many decades and probably centuries to restore anything resembling equilibrium. We aren’t just adding CO2 to the atmosphere, we’re making a long-term investment in having it there. Future generations will have to contend with the consequences of what we’ve already done.

What have we already done? I’ve outlined the basic physics; let’s now check the predictions of one of the earliest forecasts. This is from a 1982 report generated by Exxon scientists:

Forecast atmospheric CO2 content of the atmosphere (upper line; left axis) and the corresponding temperature change (lower line; right axis). I’ve added current values for both CO2 (green) and the temperature anomaly (orange). Looks like they pretty much nailed it. Note also that the null hypothesis of no climate change, i.e., a constant temperature with increasing CO2 content, is strongly rejected.

The study made over forty years ago accurately forecast where we are today. These predictions have repeatedly been corroborated. Some models may miss minor details here and there, but the basic picture is crystal clear. Anyone who tells you otherwise has some fossil fuels to sell.

Enough has been written on this subject; I won’t suggest solutions nor delve into likely impacts. But there is absolutely no doubt that climate change is real and that we caused it. None. That this simple, plain fact is not obvious to everyone at this point is a credit to the power of disinformation and propaganda. The best course forward from here is debatable. Pretending like it isn’t a problem is straight-up reality denial.


+It has become a trope of wingnut politics in the U.S. that scientists only say climate change is real so they can get research grants. That’s ridiculous on many levels. One reason that such grants exist is that right wing politicians asked for more research. This was a delaying tactic employed in the early 1990s by then-president and oil magnate George H. W. Bush.

Fresh off the success of regulatory repair to the ozone hole problem in the late 1980s, it was reasonable to hope that we could start tackling the threat of climate change. This was a much bigger problem encompassing a broader range of human activity, but the basic science is far simpler than the atmospheric chemistry that threatened ozone. Industries that didn’t want to be regulated whined about that as usual, but no one seriously questioned the science. After the usual wailing and gnashing of teeth, appropriate regulatory action was taken, and it worked.

When it came to doing the same thing with the oil industry when an oil baron was president, well, harrumph harrumph, more research was needed. The first Bush was a Republican, but he wasn’t a backwards science-denying goon, so he offered to fund more research. It was an obvious delaying tactic, but the argument in favor of it was to make the case more convincing. So the science community was like, sure, the basic answer is already clear, but there are things that we could understand better, so we’ll do more research if that helps you to also understand the problem. But it hasn’t helped people who don’t want to understand to do so, and never will, because the problem is with them, not with the science. So now, thirty years on from Bush I, the same political party that demanded more research be done routinely attacks scientists for doing the very research it asked for.

Sorry, not sorry: just because you don’t like the answer science gives doesn’t make it wrong. It is well past time for climate denying snowflakes to stop having emotional meltdowns and grow up already.

*Sometimes it is asserted that the opacity of CO2 is already saturated, so adding more doesn’t matter. Yes to the first part, no to the second. Even at saturation we can still make the fog bank thicker by adding more CO2 – just ask Venus. Indeed, we’re dang lucky that the CO2 bands are already saturated; if not for that, the response of the climate to adding as much CO2 as we have would be much stronger. If these features were not saturated the response would be linear instead of logarithmic, so the temperature would have already increased by about an extra 7 C, not the mere 1 C we’ve so far** accomplished.

**Just how much we’ve added depends on how you define “before.” Modern studies often seem to adopt the average temperature measured between 1980 and 2000, presumably because the data with which to do so are very good. This gives an increase since then around 1.1 C, which is a remarkable amount of growth in just a few decades: we’ve tipped the climate system out of anything resembling equilibrium hard and fast. Of course, the impact of human activity was already palpable before 1980, so the total change since the industrial revolution is closer to 1.5 C. We’re not quite to that arbitrary threshold yet, but I see no way to avoid blowing past it. Talk of doing so is predicated on giving us a half degree mulligan by defining “average” during a period that is not average. So if you think portrayals of the problem are exaggerated, it is actually already worse than generally depicted.

Cop28 president not even trying to hide his obvious bias

In 1986, I was a grad student at Princeton, working in the atomic physics lab of Will Happer. It was at a department colloquium that I first heard a science talk that raised serious concerns about our use of fossil fuels potentially impacting the climate. This was not received well.

People asked all sorts of questions, with much of the discussion revolving around feedback effects. Perhaps warmer weather from CO2 will result in higher humidity, making more clouds*, and reflecting more sunlight into space. It does not. What about ice cover? This is actually a positive feedback – as the globe warms, ice coverage is replaced by darker surfaces, leading to more absorption of the incident solar radiation. And so on.

I thought the speaker did a creditable job of answering the concerns raised, repeatedly making the point that most feedback effects would make things worse, not better. It was entirely new to me at the time; I didn’t have any context to judge the relative merits of the discussion. Prof. Happer is one of those remarkable people who seems to know a lot about everything. So, as I related before, I asked him. His immediate and harsh retort was

“We can’t turn off the wheels of industry, and go back to living like cavemen.”

I relate this story again because the same language comes up today in a story I saw in the Guardian:

The president of Cop28, Sultan Al Jaber, has claimed there is “no science” indicating that a phase-out of fossil fuels is needed** to restrict global heating to 1.5C, the Guardian and the Centre for Climate Reporting can reveal.

Al Jaber also said a phase-out of fossil fuels would not allow sustainable development “unless you want to take the world back into caves”.

Cop28 article, 3 December 2023

This is exactly the same solution aversion that Happer displayed, using exactly the same language. It doesn’t address the actual question. It leaps ahead to the worst conceivable consequence, doesn’t like it, and so reverts to reality denial: We don’t want that to happen, so the evidence must be wrong!

We humans excel at reality-denial. It is not helpful. Rather than starting to deal with the problem of climate change thirty years ago – the science was already crystal clear by then – we’ve dug ourselves a much deeper hole. That’s not to say we should abandon all hope and revert to living in caves, but we do need to take serious and rapid steps to reform the ways in which we generate power. It is doing more of the same that risks sending us back into caves.

The quoted reaction to this assertion in the story is predictably tepid. The quote in the Guardian is “The comments were ‘incredibly concerning’ and ‘verging on climate denial’, scientists said.” As a scientist who is not directly involved with dealing with these people, let me be more blunt:

ARE YOU FUCKING KIDDING ME?

Al Jaber’s attitude isn’t verging on climate denial, it is the archetype of climate denial. Literally the same thing that climate deniers said in the 1980s. It expresses an attitude that was clearly wrong and dangerously backwards by the early 1990s. And this guy is the president of COP28? I say again

ARE YOU FUCKING KIDDING ME?

And who is this guy? The Guardian reports “Al Jaber is also the chief executive of the United Arab Emirates’ state oil company, Adnoc, which many observers see as a serious conflict of interest.” A conflict of interest? Really? Do you think? Again I say

ARE YOU FUCKING KIDDING ME?

This is obviously a conflict of interest, of the worst sort. His personal wealth, and the sovereign wealth of his nation, is entirely based on the production and sale of fossil fuels. Mitigating climate change means reducing our consumption of fossil fuels, which is a direct threat to the economic interests he represents. Talk about putting the fox in charge of the hen house.

I am impressed by how the moneyed interests have managed to slither their way into positions of consequence on discussions in which they have an obvious conflict. I guess money always finds a way in. But can we please stop being so polite that we fail to call out obvious bullshit wherever it crops up? It seems to be spreading at the rate of made-up conspiracy nonsense on that site formerly known as Twitter. We should stop putting up with it already.


*Ironically, SO2 pollution from ocean-going vessels does have this effect, and as these emissions have been cleaned up, we can see the effect in global temperatures. This is not to advocate for SO2 pollution! Still, injecting aerosols like SO2 into the stratosphere is one of the geoengineering approaches that gets discussed. Before going down that path, the obvious first step is to stop pouring petrol on the fire by continuing to add CO2 to the atmosphere.

**We’re already committed to 1.5C. I see no conceivable way that we can curb emissions fast enough to avoid that. So I guess this statement is true, from a certain point of view – that of a liar. A less misleading statement would be that a phase-out of fossil fuels is necessary to prevent things from getting much worse than forecast for the 1.5C threshold.