Scott K. Johnson
Meteorologists run weather-forecast models to provide good predictions of weather conditions over the next few days. Climate scientists, on the other hand, run global climate models to project the impacts of climate-changing greenhouse gas emissions over the next few decades. In between these two activities is an interesting task that has proven more difficult than either: predicting global temperature over a few years.
The annual average surface temperature for the globe varies a bit from one year to the next even as a long-term warming trend is apparent. It’s those year-to-year wiggles that are hard to predict. They depend on variable regional weather patterns, most notably the El Niño Southern Oscillation. This seesawing pattern of warmer or cooler surface water along the equatorial Pacific is significant enough to bump the planet’s average surface temperature up and down. It also affects weather patterns in many places around the world.
The same is true for other large-scale swings in weather patterns. Years in which the planet sets a new temperature record also tend to be warm in most places on a local scale, so the ability to provide some advance warning is useful beyond just the global-temperature bookkeeping.
This is something climate scientists have played around with for a long time. One way to do it is to input recent temperature data into a climate model and then simulate ahead a year or two. That has some advantages, but the swings of El Niño/La Niña are stubbornly difficult to capture in simulation, and these models require supercomputer time.
Another, less resource-intensive option is to attempt a purely statistical prediction based on previous data. This often relies on metrics for variability like the El Niño Southern Oscillation index, which distills sea surface temperature patterns down to a number representing La Niña, neutral, or El Niño states. But while convenient, that might discard data that could drive a better prediction.
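To illustrate how much such an index compresses, here is a minimal sketch of the classification step. The `enso_state` function is hypothetical, but the ±0.5 °C cutoffs mirror the convention NOAA uses for its Oceanic Niño Index; the key point is that a whole sea surface temperature field collapses to a single number.

```python
import numpy as np

def enso_state(sst_anomalies):
    """Classify ENSO state from sea surface temperature anomalies (deg C)
    averaged over an ENSO-sensitive patch of the equatorial Pacific.

    The +/-0.5 deg C thresholds follow NOAA's Oceanic Nino Index
    convention; everything else here is a simplified illustration.
    """
    index = float(np.mean(sst_anomalies))  # one number from a whole field
    if index >= 0.5:
        return "El Nino"
    if index <= -0.5:
        return "La Nina"
    return "neutral"

# A warm patch of equatorial Pacific water reads as El Nino...
print(enso_state([0.8, 1.1, 0.9]))
# ...but the spatial detail of the pattern is discarded entirely.
print(enso_state([0.1, -0.2, 0.0]))
```

That discarded spatial detail is exactly what a gridded approach tries to keep.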
A new method
So to try something a little different, the researchers constructed a method that simply takes in temperature data for every cell in a grid covering the globe. For all the data prior to the year 2000, their method analyzes the full temperature pattern over a two-year period and compares that to the average global temperature in following years. The end result is a complex mathematical correlation that can be used to predict future global temperature.
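The mechanics of that kind of fit can be sketched in a few lines. The sketch below uses synthetic data in place of real gridded observations and a ridge-regularized least-squares fit as a stand-in for the paper's actual correlation method; the shapes of the pieces (two-year windows of gridded fields as predictors, the following year's global mean as the target, a pre-2000 training split) are what matter here, not the toy numbers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for real data: a slowly varying "global signal" that each
# year's gridded field carries, plus cell-level weather noise.
n_years, n_cells = 60, 200
signal = np.zeros(n_years)
for t in range(1, n_years):
    signal[t] = 0.7 * signal[t - 1] + rng.normal(scale=0.3)
fields = signal[:, None] + rng.normal(scale=1.0, size=(n_years, n_cells))

# Predictors: the full gridded field over a two-year window, flattened.
# Target: the global mean temperature in the following year.
X = np.hstack([fields[:-2], fields[1:-1]])  # shape (58, 400)
y = fields[2:].mean(axis=1)                 # next year's global mean

# Fit only on the early ("pre-2000") portion, then predict held-out years.
split = 40
lam = 10.0  # ridge penalty keeps the 400-dim fit from running wild
w = np.linalg.solve(X[:split].T @ X[:split] + lam * np.eye(X.shape[1]),
                    X[:split].T @ y[:split])
held_out_predictions = X[split:] @ w
```

The design choice worth noting: nothing here decides in advance which regions matter. The fit is free to weight any grid cell, and it is the data that ends up emphasizing ENSO-like regions.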
The method was tested several ways. It was repeatedly recalculated based on every year before 2000 except one. Then the method was tested by predicting that missing year—what’s known as “leave-one-out” validation. It was also used to predict global temperatures for each year after 2000, which hadn’t been used to calculate correlations. In each case, the method actually performed pretty well, beating simple assumptions of a continued trend, as well as climate model simulations.
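The leave-one-out idea itself is easy to demonstrate. Here is a toy version that uses a plain linear trend as the predictor rather than the paper's full method: each year is held out in turn, the trend is refit on the remaining years, and the refit model then predicts the missing year.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy annual series: a warming trend plus year-to-year noise.
years = np.arange(1960, 2000).astype(float)
temps = 0.02 * (years - 1960) + rng.normal(scale=0.1, size=years.size)

# Leave-one-out validation: refit with one year held out, then check how
# well the refit model predicts that missing year.
errors = []
for i in range(years.size):
    mask = np.ones(years.size, dtype=bool)
    mask[i] = False  # hold out year i
    slope, intercept = np.polyfit(years[mask], temps[mask], 1)
    errors.append(slope * years[i] + intercept - temps[i])

rmse = float(np.sqrt(np.mean(np.square(errors))))
print(f"leave-one-out RMSE: {rmse:.3f} deg C")
```

Because every prediction is made for a year the model never saw, this gives a more honest error estimate than scoring the fit on its own training data.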
Taking a closer look at the correlation patterns, the method isn’t doing anything too surprising. It picks up on the trend of warming temperatures globally, and it also finds predictive power in the areas of the ocean that vary with things like the El Niño Southern Oscillation. It shows that an El Niño (warm water stretching across to the eastern side of the equatorial Pacific) is associated with a warmer global temperature the next year. The cooler waters of a La Niña, on the other hand, tend to be associated with a warmer global temperature two years afterward—reflecting the back-and-forth nature of this phenomenon.
Alongside the study, the researchers posted up-to-date predictions online, incorporating new data every month to predict the next three years. (You can also see how the method would have predicted each previous year.) It currently predicts that 2020 has a 92 percent chance of being the warmest year on record. The year is only halfway through at this point, but that’s similar to Berkeley Earth’s current prediction based on the temperature so far. On the other hand, NOAA’s latest prediction, released Thursday, put those odds at 49 percent.
This statistical approach can’t predict unforeseeable events, of course. Volcanic eruptions (or global pandemics) can add sudden wrinkles that have little to do with the previous year’s temperatures. Take the method’s updated 2020 prediction, for example. In the published study, it was presented based on data through the end of 2019. There, the central estimate would have put it as the third-warmest year on record, although the error bars certainly included the current prediction.
The first half of 2020 has been quite warm, pushing the current prediction upwards. That could just be a case of the method missing the mark slightly, but it could be that the economic impacts of COVID-19 led to a reduction in aerosol pollution and therefore a little extra warmth. The possibility hasn’t been studied in detail yet.
Either way, this study adds another independent method that could lead to more confident short-term global temperature forecasts. That would mean more of a heads-up when a record-breaking year is on deck—including the first year that tops a milestone like 1.5°C warming.