New Paper Re-Evaluates Performance of Key Algorithm Used in Calculating the U.S. Surface Temperature Record
One of the key measures used in assessing long-term climate change is the temperature at the Earth’s surface. Across the United States, thousands of weather stations record daily maximum and minimum temperatures, most of them staffed by volunteers in the Cooperative Observer Program. NOAA’s National Climatic Data Center compiles data from the late 1800s to today from a subset of those stations with long records to produce the long-term surface temperature record for the United States. But many things have changed since the 19th century: stations have moved, thermometer types have changed, and many observers have changed the time of day at which observations are made. These changes have affected temperature measurements over time, but their effects are unrelated to true climate change. NCDC scientists use a statistical method, called homogenization, to identify and account for these “artificial” influences on the trend in the data. But how do we know that this method is working as intended?
A new paper recently published in the Journal of Geophysical Research-Atmospheres by Williams and colleagues re-evaluates the performance of a key data homogenization algorithm used in calculating the U.S. surface temperature record. In the paper, the researchers applied the algorithm to several simulated datasets that mimic the U.S. temperature record. In all but one of the datasets, a known pattern of errors was inserted to simulate the kinds of artificial impacts that may have occurred in the real data; the remaining dataset was left “perfect,” with no errors at all. The researchers produced multiple estimates of the overall trend by running slight variants of the algorithm, to see how much small changes in its settings affect the results. Because the true climate signal in each simulated dataset is known exactly, they could also assess how well the algorithm accounted for the inserted errors.
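The core of this benchmarking idea — insert a known error into synthetic data, attempt a correction, and compare the recovered trend to the known truth — can be sketched in a few lines. The sketch below is a toy illustration under simplifying assumptions of my own (one station, a single step-change “break,” a clean neighboring reference series, and a simple mean-difference breakpoint test); the pairwise homogenization algorithm evaluated in the paper is far more sophisticated.

```python
import random

def make_series(n=120, trend=0.008, noise=0.2, seed=None):
    """Synthetic annual temperature anomalies (deg C) with a known linear trend."""
    rng = random.Random(seed)
    return [trend * t + rng.gauss(0.0, noise) for t in range(n)]

def slope(y):
    """Least-squares trend of a series (deg C per year)."""
    n = len(y)
    xm, ym = (n - 1) / 2.0, sum(y) / n
    num = sum((t - xm) * (v - ym) for t, v in enumerate(y))
    den = sum((t - xm) ** 2 for t in range(n))
    return num / den

def insert_break(y, year, shift):
    """Simulate an artificial inhomogeneity: a step change from `year` onward."""
    return [v + shift if t >= year else v for t, v in enumerate(y)]

def homogenize(target, reference, min_seg=10):
    """Toy homogenization: difference the target against a neighboring
    reference series (removing the shared climate signal), find the split
    that maximizes the jump in the mean of the difference series, and
    shift the later segment to undo that jump."""
    diff = [a - b for a, b in zip(target, reference)]
    best_t, best_jump = min_seg, 0.0
    for t in range(min_seg, len(diff) - min_seg):
        jump = sum(diff[t:]) / (len(diff) - t) - sum(diff[:t]) / t
        if abs(jump) > abs(best_jump):
            best_t, best_jump = t, jump
    return [v - best_jump if t >= best_t else v for t, v in enumerate(target)], best_t

# Known truth: a 0.8 deg C/century trend; a simulated station move in year 60
# spuriously cools all later readings at the target station by 0.5 deg C.
true_trend = 0.008
reference = make_series(trend=true_trend, seed=1)                      # clean neighbor
target = insert_break(make_series(trend=true_trend, seed=2), 60, -0.5)

adjusted, found_break = homogenize(target, reference)
raw_err = abs(slope(target) - true_trend)
adj_err = abs(slope(adjusted) - true_trend)
```

In this seeded example the toy detector locates the break near year 60, and the adjusted trend ends up much closer to the known truth than the raw one — the same before-and-after comparison, at miniature scale, that the paper performs with its simulated datasets.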
The results of this analysis demonstrated that when the simulated data were “perfect,” the algorithm did not change the overall trend. In cases where errors made the trend too large or too small, the algorithm moved the estimate in the correct direction, removing much of the error pattern and producing an answer closer to the true trend. It hardly ever overcorrected, but in hard cases with many small errors it tended to underestimate their impact on the overall trend. Applying the same settings to the real-world observations confirmed that the raw data show too little warming since 1895. “It is much more likely that the U.S. surface temperature record warming is underestimated than that it is overestimated, especially for maximum daily temperature,” according to Dr. Peter Thorne, Senior Scientist for the Cooperative Institute for Climate and Satellites-North Carolina.
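Why would many small errors be harder to correct than one large one? A bit of least-squares arithmetic makes the intuition concrete. The sketch below is a hypothetical illustration of my own, not a calculation from the paper: it computes the trend bias contributed by a step change of size `s` at year `b` in an `n`-year record, and shows that several small same-direction steps — each small enough to hide in year-to-year noise — can together bias the trend nearly as much as one large, easily detected step.

```python
def step_slope_bias(n, b, s):
    """Least-squares trend bias (deg C per year) induced by a step change
    of size s that begins at year b in an n-year series."""
    xm = (n - 1) / 2.0
    num = sum(t - xm for t in range(b, n))       # covariance of time with the step
    den = sum((t - xm) ** 2 for t in range(n))   # variance of time
    return s * num / den

n = 120
# One large break: a 0.5 deg C drop at year 60 -- big enough to stand out
# against typical year-to-year noise, so a detector should catch it.
big = step_slope_bias(n, 60, -0.5)
# Five small breaks of 0.1 deg C each -- individually comparable to the
# noise and easy to miss, but their trend biases accumulate.
small = sum(step_slope_bias(n, b, -0.1) for b in (20, 40, 60, 80, 100))
```

If a detection threshold catches the single large break but misses each 0.1 °C step, most of the accumulated small-step bias survives correction — consistent with the underestimation the paper reports for these difficult cases.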
By studying how the algorithm performs when the true trend is known exactly, and how sensitive its results are to reasonable variations in its settings, we now better understand the surface temperature trends in the United States.