Did You Know?
nClimDiv Maximum and Minimum Temperatures
As a supplemental release to the nClimDiv divisional dataset transition in early 2014, divisional, statewide, regional, and national monthly maximum and minimum temperature data are now available from 1895 to the present. As with the monthly average temperature data, the maximum and minimum temperature data are derived from a 5km gridded instance of the nClimGrid dataset (Vose et al. 2014) using data from the GHCN-Daily database.
Did You Know?… trends in maximum and minimum temperature are different?
On a national scale, the century-long warming trend in minimum temperature is somewhat larger than the trend in maximum temperature (Figure 1), although over the last 40 years the trend in maximum temperature is slightly larger than that in minimum temperature (0.50°F/decade vs. 0.46°F/decade).
Did You Know?… maximum and minimum ranks should not be averaged?
Unlike temperature values, ranks of monthly or seasonal maximum and minimum temperature cannot simply be averaged to indicate the monthly average temperature rank. Each dataset (Tmax, Tmin, and Tavg) has its own history and its own ranks, and the ranks for each dataset are determined independently of one another. Year-to-date (Jan-Apr) temperatures for California illustrate this point (Figure 2). The average maximum temperature for this period was 65.0°F, record warmest with a rank of 120. The average minimum temperature was 40.7°F, third warmest with a rank of 118. One might think that the average temperature rank should be the average of the maximum and minimum ranks, but that is not the case: the average temperature for this period in California was 52.8°F, which corresponds to a rank of 120, or record warmest. The point to remember is that we average maximum and minimum temperature values to get the average temperature for a given period, but we do not average maximum and minimum temperature ranks to determine the average temperature rank.
Figure 2. State rank maps for maximum, average and minimum temperatures for the January-April 2014 year-to-date period.
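The rank arithmetic above can be sketched in a few lines of Python. The numbers below are hypothetical, not the actual California series; the point is only that each series is ranked against its own history, so the Tavg rank need not equal the average of the Tmax and Tmin ranks.

```python
# Hypothetical Jan-Apr mean temperatures (degrees F) for four years.
# These values are invented for illustration, not real nClimDiv data.
tmax = [60.0, 61.5, 59.0, 65.0]   # mean maximum temperature per year
tmin = [41.0, 39.5, 40.0, 40.7]   # mean minimum temperature per year

# Temperature VALUES are averaged: Tavg is the mean of Tmax and Tmin.
tavg = [(hi + lo) / 2 for hi, lo in zip(tmax, tmin)]

def ranks(series):
    """Rank each value within its own series (1 = coolest)."""
    order = sorted(series)
    return [order.index(v) + 1 for v in series]

# RANKS are computed independently for each dataset's own history.
rmax, rmin, ravg = ranks(tmax), ranks(tmin), ranks(tavg)

# For the most recent year: Tmax ranks 4th of 4 (record warmest) and
# Tmin ranks 3rd of 4, yet Tavg also ranks 4th of 4 - not the 3.5 one
# would get by averaging the Tmax and Tmin ranks.
print(rmax[-1], rmin[-1], ravg[-1])
```

This mirrors the California example: averaging the values is meaningful, but averaging the ranks is not, because each series is compared only against itself.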
Did You Know?… when comparing historical temperature records, maximum and minimum temperature data need to be adjusted to account for artificial shifts?
As was stated in the introduction, the nClimDiv divisional maximum and minimum temperature values are derived from a gridded instance (nClimGrid) of the GHCN-Daily database.
NCDC employs a temperature data "homogenization" algorithm that is designed to account for artificial shifts in the historical record and reduce the error in trend calculations. In short, the homogenized data should better reflect the real long-term temperature trends, both in the aggregate U.S. average and within individual station records. Why is this? The locations of weather stations that measure daily highs and lows have changed over time, as has the measurement technology. These types of changes have often caused jumps or shifts in the historical temperature readings in NOAA's networks of weather stations that have nothing to do with real climate change or variability. At any particular station, the shifts can be as large as, or even larger than, the real year-to-year variation in temperature, and they therefore often lead to large errors in calculating long-term climate trends. Collectively, these widespread changes throughout the network lead to errors that accumulate over time in network-wide averages. The causes and impacts of historical changes to U.S. weather stations have been discussed in a number of scientific papers, which document the impact of observational changes as well as how the homogenization algorithm improves the accuracy of the temperature record.
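To make the idea of detecting artificial shifts concrete, here is a toy sketch in Python. This is not NCDC's actual pairwise homogenization algorithm (see Menne et al. 2009 for that); the station series and the simple breakpoint statistic are invented for illustration. It shows why a step change at one station stands out when compared against a neighbor: real climate variability is shared by nearby stations and cancels in their difference series, while an instrument or siting shift does not.

```python
# Toy illustration of shift detection via a neighbor-difference series.
# NOT the operational NCDC algorithm; data and statistic are invented.

def best_breakpoint(diff):
    """Return (index, step size) of the split of the difference series
    that maximizes the gap between the two segment means."""
    best = (None, 0.0)
    for k in range(1, len(diff)):
        left = sum(diff[:k]) / k
        right = sum(diff[k:]) / (len(diff) - k)
        if abs(right - left) > abs(best[1]):
            best = (k, right - left)
    return best

# Hypothetical annual anomalies (degrees F): both stations experience the
# same climate signal, but station A's thermometer changed in year 6,
# introducing an artificial +1.0 degree shift.
climate = [0.1, -0.2, 0.3, 0.0, -0.1, 0.2, 0.1, -0.3, 0.0, 0.2]
station_a = [c + (1.0 if i >= 6 else 0.0) for i, c in enumerate(climate)]
station_b = list(climate)  # neighbor with no artificial shift

# The shared climate signal cancels in the difference series, leaving
# only the artificial step, which the breakpoint search isolates.
diff = [a - b for a, b in zip(station_a, station_b)]
k, step = best_breakpoint(diff)

# "Homogenize" station A by removing the detected step.
adjusted_a = [v - step if i >= k else v for i, v in enumerate(station_a)]
```

After adjustment, station A's record again tracks the shared climate signal, which is the goal of homogenization: trends reflect climate, not changes in instruments or siting.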
These studies (cited below) indicate that changes in observation practice have had important but somewhat different effects on maximum versus minimum temperature trends. In the case of maximum temperatures, the evidence indicates widespread negative or "cool" shifts throughout the historical observations that have artificially depressed the true rate of change in maximum temperatures since about 1950. These shifts appear to be caused primarily by two major changes in observation practice: changes in the time of observation and changes in the type of thermometers used to measure temperature over the period of record (see Menne et al. 2009).
For minimum temperatures, these same changes have caused shifts that work in opposition to each other. Specifically, false cooling has been caused by shifts associated with changes in observation time throughout the Cooperative Observer network since about 1950. Some additional false cooling in the U.S. minimum temperature average appears to have been caused by shifts associated with station moves to locations with somewhat cooler microclimates, largely between about 1930 and 1950. On the other hand, false warming in the aggregate U.S. trend appears to have occurred since the mid-1980s, caused by shifts associated with changes in thermometer technology. NCDC's homogenization algorithm is designed to remove the impact of these shifts, and the homogenized data therefore provide a more accurate estimate of long-term changes in maximum and minimum temperature.
Finally, given that many stations in the U.S. are located in urban areas, a number of the station records appear to be locally impacted by changes associated with urbanization. In a comprehensive assessment of possible urban impacts, Hausfather et al. (2013) found that the urban warming signal is larger in minimum temperatures than in maximum temperatures. The homogenization process also addresses this urban signal in individual and aggregate station records.
For more on the science, see Menne et al. 2009, which provides an overview of these impacts on U.S. weather stations and discusses the approach to homogenization. Other key papers related to this topic are listed below.
- Hausfather, Z., M.J. Menne, C.N. Williams, T. Masters, R. Broberg, and D. Jones, 2013: Quantifying the impact of urbanization on U.S. Historical Climatology Network temperature records. Journal of Geophysical Research, 118, 481-494.
- Menne, M.J., C.N. Williams Jr., and R.S. Vose, 2009: The United States Historical Climatology Network monthly temperature data, Version 2. Bulletin of the American Meteorological Society, 90, 993-1007.
- Menne, M.J., C.N. Williams Jr., and M.A. Palecki, 2010: On the reliability of the U.S. surface temperature record. Journal of Geophysical Research, 115, D11108.
- Vose, R.S., S. Applequist, I. Durre, M.J. Menne, C.N. Williams, C. Fenimore, K. Gleason, and D. Arndt, 2014: Improved historical temperature and precipitation time series for U.S. climate divisions. Journal of Applied Meteorology and Climatology.
- Williams, C.N., M.J. Menne, and P.W. Thorne, 2012: Benchmarking the performance of pairwise homogenization of surface temperatures in the United States. Journal of Geophysical Research-Atmospheres, 117, D5.
- Zhang, J., W. Zheng, and M.J. Menne, 2012: A Bayes factor model for detecting artificial discontinuities via pairwise comparisons. Journal of Climate, 25, 8462-8474.