Did You Know?

Anomalies vs. Temperature

Asheville Stations

In climate change studies, temperature anomalies are more important than absolute temperatures. A temperature anomaly is the difference from an average, or baseline, temperature, which is typically computed by averaging 30 or more years of temperature data. A positive anomaly indicates the observed temperature was warmer than the baseline, while a negative anomaly indicates the observed temperature was cooler than the baseline. When averaging absolute temperatures, factors such as station location and elevation affect the result (e.g., higher elevations tend to be cooler than lower elevations, and urban areas tend to be warmer than rural areas). Anomalies, however, are far less sensitive to those factors. For example, a summer month over an area may be cooler than average both at a mountaintop and in a nearby valley, yet the absolute temperatures at the two locations will be quite different.

Using anomalies also helps minimize problems when stations are added, removed, or missing from the monitoring network. The above diagram shows absolute temperatures (lines) for five neighboring stations, with the 2008 anomalies as symbols. Notice how all of the anomalies fit into a tiny range when compared to the absolute temperatures. Even if one station were removed from the record, the average anomaly would not change significantly, but the overall average temperature could change significantly depending on which station dropped out of the record. For example, if the coolest station (Mt. Mitchell) were removed from the record, the average absolute temperature would become significantly warmer. However, because its anomaly is similar to the neighboring stations, the average anomaly would change much less.
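The effect described above can be sketched numerically. The station names echo the diagram, but the temperatures and baselines below are hypothetical values chosen only to illustrate the arithmetic:

```python
# Hypothetical station data: removing the coolest station shifts the
# average absolute temperature far more than the average anomaly.
stations = {
    # name: (observed July temp in deg F, station's own July baseline)
    "Asheville":    (74.0, 73.2),
    "Valley A":     (76.5, 75.9),
    "Valley B":     (75.8, 75.1),
    "Foothills":    (71.2, 70.5),
    "Mt. Mitchell": (59.0, 58.4),  # high elevation, much cooler
}

def averages(data):
    """Return (mean absolute temperature, mean anomaly) for a station set."""
    temps = [obs for obs, base in data.values()]
    anoms = [obs - base for obs, base in data.values()]
    return sum(temps) / len(temps), sum(anoms) / len(anoms)

all_temp, all_anom = averages(stations)
subset = {k: v for k, v in stations.items() if k != "Mt. Mitchell"}
sub_temp, sub_anom = averages(subset)

print(f"All stations:    mean temp {all_temp:.2f}F, mean anomaly {all_anom:.2f}F")
print(f"Without coolest: mean temp {sub_temp:.2f}F, mean anomaly {sub_anom:.2f}F")
# Dropping Mt. Mitchell warms the mean temperature by ~3F,
# but changes the mean anomaly by only ~0.02F.
```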

Arctic Sea Ice Measurements

Ice Sat

Arctic sea ice extent is virtually impossible to measure accurately from the Earth's surface. The edges of the ice are ever changing, and the sheer size of the ice mass (averaging two and a half times the size of Canada) makes direct measurement on short time scales impractical. To overcome the shortcomings of in situ observations, polar-orbiting satellites began collecting data over the Arctic (as well as the Antarctic) in the 1970s. Scientists use radiometry data and visible imagery collected from the satellites to determine sea ice extent. Each technique has its advantages and disadvantages, and more information can be found through the National Snow and Ice Data Center. Today a suite of NASA, NOAA, and Department of Defense satellites provides the data needed to accurately monitor sea ice extent on a daily, monthly, and annual basis.

The transition from ice-covered to ice-free ocean can occur over a large distance. When measuring Arctic sea ice extent from satellites, a threshold of minimum ice concentration is defined to mark where the ice sheet ends. NOAA uses a threshold of 15 percent ice concentration over an areal extent because it provides the most consistent agreement between satellite and ground observations. At this low ice concentration, ocean waters are generally navigable by ships, one of the earliest motivations for better understanding changes in Arctic ice.

Binomial Filter

A binomial filter is a statistical smoothing technique used to reveal underlying trends in data. In a 9-point binomial filter, the four data values on each side of a given value are averaged with the center value, with weights decreasing away from the center, to arrive at a smoothed value. The number of points refers to the number of weighted terms used to approximate the Gaussian distribution (the more points, the closer the weights approach a normal, or "bell-shaped", distribution). This is a recognized method of data smoothing, common in climate analysis. Additional information can be found in Aubury, M., and W. Luk, 1996: Binomial filters. J. VLSI Signal Processing, 12(1), 35-50.
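As a sketch of the technique: the weights of an n-point binomial filter are the binomial coefficients C(n-1, k) normalized by their sum (256 for nine points). A minimal implementation that leaves the unsmoothable endpoints unchanged (edge handling varies in practice):

```python
from math import comb

def binomial_filter(values, points=9):
    """Smooth a series with an n-point binomial filter.

    Weights are binomial coefficients C(n-1, k) normalized to sum to 1,
    approximating a Gaussian kernel. Endpoints where the full window
    does not fit are left unsmoothed here (a simple edge policy;
    operational analyses handle edges in various ways).
    """
    n = points
    half = n // 2
    weights = [comb(n - 1, k) for k in range(n)]
    total = sum(weights)  # 2**(n-1); 256 for a 9-point filter
    smoothed = list(values)
    for i in range(half, len(values) - half):
        window = values[i - half : i + half + 1]
        smoothed[i] = sum(w * v for w, v in zip(weights, window)) / total
    return smoothed

# A constant series is unchanged (the weights sum to 1), and because the
# weights are symmetric, a linear trend also passes through unaltered.
print(binomial_filter([1.0] * 12)[4:8])
```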

Climate Division Dataset Transition

U.S. Climate Divisions

For years, the Climate Divisional Dataset was the only long-term temporally and spatially complete dataset from which to generate historical climate analyses (1895-2013) for the contiguous United States. Traditionally, the monthly values for all of the Cooperative Observer Network (COOP) stations in each division are averaged to compute divisional monthly temperature and precipitation averages/totals.

NCEI's Monitoring Branch transitioned to a more modern 5km gridded divisional dataset in early 2014. This dataset is based on a station inventory similar to that of the former dataset; however, new methodologies are used to compute temperature, precipitation, and drought for United States climate divisions. These improve the coverage and quality of the dataset while maintaining the current product stream. More detailed information on the transition and the resulting impacts can be accessed here: Gridded Division Dataset Transition.

While this transition did not disrupt the product stream, some variances in temperature, precipitation and drought values may be observed in the new data record. A visualization toolkit can help users examine snapshots of both datasets. Changes in monthly, seasonal and annual variability can be examined through the use of the interactive time series plots.

Climate Extremes Index

The U.S. Climate Extremes Index (CEI) was proposed in 1995 as a framework for quantifying observed changes in climate within the contiguous United States. The CEI is based on a set of climate indicators: extremes in monthly mean maximum and minimum temperatures, heavy 1-day precipitation events, drought severity, the number of days with/without precipitation, and wind intensity of landfalling tropical cyclones.

A CEI value of 0 percent, the lower limit, indicates that no portion of the country was subject to any of the extremes of temperature or precipitation considered in the index. In contrast, a value of 100 percent would mean that the entire country had extreme conditions throughout the time period for each of the indicators, a virtually impossible scenario. Since we're dealing with the upper and lower tenth percentile as a definition of the extremes, and we're looking at the cold and warm (wet and dry) ends of the extremes, the long-term average expected percent area experiencing extremes is 20 percent. Therefore, observed CEI values of more than 20 percent indicate "more extreme" conditions than average, and CEI values less than 20 percent indicate "less extreme" conditions than average.
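The 20 percent expectation follows directly from the definition: the upper and lower tenth percentiles at the two ends of the distribution together cover one-fifth of the data. A quick sketch with synthetic random values (purely illustrative):

```python
# For any set of n distinct values, the top 10% plus the bottom 10%
# together cover ~20% of the data -- the long-term expected "extreme"
# fraction underlying the CEI's 20 percent baseline.
import random

random.seed(42)
values = [random.gauss(0, 1) for _ in range(1000)]
ranked = sorted(values)
lo_cut = ranked[int(0.10 * len(ranked))]   # 10th percentile
hi_cut = ranked[int(0.90 * len(ranked))]   # 90th percentile
extreme = [v for v in values if v < lo_cut or v >= hi_cut]
print(f"{100 * len(extreme) / len(values):.0f}% of values are 'extreme'")
```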

The CEI is evaluated for eight seasons: spring, summer, autumn, winter, annual, cold season, warm season, and hurricane season. Data and graphics for each season and indicator are updated at the beginning of the month. CEI results indicate that for the annual, summer, warm and hurricane seasons, the percent of the contiguous United States experiencing extreme conditions has been generally increasing since the early 1970s (see figure). Recent percentages are similar to those found during the early 1900s for these same periods.

Data and graphics for the most current CEI and the individual indicators within it are available online at http://www.ncdc.noaa.gov/extremes/cei/.

Climate Monitoring Monthly Releases

Release dates of the Climate Monitoring U.S. National and Global monthly reports are tentatively scheduled and as such are subject to change. Definitive release dates are determined no later than one week prior to release, and all public calendars are updated accordingly. The associated national and global datasets are released at the same time as the reports.

Upcoming Climate Monitoring Releases

U.S. National

  • January Release: 8 February 2017, 11:00 AM EST
  • February Release: 8 March 2017, 11:00 AM EST
  • March Release: 6 April 2017, 11:00 AM EDT
  • April Release: 8 May 2017, 11:00 AM EDT
  • May Release: 7 June 2017, 11:00 AM EDT
  • June Release: 7 July 2017, 11:00 AM EDT
  • July Release: 8 August 2017, 11:00 AM EDT
  • August Release: 7 September 2017, 11:00 AM EDT
  • September Release: 6 October 2017, 11:00 AM EDT
  • October Release: 8 November 2017, 11:00 AM EST
  • November Release: 6 December 2017, 11:00 AM EST

Global

  • January Release: 16 February 2017, 11:00 AM EST
  • February Release: 16 March 2017, 11:00 AM EDT
  • March Release: 19 April 2017, 11:00 AM EDT

All questions regarding the reports and their release dates should be sent to NCEI.Monitoring.Info@noaa.gov.

CLIMAT Messages

CLIMAT Messages Reported

NOAA's National Centers for Environmental Information (NCEI) is the world's largest active archive of weather data, housing archives dating back to 1880 from all over the world. Each month, countries around the world send their land-based meteorological surface observations, i.e., temperature and precipitation measurements, to NCEI to be added to the global record. This information is sent through the World Meteorological Organization's (WMO) Global Telecommunication System (GTS), a coordinated system for the rapid collection, exchange, and distribution of observation data from more than 200 countries. The data are sent in a format called a "CLIMAT message", a summary of monthly weather data for a specific station. A CLIMAT message contains information on average pressure at station level, average air temperature, maximum air temperature, minimum air temperature, average vapor pressure, total precipitation, and total sunshine for a particular month. These messages are typically sent to NCEI by the 8th of every month. NCEI uses the data to produce numerous climate publications, such as the Monthly Global Report. The red dots on the loop above show how a typical month's worth of data arrives at NCEI, in a day-by-day, country-by-country fashion. Please refer to the WMO for detailed information about CLIMAT messages or the GTS.

Climatological Rankings

[Legend: for the 1895-2016 record (122 years), ranks are grouped as Much Below Normal (lowest 12 ranks, 10%), Below Normal (bottom third, 41 ranks), Near Normal (middle 40 ranks), Above Normal (top third, 41 ranks), and Much Above Normal (highest 12 ranks, 10%).]

In order to place each month and season into historical context, NCEI assigns ranks for each geographic area (division, state, region, etc.) based on how the temperature or precipitation value compares with all other values in the record when sorted from lowest to highest. In other words, the numeric rank represents the position of the value within the sorted historical record (1895-present). As each year is added to the inventory, the length of record increases; as of 2016, NCEI has 122 years of records. Thus a rank of 122 represents the warmest or wettest on record, and a rank of 1 the coolest or driest. If a state has a rank of 109, it is the 14th warmest or wettest on record (122 - 109 + 1 = 14). If a state has a rank of 14, it ranked 14th out of 122 years, or 14th coolest or driest.

The "Below Normal", "Near Normal", and "Above Normal" shadings on the color maps represent the bottom, middle, and upper tercile (thirds) of the distribution, respectively. The lowest and uppermost decile (10%) of the distribution are marked as "Much Below Normal" and "Much Above Normal", respectively. In other words, for a 122-year period (1895-2016), a rank of Much Above/Below Normal would be in the top/bottom 12 on record. Below/Above Normal would represent one of the 41 coolest/warmest or driest/wettest such periods on record. "Near Normal" would represent a temperature or precipitation value that was not among the 41 coolest/warmest or driest/wettest on record; for a 122-year period of record, this corresponds to a rank between 42 and 81.
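The rank-to-category mapping described above can be sketched as a small function. The decile and tercile counts (12 and 41 for 122 years) follow the text; since "Much Below/Above Normal" is the extreme subset of the outer terciles, this sketch returns the most specific label:

```python
def rank_category(rank, years=122):
    """Map a climatological rank (1 = coolest/driest, `years` = warmest/
    wettest) to its legend category for a record of length `years`.

    For 122 years (1895-2016): lowest/highest 12 ranks are 'Much
    Below/Above Normal', the rest of the bottom/top third (41 ranks)
    is 'Below/Above Normal', and the middle 40 ranks (42-81) are
    'Near Normal'.
    """
    if not 1 <= rank <= years:
        raise ValueError("rank outside period of record")
    decile = round(years / 10)   # 12 for 122 years
    tercile = round(years / 3)   # 41 for 122 years
    if rank <= decile:
        return "Much Below Normal"
    if rank <= tercile:
        return "Below Normal"
    if rank > years - decile:
        return "Much Above Normal"
    if rank > years - tercile:
        return "Above Normal"
    return "Near Normal"

print(rank_category(109))  # 14th warmest -> "Above Normal"
print(rank_category(14))   # 14th coolest -> "Below Normal"
print(rank_category(60))   # mid-record   -> "Near Normal"
```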

Coral Reef Bleaching

Coral Bleaching
Coral Bleaching

Photo: Andy Bruckner, NOAA NMFS

Coral reefs, sometimes called "rainforests of the seas", are found throughout the world's oceans. Not only do reefs provide food and habitat for many species to grow, live, and reproduce, but they also are essential for supporting fisheries, coastal protection, and tourism. Today, many coral reefs are threatened by overfishing and pollution, as well as ocean acidification, disease, and warmer ocean temperatures. Warmer-than-average temperatures cause corals to become stressed. This can lead to mass bleaching (indicated by a white or pale color) of coral colonies and reefs. If ocean temperatures increase just 1°-2°C (1.8°-3.6°F) above average and persist for a month or more, this frequently leads to severe damage or death of corals. Even if corals survive a mass bleaching event, their vulnerability to infectious disease increases and their ability to reproduce decreases. Experts have estimated that bleaching and disease from high ocean temperatures have destroyed nearly one-third of the world's coral reefs.

In 2005, warmer-than-average ocean temperatures in the Caribbean contributed to record-breaking mass coral bleaching, with 50-95% of coral colonies being severely affected. This was the worst bleaching event ever seen in many Caribbean countries. Unfortunately, Caribbean corals are at risk from warming ocean temperatures again this year. Warming in 2010 already has caused mass coral bleaching and mortality in Southeast Asia and the Coral Triangle.

For information on current coral reef environment conditions, please visit NOAA's Coral Reef Watch (CRW).

Dead Fuel Moisture

10 Hour Dead Fuel Moistures
1000 Hour Dead Fuel Moistures

The fuel moisture index is a tool widely used to understand fire potential for locations across the country. Fuel moisture is a measure of the amount of water in a fuel (vegetation) available to a fire, expressed as a percent of the dry weight of that specific fuel. For example, if a fuel were totally dry, its fuel moisture content would be zero percent. Fuel moisture depends upon both environmental conditions (such as weather, local topography, and length of day) and vegetation characteristics. When fuel moisture content is high, fires do not ignite readily, or at all, because heat energy must first evaporate and drive water from the plant before it can burn. When fuel moisture content is low, fires start easily and spread rapidly, since all of the heat energy goes directly into the burning flame itself.

When the fuel moisture content is less than 30 percent, that fuel is essentially considered dead. Dead fuels respond solely to current environmental conditions and are critical in determining fire potential. The dead fuel moisture threshold (10–hour, 100–hour, or 1,000–hour), called a time lag, is based upon how long it would take for 2/3 of the dead fuel to respond to atmospheric moisture. Small fuels (less than 1/4 inch in diameter), such as grass, leaves, and mulch, respond quickly to changes in atmospheric moisture content and take 10 hours to adjust to moist/dry conditions. Larger fuels lose or gain moisture less rapidly. Fuels 3 to 8 inches in diameter, such as dead fallen trees and brush piles, can take up to 1,000 hours to adjust to moist conditions and are represented by the 1,000–hour dead fuel moisture index. 1,000+ hour fuels do not burn easily, but when they do burn, they generate extreme heat, often causing dangerous fire behavior conditions.

Definition of the Dead Fuel Moisture Time Lag Classes

  • 10–hour (0.25 to 1 inch diameter): Computed from observation-time temperature, humidity, and cloudiness. Can also be an observed value, from a standard set of fuel sticks that are weighed as part of the fire weather observation.
  • 100–hour (1 to 3 inches diameter): Computed from 24-hour average conditions composed of day length, hours of rain, and daily temperature/humidity ranges.
  • 1,000–hour (3 to 8 inches diameter): Computed from 7-day average conditions composed of day length, hours of rain, and daily temperature/humidity ranges.
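The time-lag definition (the time for roughly 2/3 of the moisture difference to be exchanged) corresponds to a simple exponential response, since 1 - 1/e is about 63%, or roughly 2/3. The sketch below illustrates that idea only; it is not the operational NFDRS calculation:

```python
import math

def dead_fuel_moisture(m0, m_eq, hours, time_lag):
    """Exponential response of dead fuel moisture toward equilibrium.

    `time_lag` is the time (hours) for ~2/3 (1 - 1/e, about 63%) of the
    difference between the fuel's current moisture `m0` and the
    equilibrium moisture `m_eq` to be eliminated. Simplified model for
    illustration, not the operational NFDRS algorithm.
    """
    return m_eq + (m0 - m_eq) * math.exp(-hours / time_lag)

# A 10-hour fuel at 20% moisture drying toward a 5% equilibrium:
after_10h = dead_fuel_moisture(20.0, 5.0, hours=10, time_lag=10)
print(f"{after_10h:.1f}%")  # ~10.5% -- about 2/3 of the gap closed
```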

Check out the Wildland Fire Assessment System for additional information.

Definition of Drought

U.S. Agriculture Display Map

Drought is a complex phenomenon which is difficult to monitor and define. Hurricanes, for example, have a definite beginning and end and can easily be seen as they develop and move. Drought, on the other hand, is the absence of water. It is a creeping phenomenon that slowly sneaks up and impacts many sectors of the economy, and operates on many different time scales. As a result, the climatological community has defined four types of drought: 1) meteorological drought, 2) hydrological drought, 3) agricultural drought, and 4) socioeconomic drought. Meteorological drought happens when dry weather patterns dominate an area. Hydrological drought occurs when low water supply becomes evident, especially in streams, reservoirs, and groundwater levels, usually after many months of meteorological drought. Agricultural drought happens when crops become affected. And socioeconomic drought relates the supply and demand of various commodities to drought. Meteorological drought can begin and end rapidly, while hydrological drought takes much longer to develop and then recover. Many different indices have been developed over the decades to measure drought in these various sectors. The U.S. Drought Monitor depicts drought integrated across all time scales and differentiates between agricultural and hydrological impacts.

Drought Indicators

In order to get a complete picture of drought conditions, an analyst should examine several drought indicators and indices. These include simple indices like the percent of normal precipitation and number of days with no precipitation, specific indices created to assess drought (such as the Palmer Drought Index and Standardized Precipitation Index [SPI]), complex models (such as the National Land Data Assimilation System [NLDAS]) which calculate soil moisture and other hydrologic variables, indices used for water supply forecasting (such as the Surface Water Supply Index [SWSI]), and indices which reflect impacts on vegetation (such as the Vegetation Health Index [VHI] and Vegetation Drought Response Index [VegDRI]) and water availability (such as groundwater well levels and streamflow). The analyst should also examine indices at many different time scales to assess short-term to long-term drought conditions. The U.S. Drought Monitor does this by depicting drought integrated across all time scales and differentiates between agricultural and hydrological impacts.

Drought in the Colorado River Basin

Elevation of Lake Mead, July 1935-October 2010

The decade-long drought in the West has had a severe impact on the water level of Lake Mead. By the end of October 2010, data from the U.S. Department of the Interior Bureau of Reclamation indicated that the level of Lake Mead had dropped to 1082.36 feet, which is the lowest level since the lake was filled in the 1930s. The previous lowest level was 1083.57 feet, reached in March 1956 during the peak of the 1950s drought. This has serious implications for water supplies in Arizona and Nevada.

Lake Mead is one of several reservoirs along the Colorado River. A major water source for the Colorado River is precipitation that falls in the central Rocky Mountains of Colorado, Wyoming, and Utah. This region is the Upper Colorado River Basin. Much of the West has experienced very dry conditions for the last ten years. This decade of drought is reflected in the precipitation received in the Upper Colorado River Basin. The early 2000s were very dry, with the Upper Colorado's Palmer Hydrological Drought Index (PHDI) reaching record low levels during the summer of 2002.

Upper Colorado River Basin Precipitation, Hydrologic Year October-September, 1895-2010
PHDI for Upper Colorado River Basin, January 1900-November 2010
A 2129-year reconstruction of precipitation for northwest New Mexico

Droughts in the West, including in the Upper Colorado Basin, have been getting more widespread and severe during the last 50 to 90 years of instrument-based weather records (large-scale U.S. weather records go back to 1895). Tree ring records provide a useful paleoclimatic index that extends our historical perspective of droughts centuries beyond the approximately 100-year instrumental record. A 2129-year paleoclimatic reconstruction of precipitation for northwest New Mexico indicates that, during the last 2000 years, there have been many droughts more severe and longer-lasting than the droughts of the last 110 years. This has implications for water management in the West. For example, the Colorado Compact is the legal agreement used for allocation of Colorado River waters among the western states. The Compact was negotiated early in the 20th century during a very wet period, which was not representative of the long-term climatic conditions of the West.

Drought vs. Aridity

When discussing drought, one must understand aridity and the difference between the two. Aridity is defined, in meteorology and climatology, as "the degree to which a climate lacks effective, life-promoting moisture" (Glossary of Meteorology, American Meteorological Society). Drought is "a period of abnormally dry weather sufficiently long to cause a serious hydrological imbalance". Aridity is measured by comparing long-term average water supply (precipitation) to long-term average water demand (evapotranspiration); if demand is greater than supply, on average, the climate is arid. Drought refers to the moisture balance on a month-to-month (or more frequent) basis: if the water supply is less than the water demand for a given month, that month is abnormally dry, and if there is a serious hydrological impact, a drought is occurring that month. Aridity is permanent; drought is temporary.

El Niño 2015/16: A Historical Perspective


This page was created to provide data and information to users based on historical El Niño events. This is not a prognostic tool, but a resource to help understand potential impacts of the current El Niño across the United States based on past events. This website and the resources provided will be continually updated throughout the cool season, so please bookmark this page for future reference to ensure you are accessing the most up-to-date information.

Dec-Feb strong El Niño precipitation departure from average
  • To help understand the potential impacts of this El Niño event, six analog events were chosen due to their similarities in magnitude (as measured by the Oceanic Niño Index), duration, and atmospheric coupling that is forecast for this event. Those previous El Niño events include: 1957-58, 1965-66, 1972-73, 1982-83, 1991-92, and 1997-98.
  • Individual and composite temperature and precipitation maps were created based on these six events. Historical comparisons are based on data back to 1950.
  • For some months and seasons, temperature and precipitation varied greatly across the country among these six events, highlighting that no two El Niño events are the same.
    • For example, the composite December-February precipitation maps show that northern California has tended to be wetter than average during strong El Niño events, but that was not the case for the specific El Niño events of 1965/66 and 1991/92. The El Niño events during 1957/58, 1982/83 and 1997/98 were exceptionally wet for northern California, which boosted the six event composite values.
  • According to the temperature and precipitation outlooks from NOAA's Climate Prediction Center, the seasonal forecast resembles the six strong El Niño event composites. While the above-average precipitation forecast is good news for drought conditions in California, the state would need close to twice its normal October-May rainfall for the drought to completely end and that is unlikely.
  • Other factors and teleconnections often play a role in winter weather. Their influence can impact seasonal temperature and precipitation outcomes in the United States. They include:
    • The Arctic Oscillation, which influences the number of arctic air masses that penetrate into the South and nor'easters on the East Coast.
    • The Madden-Julian Oscillation, which can impact the number of heavy rain storms in the Pacific Northwest.
  • In addition to general El Niño information, static maps, GIS shapefiles, and climate division timeseries and data are available. Additional El Niño monitoring products will be added in the coming months.

For the most up-to-date information on current El Niño conditions, please visit the Climate Prediction Center's ENSO page and NCEI's ENSO monitoring teleconnections page.

About El Niño

Current global SST anomaly

El Niño is characterized by unusually warm ocean temperatures in the Equatorial Pacific, as opposed to La Niña, which is characterized by unusually cold ocean temperatures in the Equatorial Pacific. El Niño and the Southern Oscillation, also known as ENSO, is a periodic fluctuation in sea surface temperature (El Niño) and the air pressure of the overlying atmosphere (Southern Oscillation) across the equatorial Pacific Ocean. ENSO has important consequences for weather in the U.S. and around the globe. El Niño conditions are typically experienced every two-to-five years.

El Niño was originally recognized by fishermen off the coast of South America as the appearance of unusually warm water in the Pacific Ocean near the beginning of the year. El Niño means The Little Boy, or Christ child, in Spanish; the name refers to the tendency of the phenomenon to arrive around the Christmas holiday.

To provide necessary data, NOAA operates a network of buoys that measure temperature, currents, and winds in the equatorial band of the Pacific Ocean. These buoys transmit daily data that are available to researchers and forecasters around the world in real time. The strength of each El Niño event can be measured in several ways, but NOAA uses the Oceanic Niño Index (ONI). The ONI is the three-month running mean of sea surface temperature departures from average in the Niño 3.4 region, an area in the central Pacific Ocean bounded by 5°N-5°S and 120°W-170°W. When the ONI exceeds +0.5°C for five consecutive overlapping three-month periods, an El Niño, or warm phase of ENSO, is considered to be underway. The strength of an El Niño event is determined by how far above zero the ONI rises: an ONI exceeding 0.5°C is considered weak, 1.0°C moderate, 1.5°C strong, and 2.0°C very strong. In the 1950-present record, the ONI has exceeded 2.0°C only twice, during the El Niño events of 1982-83 and 1997-98. The ONI is forecast to exceed 2.0°C during the current El Niño event in the Northern Hemisphere winter of 2015/16. Climatologically, the stronger the El Niño event, the stronger the impacts tend to be. However, not all El Niño events are the same, with other atmospheric and oceanic phenomena also influencing weather outcomes in the U.S. and around the globe.
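The ONI thresholds described above can be sketched as a classifier. The qualification rule (five consecutive overlapping seasons at or above +0.5°C) and the strength bins follow the text; the sample values are illustrative, not an actual ONI record:

```python
def oni_strength(oni_values):
    """Classify an El Nino event from a series of overlapping
    three-month ONI values (deg C): at least five consecutive
    overlapping seasons at or above +0.5 qualify as El Nino, and
    the peak ONI sets the strength category.
    """
    # Longest run of consecutive seasons with ONI >= 0.5
    run = best = 0
    for v in oni_values:
        run = run + 1 if v >= 0.5 else 0
        best = max(best, run)
    if best < 5:
        return "no El Nino"
    peak = max(oni_values)
    if peak >= 2.0:
        return "very strong"
    if peak >= 1.5:
        return "strong"
    if peak >= 1.0:
        return "moderate"
    return "weak"

# An ONI trace shaped like a strong event (illustrative values only):
print(oni_strength([0.6, 0.9, 1.4, 1.8, 2.1, 2.2, 1.9, 1.2, 0.6]))
# -> "very strong"
```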

El Niño events can directly and indirectly impact the weather pattern across the United States. The impacts of El Niño are typically largest in the U.S. during the cool months from October through May. During an El Niño event, the subtropical jet stream, which is defined as a belt of strong upper-level winds located near latitude 30°N, tends to become stronger and stretch across the southern U.S. and the Gulf of Mexico. Storm systems tend to follow the subtropical jet, bringing above-average precipitation to the southern United States. Once these storms tap into the Gulf of Mexico moisture, they often move up the East Coast. The southern storm track typically results in above-average precipitation for the southern half of the country, from California to the Southern Plains, as well as along the East Coast. This storminess also suppresses temperatures, with below-average temperatures accompanying the above-average precipitation. Across the northern half of the country, the winter season tends to be warmer and drier than average, particularly in the Northwest, Northern Plains, and Ohio Valley.

A considerable amount of attention has been given to this El Niño and what it will mean for the drought impacting the West, particularly in California, which has been dealing with a devastating drought for four years. The climatological signal favors above-average precipitation for Southern California and the Southwest, with below-average precipitation for the Northwest. This is encouraging for the locations where above-average precipitation is expected, but not for the others. Northern and central California fall into the area between where above-average and below-average precipitation is expected. During some strong El Niño events this part of the West has been dry, and during others it has been wet, creating a great deal of uncertainty about what this El Niño will mean for the region. This also happens to be the location of California's largest reservoirs, which were at their second-lowest water storage levels on record during the autumn of 2015.

For the most up-to-date information on current El Niño conditions, please visit the Climate Prediction Center's ENSO page and NCEI's ENSO monitoring teleconnections page.

Time Series

Climate Division Composite Data

These time series show, for states and for the climate divisions within states, how precipitation during individual strong El Niño events compares with the long-term historical baseline (1950-spring 2015) and with the average for strong El Niño events.

Temperature and Precipitation Maps

The maps below are temperature and precipitation composites based on six strong historical El Niño events (1957-58, 1965-66, 1972-73, 1982-83, 1991-92, and 1997-98). El Niño impacts are typically strongest during the cold season in the United States. For some months and seasons, temperature and precipitation varied greatly across the country among these six events, highlighting that no two El Niño events are the same. Maps for different timescales from October through May are available.

Note: 5 km gridded maps are not available for Alaska.

Frequency Maps

The following maps identify the variability of precipitation outcomes of the six strong El Niño events. The maps show the frequency of the precipitation falling in the wettest or driest third of the 1950 through spring 2015 historical record. For example, California Climate Division 7 (Southeast Desert Basins) was wetter than the 1950-2015 average (its own average) for each of the six winter seasons (December-February) when a strong El Niño was present.


Haywood Plots

The plots below, for about 200 locations in the United States, depict accumulated rainfall from October 1 through May 31 for each year on record as "threads" extending upward and rightward from zero. These plots, commonly known as "Haywood" plots, are useful to track the current season's rainfall compared to the seasonal results from the past. The current season, 2015-16, is colored darkest blue. The previous seasons that occurred during strong El Niño events are shown in light blue. The average of these strong El Niño seasons is also shown in a bold blue thread. Other seasons are shown in light gray. These plots are updated weekly. Click on the plots for a larger version.

GIS Shapefiles

The following GIS shapefile contains monthly and seasonal composite temperature anomalies, precipitation, and precipitation percent of average for the six historical strong El Niño events (1957-58, 1965-66, 1972-73, 1982-83, 1991-92, and 1997-98). Anomalies are provided with respect to the 1981-2010 base period. Data are available for each month from October through May and for the following multi-month periods: October-December, January-March, October-March, December-February, March-May, and December-May.

Divisional Data Files

The following .csv files contain monthly and seasonal composite temperature anomalies, precipitation, and precipitation percent of average for the six historical strong El Niño events (1957-58, 1965-66, 1972-73, 1982-83, 1991-92, and 1997-98). Anomalies are provided with respect to the 1981-2010 base period. Data are available for each month from October through May and for the following multi-month periods: October-December, January-March, October-March, December-February, March-May, and December-May.

Explanation of the 500 mb Flow

The sun is the primary driver of Earth's weather, creating differential heating between the tropics and the polar regions. This sets the atmosphere in motion as it continually tries to balance itself: warm air moves poleward in patterns called ridges, and cooler air moves equatorward in patterns called troughs. In the mid-latitudes (30 to 60 degrees North and South), the rotation of the earth generally causes weather systems to move eastward.

This dynamic process is best seen on the 500-millibar chart. This chart shows the circulation of the atmosphere at roughly 18,000 feet (5,486 meters) and is based on soundings taken by weather balloons twice daily. These soundings are plotted on a map, and contours are drawn connecting the points where the 500 mb pressure level sits at equal heights. Ridges extend toward the pole, are usually associated with warm, dry weather, and have the general shape of an upside-down "U" in the Northern Hemisphere. Troughs extend toward the equator, are usually associated with cool, wet weather, and have the general shape of a "U" in the Northern Hemisphere. The area of greatest surface instability (thunderstorms) is usually immediately ahead of (to the right of) the 500 mb trough.

Future Drought

Examination of data from many diverse sources shows that the world is warming. According to global climate models, this will have a significant impact on the hydrologic cycle and, consequently, on the nature of drought in the future.

The hydrologic cycle describes the movement of water between the oceans, land, and atmosphere. Two important factors are relevant: (1) warmer air can hold more water vapor (moisture), and (2) warmer air causes more evaporation (or evapotranspiration which includes water used by plants). As the world continues to warm, the air will hold more moisture and more water will be evaporated, so there will be an increase in heavy rain events producing more frequent flooding. But more evaporation with hotter temperatures will dry out the soils more and increase water demand, which is one component in the water demand versus water supply drought equation. More demand translates to more frequent and intense droughts.

So, in this climate warming scenario, an accelerated hydrologic cycle will result in more severe droughts (especially in the summer) interspersed with periods of intense flooding. This one-two — dry-wet — punch will add extra stress to our agricultural and economic systems.

Global Historical Climatology Network

NOAA's National Centers for Environmental Information (NCEI, formerly National Climatic Data Center) has developed and maintained the Global Historical Climatology Network-Monthly (GHCN-M) data set since 1992 (Vose et al. 1992). The GHCN-M data set was created with the intention of having a single repository of climate data for stations across the globe. In 1997, a second version (GHCN-M v2) of the data set was developed and released with improvements such as quality control and removal of inhomogeneities in the data record (non-climatic influences that could produce a false trend) and an increased number of stations and length of the data record (Peterson and Vose, 1997).

The GHCN-M v2 has been the official dataset since its release, and has been widely used in several international climate assessments, such as the Intergovernmental Panel on Climate Change, as well as our monthly State of the Climate reports and in the yearly State of the Climate published in the Bulletin of the American Meteorological Society.

NCEI scientists continued to improve the data set, resulting in a version 3 of GHCN-M. The primary advances in GHCN-M v3, versus GHCN-M v2, are further improvements to quality control processes, more advanced techniques for removing data inhomogeneities, and improved station coverage. For more information, please visit the GHCN-Monthly website.

On May 2, 2011, NCEI (known at the time as the National Climatic Data Center) transitioned to GHCN-M v3 as the official land component of its global temperature monitoring efforts. GHCN-M v2 will continue to be updated through May 30, 2011, but no support for that version of the dataset will be provided.

Global Precipitation Percentile Maps

Global anomaly maps are an essential tool when describing the current state of the climate across the globe. Precipitation anomaly maps tell us whether the precipitation observed for a specific place and time period (for example, month, season, or year) was drier or wetter than a reference value, which is usually a 30-year average, and by how much.

The August 2012 Global State of the Climate report introduces percentile maps that complement the information provided by the anomaly maps. These new maps provide additional information by placing the precipitation anomaly observed for a specific place and time period into historical perspective, showing how the most current month, season or year compares with the past.

Precipitation Climatological Ranking

In order to place the month, season, or year into historical perspective, each grid point's precipitation values for the time period of interest (for example all August values from 1900 to 2012) are sorted from driest to wettest, with ranks assigned to each value. The numeric rank represents the position of that particular value throughout the historical record. The length of record increases with each year. It is important to note that each grid point's period of record may vary, but all grid points displayed in the map have a minimum of 80 years of data. For example, considering a grid point with a period of record of 113 years, a value of "1" in the precipitation record refers to record driest, while a value of "113" refers to record wettest.

The Drier than Average, Near Average, and Wetter than Average shadings on the precipitation percentile maps represent the driest, middle, and wettest thirds (terciles) of the sorted distribution, respectively. Much Drier than Average and Much Wetter than Average refer to the driest and wettest decile (bottom or top 10 percent) of the distribution, respectively. For a 113-year period, Drier than Average (Wetter than Average) would represent one of the 38 driest (wettest) such periods on record. However, if the value ranked among the 11 driest (wettest) on record, that value would be classified as Much Drier than Average (Much Wetter than Average). Near Average would represent a precipitation value in the middle third (rank of 39 to 75) of the record.
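The rank-to-category step can be sketched directly. The thresholds below reproduce the worked 113-year precipitation example (and the analogous 133-year temperature example) from this page; the function name is ours, not NCEI's.

```python
def classify_rank(rank, n_years):
    """Map a dry-to-wet rank (1 = record driest) onto the map categories."""
    decile = round(n_years / 10)    # 11 of 113 years, 13 of 133
    tercile = round(n_years / 3)    # 38 of 113 years, 44 of 133
    if rank <= decile:
        return "Much Drier than Average"
    if rank > n_years - decile:
        return "Much Wetter than Average"
    if rank <= tercile:
        return "Drier than Average"
    if rank > n_years - tercile:
        return "Wetter than Average"
    return "Near Average"

print(classify_rank(11, 113))   # -> Much Drier than Average
print(classify_rank(38, 113))   # -> Drier than Average
print(classify_rank(60, 113))   # -> Near Average
print(classify_rank(113, 113))  # -> Much Wetter than Average
```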

Global Temperature Percentile Maps

Global anomaly maps are an essential tool when describing the current state of the climate across the globe. Temperature anomaly maps tell us whether the temperature observed for a specific place and time period (for example, month, season, or year) was warmer or cooler than a reference value, which is usually a 30-year average, and by how much.

The August 2012 Global State of the Climate report introduces percentile maps that complement the information provided by the anomaly maps. These new maps provide additional information by placing the temperature anomaly observed for a specific place and time period into historical perspective, showing how the most current month, season or year compares with the past.

Temperature Climatological Ranking

In order to place the month, season, or year into historical perspective, each grid point's temperature values for the time period of interest (for example all August values from 1880 to 2012) are sorted from warmest to coolest, with ranks assigned to each value. The numeric rank represents the position of that particular value throughout the historical record. The length of record increases with each year. It is important to note that each grid point's period of record may vary, but all grid points displayed in the map have a minimum of 80 years of data. For the global temperature anomaly record, the data does extend back to 1880. But not all grid points have data from 1880 to present. Considering a grid point with a period of record of 133 years, a value of "1" in the temperature record refers to record warmest, while a value of "133" refers to record coldest.

The Warmer than Average, Near Average, and Cooler than Average shadings on the temperature percentile maps represent the warmest, middle, and coolest thirds (terciles) of the sorted distribution, respectively. Much Warmer than Average and Much Cooler than Average refer to the warmest and coolest decile (top or bottom 10 percent) of the distribution, respectively. For a 133-year period, Warmer than Average (Cooler than Average) would represent one of the 44 warmest (coolest) such periods on record. However, if the value ranked among the 13 warmest (coolest) on record, that value would be classified as Much Warmer than Average (Much Cooler than Average). Near Average would represent a temperature value in the middle third (rank of 45 to 89) of the record.

Groundwater Drought Indicators

Groundwater is an important factor, or indicator, for drought. Like soil moisture, which is the amount of moisture in the top layers of the ground where plants and crops grow, groundwater is the amount of moisture in the ground but at much deeper levels. Groundwater is recharged from the surface, from sources such as rainfall and streamflow. If there is a substantial amount of groundwater, an aquifer is formed which can be tapped for agricultural, municipal, or industrial use via extraction wells. The depth at which the ground is completely saturated with water is called the water table.

Groundwater is measured via a network of wells, which monitor the depth of the water table. For drought-monitoring purposes, this well data needs to be compared to the historical record of the well, usually in the form of a percentile. The following factors affect the usefulness of well data:

  • the length of record of each well varies,
  • some wells report on a near-real time operational basis, while others report on a delayed basis and only periodically,
  • the water table level is affected by water drawn out for irrigation, municipal, industrial, and other non-drought purposes, so well data may not reflect true drought conditions, and
  • there is limited spatial coverage of wells across the country.

As a result, other groundwater monitoring methods are being developed. One is satellite observation. Saturated ground has a different mass than dry ground, mass affects the local pull of gravity, and gravity affects orbiting satellites. By measuring the effect of varying gravity fields on a pair of satellites as they orbit the earth, estimates of groundwater can be calculated from the GRACE (Gravity Recovery and Climate Experiment) satellite project.


LOESS

LOESS is an acronym ("LOcal regrESSion") for a locally estimated scatterplot smoother. It is a nonparametric smoothing method, so no assumptions are made about the underlying structure of the data. LOESS uses local regression to fit a smooth curve through a scatterplot of data, and the resulting curve is typically smoother than one produced by a binomial filter or running average. LOESS also remains effective when there are outliers in the data, and the methodology includes techniques for constructing confidence intervals around the curve. For more information see Cleveland, W.S., 1979: Robust locally-weighted regression and smoothing scatterplots. Journal of the American Statistical Association, 74, 829-836. DOI: 10.1080/01621459.1979.10481038
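A bare-bones sketch of the local regression step (tricube weights, local linear fit) shows the idea; Cleveland's full algorithm adds iterative robustness weighting to down-weight outliers, which is omitted here.

```python
def loess_point(x0, xs, ys, span=0.5):
    """Smoothed value at x0 using the nearest span-fraction of the points."""
    n = len(xs)
    k = max(2, int(round(span * n)))
    # indices of the k points nearest to x0
    dist = sorted(range(n), key=lambda i: abs(xs[i] - x0))[:k]
    dmax = max(abs(xs[i] - x0) for i in dist) or 1.0
    # tricube weights: w = (1 - (d/dmax)^3)^3, so far points count for little
    w = {i: (1 - (abs(xs[i] - x0) / dmax) ** 3) ** 3 for i in dist}
    # weighted least-squares line through the neighborhood
    sw = sum(w.values())
    mx = sum(w[i] * xs[i] for i in dist) / sw
    my = sum(w[i] * ys[i] for i in dist) / sw
    num = sum(w[i] * (xs[i] - mx) * (ys[i] - my) for i in dist)
    den = sum(w[i] * (xs[i] - mx) ** 2 for i in dist)
    slope = num / den if den else 0.0
    return my + slope * (x0 - mx)

xs = list(range(10))
ys = [2 * x + 1 for x in xs]             # noiseless line: LOESS reproduces it
print(round(loess_point(5, xs, ys), 6))  # -> 11.0
```

Evaluating `loess_point` at each x in the series traces out the smooth LOESS curve.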

Measuring Drought

The wide variety of disciplines affected by drought, its diverse geographical and temporal distribution, and the many scales drought operates on make it difficult to develop both a definition to describe drought and an index to measure it. Many quantitative measures of drought have been developed in the United States, depending on the discipline affected, the region being considered, and the particular application. Several indices developed by Wayne Palmer, as well as the Standardized Precipitation Index, are useful for describing the many scales of drought.

Common to all types of drought is the fact that they originate from a deficiency of precipitation resulting from an unusual weather pattern. If the weather pattern lasts a short time (say, a few weeks or a couple months), the drought is considered short-term. But if the weather or atmospheric circulation pattern becomes entrenched and the precipitation deficits last for several months to several years, the drought is considered to be a long-term drought. It is possible for a region to experience a long-term circulation pattern that produces drought, and to have short-term changes in this long-term pattern that result in short-term wet spells. Likewise, it is possible for a long-term wet circulation pattern to be interrupted by short-term weather spells that result in short-term drought.

The Palmer Drought Indices

The Palmer Z-Index measures short-term drought on a monthly scale. The Palmer Crop Moisture Index (CMI) measures short-term drought on a weekly scale and is used to quantify drought's impacts on agriculture during the growing season.

The Palmer Drought Severity Index (PDSI) (known operationally as the Palmer Drought Index (PDI)) attempts to measure the duration and intensity of the long-term drought-inducing circulation patterns. Long-term drought is cumulative, so the intensity of drought during the current month depends on the current weather patterns plus the cumulative patterns of previous months. Since weather patterns can change almost overnight from a long-term drought pattern to a long-term wet pattern, the PDSI (PDI) can respond fairly rapidly.

The hydrological impacts of drought (e.g., reservoir levels, groundwater levels, etc.) take longer to develop and it takes longer to recover from them. The Palmer Hydrological Drought Index (PHDI), another long-term drought index, was developed to quantify these hydrological effects. The PHDI responds more slowly to changing conditions than the PDSI (PDI).

The Standardized Precipitation Index

While Palmer's indices are water balance indices that consider water supply (precipitation), demand (evapotranspiration) and loss (runoff), the Standardized Precipitation Index (SPI) is a probability index that considers only precipitation. The SPI is an index based on the probability of recording a given amount of precipitation, and the probabilities are standardized so that an index of zero indicates the median precipitation amount (half of the historical precipitation amounts are below the median, and half are above the median). The index is negative for drought, and positive for wet conditions. As the dry or wet conditions become more severe, the index becomes more negative or positive. The SPI is computed by NCEI for several time scales, ranging from one month to 24 months, to capture the various scales of both short-term and long-term drought.
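The operational SPI fits a probability distribution (typically a gamma) to the precipitation record. The sketch below swaps in a simpler empirical probability estimate, which still conveys the standardization idea: the probability of the observed total is converted to a standard normal score, so zero means the median, negative means dry, and positive means wet. The function name and record are ours, for illustration only.

```python
from statistics import NormalDist

def spi_empirical(value, record):
    """Approximate SPI for one precipitation total given its history."""
    n = len(record)
    # Weibull plotting position: estimate P(X <= value) from the ranks
    below = sum(1 for r in record if r <= value)
    prob = below / (n + 1)
    # standardize: map that probability onto a standard normal score
    return NormalDist().inv_cdf(prob)

record = [1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0]  # hypothetical totals
print(round(spi_empirical(3.0, record), 3))  # median of the record -> SPI 0.0
print(round(spi_empirical(1.0, record), 3))  # driest on record -> negative SPI
```

Running the same calculation on 1-month, 3-month, ..., 24-month precipitation sums gives the multiple time scales described above.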

National Data Flow

NOAA's National Centers for Environmental Information (NCEI) is the world's largest active archive of weather data. Each month, observers who are part of the National Weather Service Cooperative Observer Program (COOP) send their land-based meteorological surface observations of temperature and precipitation to NCEI to be added to the U.S. data archives. The COOP network is the country's oldest surface weather network and consists of more than 11,000 observers. At the end of each month, the data are transmitted to NCEI via telephone, computer, or mail.

Typically by the 3rd day of the following month, NCEI has received enough data to run processes which are used to calculate divisional averages within each of the 48 contiguous states. These climate divisions represent areas with similar temperature and precipitation characteristics (see Guttman and Quayle, 1996 for additional details). State values are then derived from the area-weighted divisional values. Regions are derived from the statewide values in the same manner. These results are then used in numerous climate applications and publications, such as the monthly U.S. State of the Climate Report.

The U.S. operational suite of products transitioned from the traditional divisional dataset to the Global Historical Climatology Network (GHCN) dataset in the summer of 2011. The GHCN dataset is the world's largest collection of daily climatological data. The GHCN utilizes many of the same surface stations as the current divisional dataset, and the data are delivered to NCEI in the same fashion. Further details on the transition and how it will affect the customer will be made available in the near future.

nClimDiv Maximum and Minimum Temperatures

As a supplemental release to the nClimDiv divisional dataset transition in early 2014, divisional, statewide, regional, and national monthly maximum and minimum temperature data are now available from 1895 to the present. As with the monthly average temperature data, the maximum and minimum temperature data are derived from a 5 km gridded instance of the nClimGrid dataset (Vose et al. 2014) using data from the GHCN-Daily database.

Did you Know?… trends in maximum and minimum temperature are different?

On a national scale, the century-long warming trend in minimum temperature is a little larger than the maximum temperature trend (Figure 1), although the trend over the last 40 years in maximum temperature is slightly larger than for minimum temperature (0.50°F/Decade vs. 0.46°F/Decade).

Tmax and Tmin Annual Time Series

Figure 1. Annual maximum and minimum temperature time series plots with trend lines.
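Trend values like those quoted above come from fitting a line to an annual time series and expressing the slope per decade. A minimal sketch of that calculation, using made-up numbers rather than the national record:

```python
def trend_per_decade(years, temps):
    """Ordinary least-squares slope, scaled from per-year to per-decade."""
    n = len(years)
    my = sum(years) / n
    mt = sum(temps) / n
    num = sum((y - my) * (t - mt) for y, t in zip(years, temps))
    den = sum((y - my) ** 2 for y in years)
    return 10.0 * num / den

years = list(range(1980, 2020))
temps = [50.0 + 0.05 * (y - 1980) for y in years]  # 0.05 deg F/yr built in
print(round(trend_per_decade(years, temps), 2))    # -> 0.5
```

The same fit over 1895-present versus the last 40 years is what makes the century-long and recent trends in Figure 1 directly comparable.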

Did You Know?… maximum and minimum ranks should not be averaged?

Unlike averaging temperature values, ranks in monthly or seasonal maximum and minimum temperature do not necessarily give an indication of what the monthly average temperature rank might be. Each dataset (Tmax, Tmin and Tavg) has its own history and ranks associated with each value. Thus, the ranks between datasets are determined independent of one another. An illustration of this point can be seen by looking at year-to-date temperature (Jan-Apr) for California (Figure 2). The average maximum temperature for this period was 65.0°F and was record warmest with a rank of 120. Average minimum temperature was 40.7°F and ranked 3rd warmest with a rank of 118. Now, one might think that the average temperature rank should be the average of the maximum and minimum ranks, but that is not the case. The average temperature for this period in California was 52.8°F, which corresponds to a rank of 120, or record warmest. The point to remember with this example is that we do not average maximum and minimum temperature ranks to determine the average temperature rank. We do average maximum and minimum temperature values to get the average temperature for a given period.

April 2014 Year-to-Date Statewide Temperature Ranks

Figure 2. State rank maps for maximum, average and minimum temperatures for the January-April 2014 year-to-date period.
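The California example can be mimicked with a toy calculation. The numbers below are invented, but they show the point: Tavg values are the average of Tmax and Tmin, yet each dataset is ranked against its own history, so the Tavg rank is not the average of the other two ranks.

```python
def rank_of_last(values):
    """Warm rank of the most recent value within its own record (n = warmest)."""
    return sorted(values).index(values[-1]) + 1

tmax = [60.0, 62.0, 61.0, 65.0]          # hypothetical Jan-Apr Tmax history
tmin = [40.0, 44.0, 43.0, 42.0]          # hypothetical Tmin history
tavg = [(a + b) / 2 for a, b in zip(tmax, tmin)]  # [50.0, 53.0, 52.0, 53.5]

print(rank_of_last(tmax))  # -> 4 (record warmest in its own record)
print(rank_of_last(tmin))  # -> 2 (second coolest in its own record)
print(rank_of_last(tavg))  # -> 4, not the average of the ranks above
```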

Did You Know?… when comparing historical temperature records, maximum and minimum temperature data need to be adjusted to account for artificial shifts in the data?

As was stated in the introduction, the nClimDiv divisional maximum and minimum temperature values are derived from a gridded instance (nClimGrid) of the GHCN-Daily database.

NCEI employs a temperature data "homogenization" algorithm that is designed to account for artificial shifts in the historical record and reduce the error in trend calculations. In short, the homogenized data should better reflect the real long-term temperature trends both in the aggregate U.S. average and within individual station records. Why is this? The locations of weather stations that measure daily highs and lows have changed over time. Changes have also occurred in the measurement technology. These types of changes have often caused jumps or shifts in the historical temperature readings in NOAA's networks of weather stations that have nothing to do with real climate change or variability. At any particular station, the shifts can be as large as, or even larger than, real year-to-year variation in temperature. The shifts therefore often also lead to large errors in calculating long-term climate trends. Collectively, these widespread changes throughout the network lead to errors that accumulate over time in network-wide averages. The causes and impacts of historical changes to U.S. weather stations have been discussed in a number of scientific papers which document the impact of observational changes as well as how the homogenization algorithm improves the accuracy of the temperature record.

These studies (cited below) indicate that the impacts of changes in observation practice have had important but somewhat different effects on maximum temperature versus minimum temperature trends. In the case of maximum temperatures, the evidence indicates that there are widespread negative or "cool" shifts throughout the historical observations that have artificially depressed the true rate of change in maximum temperatures since about the year 1950. These shifts appear to be caused primarily by two major changes in observation practice, specifically changes in the time of observation and changes in the type of thermometers used to measure temperature over the period of record. (see Menne et al. 2009)

For minimum temperatures, these same changes have caused shifts that work in opposition to each other. Specifically, false cooling has been caused by shifts associated with changes in observation time throughout the Cooperative Observer network since about 1950. In addition, some false cooling in the U.S. minimum temperature average also appears to have been caused by shifts associated with station moves to locations with somewhat cooler microclimates, largely between about 1930 and 1950. On the other hand, false warming in the aggregate U.S. trend also appears to have occurred since the mid-1980s, caused by shifts associated with changes in thermometer technology. NCEI's homogenization algorithm is designed to remove the impact of these shifts, and the homogenized data therefore provide a more accurate estimate of long-term changes in maximum and minimum temperature.

Finally, given that many stations in the U.S. are located in urban areas, a number of the station records appear to be locally impacted by changes associated with urbanization. In a comprehensive assessment of possible urban impacts, Hausfather et al. (2013) found that the urban warming signal is larger in minimum temperatures than in maximum temperatures. The homogenization process also addresses this urban signal in individual and aggregate station records.

For more on the science, see Menne et al. 2009, which provides an overview of these impacts on U.S. weather stations as well as discusses the approach to homogenization. Additionally, other key science papers related to this topic are listed below.

For additional information on the nClimDiv dataset and access to these data, please visit our CIRS FTP site and our climate division reference page.


Palmer Drought Index

In 1965, Wayne Palmer developed a drought index which built upon decades of earlier attempts to define and monitor drought. Palmer's index was revolutionary because, for the first time, it expressed drought and wet spells as a standardized index reflecting the imbalance between water supply and water demand. Many new drought indices have been developed since then, with some utilizing satellite observations and more sophisticated computer models among other things. The U.S. Drought Monitor incorporates information from all of these indices as well as drought impacts in depicting drought.

Potential Evapotranspiration

The Palmer drought indices measure the balance between moisture demand and moisture supply. Drought results from an imbalance between these two components. Precipitation provides the water supply. Water demand is usually measured by evapotranspiration (the amount of water that would be evaporated and transpired by plants). There is a distinction made between potential evapotranspiration (PE) and actual evapotranspiration (AE). The Palmer model uses Thornthwaite's equations to estimate PE from temperature. PE is the demand or maximum amount of water that would be evapotranspired if enough water were available (from precipitation and soil moisture). AE is how much water actually is evapotranspired and is limited by the amount of water that is available. AE is always less than or equal to PE, so PE is used for the water demand component of the drought equation.
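As a sketch of the demand term, the unadjusted form of Thornthwaite's equations can be written in a few lines. The coefficients below follow the standard published form of the method; the operational calculation also applies a day-length and latitude correction to each month, which is omitted here, and the sample temperatures are invented.

```python
def heat_index(monthly_temps_c):
    """Annual heat index I from the 12 mean monthly temperatures (deg C)."""
    return sum((t / 5) ** 1.514 for t in monthly_temps_c if t > 0)

def thornthwaite_pe(temp_c, I):
    """Unadjusted potential evapotranspiration (mm) for one month."""
    if temp_c <= 0:
        return 0.0  # no evaporative demand estimated for freezing months
    a = 6.75e-7 * I**3 - 7.71e-5 * I**2 + 1.792e-2 * I + 0.49239
    return 16.0 * (10.0 * temp_c / I) ** a

temps = [2, 4, 8, 13, 18, 23, 26, 25, 20, 14, 8, 3]  # a temperate climate
I = heat_index(temps)
print(round(thornthwaite_pe(26, I), 1))  # July demand far exceeds January's
print(round(thornthwaite_pe(2, I), 1))
```

The key property for the drought equation is visible here: PE rises sharply with temperature, so hotter months demand much more water.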

In the Palmer model, if the amount of precipitation (P) during the month is greater than PE for the month, then the leftover P soaks into the ground to recharge soil moisture, and any left over after that runs off as streamflow. If P is less than PE, then moisture has to be drawn out of the soil to meet the PE demand. Hotter temperatures result in greater PE which requires more P just to meet the greater demand. Climates where PE is always greater than P are termed arid climates. The American Southwest is a typical arid climate.
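The monthly bookkeeping just described can be sketched as a single function. The soil capacity and moisture numbers below are hypothetical, and this is only the supply-versus-demand skeleton of the Palmer model, not its full accounting.

```python
def month_step(p, pe, soil, capacity):
    """Return (new_soil_moisture, runoff) after one month of P and PE."""
    if p >= pe:
        surplus = p - pe
        # leftover precipitation recharges the soil up to capacity...
        recharge = min(surplus, capacity - soil)
        # ...and anything beyond that runs off as streamflow
        return soil + recharge, surplus - recharge
    # demand exceeds supply: draw what the soil can give, no runoff
    withdrawal = min(pe - p, soil)
    return soil - withdrawal, 0.0

# Wet month: 5" of rain against 3" of demand, soil nearly full
print(month_step(p=5.0, pe=3.0, soil=9.5, capacity=10.0))   # -> (10.0, 1.5)
# Dry month: 1" of rain against 4" of demand
print(month_step(p=1.0, pe=4.0, soil=10.0, capacity=10.0))  # -> (7.0, 0.0)
```

In an arid climate, where PE always exceeds P, the dry branch runs every month and the soil term only ever declines between rains.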

Reforestation of Bastrop Lost Pines

In central Texas, preparations for the first volunteer planting workday at Bastrop State Park occurred during November, according to media reports. Most of the 6,600-acre park's signature "Lost Pines" were destroyed during the 2011 fires, deemed the most destructive wildfires in the state's history after devastating more than 32,000 acres. Drought-hardy loblolly pine seedlings were nurtured during the past year from over 1,000 pounds of surplus seeds in refrigerated storage. Over 400,000 seedlings were delivered to the park for the planting event commencing on December 1st. The seeds were collected as part of cooperative efforts between five states and eight industrial partners, led by the Texas A&M Forest Service, to promote the best genetic quality seed for use in forest regeneration programs in the Western Gulf Region of the United States. Planting of more than one million seedlings is planned for each of the next two years. Geneticists estimated that the 10-inch tall seedlings need up to 25 years to reach the mature size of the former Bastrop Lost Pines.

Regional Climate Centers

RCC Locations

The regional sections of the U.S. State of the Climate report are written by NOAA's Regional Climate Centers (RCCs). These monthly updates provide the report with detail and relevance that would not be possible without the RCCs' regional expertise and perspective. RCCs partner with NCEI, the National Weather Service, the American Association of State Climatologists, and NOAA Research Institutes to collectively deliver and improve climate services at the national, regional, and state level.

How RCCs support NOAA's climate services:

Regional Snowfall Index (RSI)

Sample ReSIS Image

The National Centers for Environmental Information is producing a new regional snowfall index: the Regional Snowfall Index (RSI). Like the Northeast Snowfall Impact Scale (NESIS), RSI uses snowfall and population to create an index that puts snowstorms and their societal impacts into historical perspective. However, RSI uses only the snowfall and population information within a particular region (a collection of states) to calculate an index. NESIS uses snowfall and population information from the eastern two-thirds of the United States and is therefore a quasi-national index. It is called "Northeast" because some of the constants in the algorithm used to calculate NESIS are specifically calibrated to the Northeast, a region with abundant snowfall and a large population. The constants in the RSI algorithms are specific to the region in which an index is being calculated, so RSI is a true regional index.

Satellite-Based Drought Indicators

The most accurate assessment of drought and climatic conditions can be made by taking measurements where the climate is happening — on the ground or in the atmosphere. These are called in situ observations. There are several valuable in situ observational networks around the world that provide crucial data to support weather forecasters, industries and economies, and government and private sector decision makers and policy makers. To get the best information, networks with many stations spaced close together (high spatial density) are needed, but this kind of dense spatial coverage can be expensive. From orbit, satellites can supplement in situ data by providing consistent observations at high spatial density with global coverage.

Satellites measure energy intensities (radiances) at several wavelengths of the electromagnetic spectrum. This information is useful because everything — the ground, the oceans, the atmosphere, clouds, rain, vegetation, cities, people, etc. — absorbs energy at certain wavelengths and emits energy at other wavelengths. You may be familiar with some of these: visible satellite imagery of clouds showing the movement and strength of storms and fronts; infrared imagery which measures the temperature of clouds and weather systems; water vapor maps generated from a spectroscopic analysis of satellite data.

Two commonly-used satellite-based drought indicators are the Vegetation Health Index (VHI) and Vegetation Drought Response Index (VegDRI).

  • The VHI is a NOAA product that contains several indices derived from polar-orbiting NOAA satellites. The data and images have global coverage and represent average conditions over a 7-day period. The VHI monitors the health of vegetation regardless of the cause. Poor vegetation health, as indicated by the VHI, may be due to stress caused by drought, stress caused by too much water (e.g., flooding), or some other cause (such as insect infestation).
  • The VegDRI product is produced by the National Drought Mitigation Center (NDMC) in collaboration with several other partners. It is a national map covering the contiguous U.S. produced every two weeks and provides regional to sub-county scale information about drought's effects on vegetation. It is a unique hybrid product, in that the VegDRI calculations integrate satellite-based observations of vegetation conditions with in situ climate data and other information such as land cover/land use type, soil characteristics, and ecological setting. The VegDRI monitors the health of vegetation as it is specifically related to drought.

Soil Moisture Water Balance Models

In an ideal world, measurements of soil moisture content — at several levels from the surface of the ground to five feet below or deeper — would be available on a daily basis for every backyard and field across the United States. This type of observation network would give us a good idea of how dry or wet the ground is and help tremendously with drought monitoring. Such a national soil moisture observation network doesn't exist. At present, a few hundred soil moisture stations are scattered around the country. So, water balance models are run on a gridded spatial scale in order to get an idea of what the national soil moisture conditions are. Examples of soil moisture water balance models include the "Leaky Bucket", North American Land Data Assimilation System (NLDAS), and VegET models. Water balance models typically use precipitation as the water supply component. They calculate a water demand component (evapotranspiration) using temperature and other variables such as humidity, wind speed, and insolation (solar energy). Then they calculate fluxes (how water and energy change over time and space) to estimate things like soil moisture, soil temperature, snow water content, and stream runoff. Some of the models use station soil moisture observations and satellite observations of surface wetness as "ground truth" for calibration. While not perfect, using a variety of models gives us a good idea of where soils are drying across the country, especially in areas where no soil moisture observations exist.
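A toy "Leaky Bucket"-style run illustrates the time-stepping idea at a single grid point: each month adds precipitation, removes an evaporative demand limited by what is available, and leaks a fraction of storage as drainage and runoff. The parameters and series below are illustrative inventions, not those of the operational NOAA model.

```python
def leaky_bucket(precip, demand, capacity=200.0, leak=0.1, soil=100.0):
    """Return the soil-moisture trace (mm) for monthly P and PE series."""
    trace = []
    for p, pe in zip(precip, demand):
        soil = soil + p - min(pe, soil + p)  # evapotranspiration, supply-limited
        soil -= leak * soil                  # a fraction leaks out as drainage
        soil = min(soil, capacity)           # anything above capacity runs off
        trace.append(round(soil, 1))
    return trace

wet_then_dry = leaky_bucket(
    precip=[120, 110, 10, 0, 0, 5],
    demand=[60, 60, 80, 90, 90, 80],
)
print(wet_then_dry)  # storage climbs in the wet months, then drains away
```

Running a model like this on every grid cell, with real forcing data, is what produces the national soil moisture maps used for drought monitoring.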

Southern Hemisphere Snow Cover Extent

As part of its monthly State of the Climate Global Snow & Ice report, the National Centers for Environmental Information monitors monthly and seasonal snow cover extent across the Northern Hemisphere's major land areas — North America and Eurasia — using data from the Rutgers Global Snow Lab. Snow cover extent is sensitive to both regional temperatures and precipitation patterns across the mid and high latitudes, providing an important metric for monitoring the earth's climate system. By utilizing NOAA satellites, snow cover can be observed over all land areas, not just locations with surface-based observations, which can be sparse in less-developed areas of the globe.

Snow cover extent in the Southern Hemisphere is not currently examined for several reasons:

  • Snow cover can only be measured over land areas, and not the ocean surface. Most of the surface area of the Southern Hemisphere is covered by ocean.
  • Excluding Antarctica, the Southern Hemisphere land area in the mid and high latitudes is very small compared to the Northern Hemisphere land area at the same latitudes.
  • The Antarctic continent is generally snow covered year round, with very little annual variation.
  • Snow cover is difficult to measure across Antarctica and the high elevations of South America. These land areas are mostly covered by glaciers, and it is nearly impossible to distinguish snow cover from glacial ice using current satellite technology.

Standardized Precipitation Index

Drought results from an imbalance between water supply and water demand. The Standardized Precipitation Index (SPI), one of several drought indices that have been developed over the decades, measures water supply, specifically precipitation. The SPI is computed over several time scales, typically from one month to 24 months, to evaluate both short-term drought and long-term drought. It can also measure precipitation excess. The U.S. Drought Monitor incorporates information from the SPI and many other drought indices, as well as drought impacts, in depicting drought.
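The core of the SPI is to accumulate precipitation over a chosen time scale and standardize the totals against their own history. The operational SPI fits a gamma distribution to the accumulations and transforms it to a standard normal; the plain z-score below stands in for that step and is a simplification for illustration only.

```python
# Simplified SPI-style calculation (illustrative, not the operational
# gamma-based method). Negative values indicate drier than average,
# positive values indicate precipitation excess.
from statistics import mean, stdev

def spi_like(monthly_precip, scale=3):
    """Standardize rolling `scale`-month precipitation totals."""
    totals = [sum(monthly_precip[i:i + scale])
              for i in range(len(monthly_precip) - scale + 1)]
    mu, sigma = mean(totals), stdev(totals)
    return [(t - mu) / sigma for t in totals]

# Eight hypothetical months of precipitation on a 3-month scale
z = spi_like([3, 2, 4, 1, 0, 5, 3, 2], scale=3)
```

Running the same series through longer scales (6, 12, 24 months) is what lets the SPI separate short-term dryness from long-term drought.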

State of the Climate RSS


NOAA's National Centers for Environmental Information produces monthly State of the Climate (SOTC) reports. These monthly reports summarize climate conditions and extreme weather events across the globe, while placing them into historical perspective. An RSS feed notifies subscribers and delivers these reports when they are updated. You can subscribe by visiting the State of the Climate RSS Feed.

Streamflow Drought Indicators

Streamflow levels are a useful indicator for drought as well as floods, but they must be used with considerable caution. For regulated streams, the streamflow may reflect management decisions at upstream reservoirs rather than true drought conditions; unregulated streams are better for drought monitoring. A hydrograph shows stream discharge (the volume of water flowing past a point per unit time) over time. When a precipitation event occurs over a river basin, the discharge increases to a high level (peak flow), then gradually decreases toward the base flow after the precipitation ends. The time it takes to reach peak flow and the magnitude of the peak flow are useful for flood forecasting. Base flow is the level the stream would have if no precipitation occurred over the basin and is generally fed by groundwater. Base flow is the best streamflow indicator for drought monitoring. In practice, however, the weather doesn't cooperate to provide widespread measurements of base flow at a network of river gauge stations on an operational basis: at any given time, streamflow measurements across the nation are a mix of peak flow, base flow, and everything in between. Consequently, streamflow measurements are averaged over several days to a month to indicate areas with low streamflow and, by implication, low groundwater levels. The values are compared to the local streamflow history, converted into percentiles, and grouped into above- and below-normal percentile classes to be used as an indicator for drought classification.
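The averaging-and-percentile step can be sketched as follows. The percentile thresholds here are illustrative assumptions in the spirit of common streamflow-condition classes, not an official classification.

```python
# Percentile classification sketch for a streamflow drought indicator.
# `historical_avgs` would be many years of flows averaged over the same
# window (e.g. 7-day means for this date); values here are made up.

def flow_percentile(current_avg, historical_avgs):
    """Percent of historical averaged flows below the current value."""
    below = sum(1 for h in historical_avgs if h < current_avg)
    return 100.0 * below / len(historical_avgs)

def classify(pct):
    # Illustrative thresholds, not an official scheme
    if pct < 10:
        return "below normal (possible drought)"
    if pct > 90:
        return "above normal"
    return "normal range"

history = [120, 95, 140, 80, 110, 100, 130, 90, 105, 125]  # cfs, hypothetical
pct = flow_percentile(70, history)  # current 7-day average flow, say
```

Averaging over several days before ranking is what smooths out the peak-flow spikes that would otherwise mask a drying baseline.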

Subtropical Highs

The sun is the ultimate source of energy that drives the earth's weather. Most of the energy reaches the equatorial regions and the least energy reaches the poles, causing the tropics to warm and the poles to cool. The earth's atmosphere redistributes this heat imbalance through a complex set of atmospheric circulation patterns. The warm air at the low latitudes rises and moves toward the poles. The rising air, and the subsequent clouds and precipitation, cause the tropics to be very wet. As the air moves toward the subtropics, it descends over the oceans and creates semi-permanent circulation features called subtropical highs. In the Northern Hemisphere, these high pressure systems are located over the North Pacific and North Atlantic oceans. The North Atlantic High is generally centered over Bermuda, so it is also known as the Bermuda High. The air under subtropical highs warms and dries as it descends, resulting in generally sunny skies and dry weather. Cold air from the poles flows toward lower latitudes to complete the redistribution of the heat imbalance in the atmosphere. This cold polar air collides with warmer subtropical air in the mid-latitudes, resulting in frontal precipitation and low pressure cyclonic storm systems.

This entire system of fronts, subtropical highs, and tropical rain migrates with the seasons, moving northward during the Northern Hemisphere summer and southward during the Northern Hemisphere winter. Sometimes during the summer, the Bermuda High will extend further to the west than usual, encompassing a significant part of the southern and eastern United States. Its descending air inhibits precipitation and its anticyclonic circulation pattern deflects tropical storms and hurricanes to the south and weakens cold fronts to its north, resulting in heat waves and droughts.

Tornado Count

The final monthly tornado count is typically less than the preliminary count because some reports turn out not to be tornadoes, or because a single tornado is reported multiple times. This report gives the preliminary monthly tornado count because the final count was not available at the time of production. Reports of tornadoes come into the National Weather Service from trained spotters, local officials, emergency responders, the media, and the general public. Frequently, a reported tornado is actually a cloud feature that merely resembles a tornado. Historically, for every 100 preliminary tornado reports, at least 65 tornadoes are typically confirmed. The red error bar shown for the tornado count represents this uncertainty in the preliminary count. The local National Weather Service forecast offices are responsible for going into the field and verifying each reported tornado, and the final count is archived by the Storm Prediction Center (SPC). The SPC publishes the final tornado count once all reports have been investigated, which usually takes several months.
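The historical confirmation rate gives a crude way to bound the eventual final count from a preliminary one. The 0.65 factor below reflects the "at least 65 of every 100" figure above; it is only a rough lower bound for illustration, not the SPC's verification method.

```python
# Rough preliminary-to-final tornado count bound (illustrative only).
# 0.65 = historical minimum confirmation rate noted in the text.

def final_count_range(preliminary, confirm_rate_low=0.65):
    """Crude (low, high) bound on the final count from a preliminary count."""
    low = round(preliminary * confirm_rate_low)
    high = preliminary  # the final count rarely exceeds the preliminary
    return low, high

low, high = final_count_range(200)  # e.g. 200 preliminary reports
```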

U.S. Climate Normals

1971-2000 Normals: approximately 8000 stations

Beginning August 1, 2011, the U.S. ASOS maps and the monthly mean maximum and minimum temperature anomaly maps will use the newly updated 1981-2010 Normals to calculate anomalies. Other 30-year Normals anomaly-based monitoring products will not be affected by the Normals transition until early 2012.

Climate Normals are three-decade averages of climatological variables. The most widely used Normals are those for daily and monthly station-based temperature, precipitation, snowfall, and heating and cooling degree days. These come from NOAA's Cooperative and First-Order station networks. Meteorologists and climatologists regularly use Normals for placing recent climate conditions into historical context, such as comparisons with the day's weather conditions on local television. Normals are also utilized in many applications across a variety of sectors. These include regulation of power companies, energy load forecasting, crop selection and planting times, construction planning, building design, and many others.
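Computing a Normal and an anomaly against it is straightforward averaging, as this sketch with made-up station data shows; real Normals are built from quality-controlled 30-year Cooperative and First-Order station records.

```python
# Computing a station's monthly Normal and an anomaly against it.
# The 30 July temperatures below are hypothetical, standing in for a
# station's 1981-2010 record.
from statistics import mean

# 30 hypothetical July mean temperatures (deg F), 1981-2010
july_temps = [75.2 + 0.1 * (i % 5) for i in range(30)]

july_normal = mean(july_temps)          # the 30-year average
observed_july = 77.4                    # a new July's mean temperature
anomaly = observed_july - july_normal   # positive = warmer than normal
```

The anomaly, not the raw temperature, is what monitoring products map, which is why the Normals baseline matters so much.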

Several changes and additions will be incorporated into the 1981-2010 Normals. Monthly temperature and precipitation Normals will utilize underlying data values that have undergone additional quality control to account for things such as stations having been moved. Unlike the 1971-2000 Normals, daily (rather than monthly) data will be used extensively in daily temperature Normals as well as heating and cooling degree day Normals, providing greater precision of intra-seasonal features. More details can be found in Arguez et al. 2011.

When the new Normals are released, relevant comparisons between the new version and previous versions will be highlighted. Observational evidence shows that the 2000-2009 timeframe was the warmest decade on record in the U.S. Therefore, we expect that the new Normals will generally be warmer on average for most stations, but not uniformly warmer for all stations and all seasons. In fact, some station Normals in certain seasons will become cooler.

For information on the current Normals, as well as the history of the Normals, please visit U.S. Climate Normals. For general questions about Normals or help accessing the 1971-2000 product, please contact NCEI.

U.S. Drought Monitor Scale

US Drought Monitor Legend

The U.S. Drought Monitor established a drought scale much like those that rate hurricanes and tornadoes. The "D-scale" speaks to the "unusualness" of a drought episode. Over the long run, D1 conditions are expected to occur about 10 to 20 percent of the time. D4 is much rarer, expected less than 2 percent of the time. For more detailed information about drought definitions, please visit the U.S. Drought Monitor.
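The D-scale's percentile framing can be sketched as a simple lookup. The thresholds below are typical of published Drought Monitor guidance (D1 roughly the driest 11-20 percent of the time, D4 the driest 2 percent) but should be treated as illustrative, not the official definition.

```python
# Mapping a percentile ranking onto the Drought Monitor D-scale.
# A low percentile means current conditions rank among the driest in
# the historical record. Thresholds are illustrative assumptions.

def d_scale(percentile):
    if percentile <= 2:
        return "D4 (exceptional drought)"
    if percentile <= 5:
        return "D3 (extreme drought)"
    if percentile <= 10:
        return "D2 (severe drought)"
    if percentile <= 20:
        return "D1 (moderate drought)"
    if percentile <= 30:
        return "D0 (abnormally dry)"
    return "none"
```

The nesting makes the "unusualness" idea concrete: each higher D-category corresponds to a rarer, more extreme percentile band.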

USHCN Version 2.5 Transition

Since 1987, National Centers for Environmental Information (NCEI, formerly National Climatic Data Center) has used observations from the U.S. Historical Climatology Network (USHCN) to quantify national- and regional-scale temperature changes in the conterminous United States (CONUS). To that end, USHCN temperature records have been "corrected" to account for various historical changes in station location, instrumentation, and observing practice. The USHCN is a designated subset of the NOAA Cooperative Observer Program (COOP) Network. USHCN sites were selected according to their spatial coverage, record length, data completeness, and historical stability. The USHCN, therefore, consists primarily of long-term COOP stations whose temperature records have been adjusted for systematic, non-climatic changes that bias temperature trends.

The National Centers for Environmental Information periodically improves the quality of the datasets maintained at the center and releases updated versions. Beginning with the September 2012 processing, NCEI (then known as NCDC) began using USHCN version 2.5 for national temperature calculations as well as in other products, including Climate at a Glance and the Climate Extremes Index. For additional information on the improvements made to USHCN version 2.5, please visit USHCN.

Water Supply vs. Water Demand

It is important for drought indices to have a term for water demand (evaporation or evapotranspiration, ET) as well as water supply (precipitation). Drought indices like the Standardized Precipitation Index (SPI) or precipitation percentiles measure only water supply, while indices like the Palmer Drought Index incorporate a balance between water supply and water demand. Indices that measure just water supply tell only part of the story; indices based on the hydrological balance between water supply and water demand give a more complete picture. Water supply is easy to measure: it is just precipitation, perhaps with a component that accounts for soil moisture. Water demand is harder to calculate and is frequently based on temperature. Warmer temperatures generally result in more ET and greater water demand, increasing the risk of drought.