by Kevin E. Trenberth*
The climate is changing. In general, temperatures are increasing (Figure 1), owing to human-induced changes in the composition of the atmosphere, notably increased carbon dioxide from the burning of fossil fuels (IPCC, 2007). Land areas are mostly warming faster than the oceans. A close examination of Figure 1, however, shows that temperatures actually declined from 1901 to 2005 over the south-eastern USA and the North Atlantic. Why is this? In the North Atlantic, changes in ocean currents clearly contribute. Over the south-eastern USA, changes in the atmospheric circulation that brought cloudier and much wetter conditions played a major role (Trenberth et al., 2007). This non-uniformity highlights the challenge of regional climate change, which has considerable spatial structure and temporal variability.
Figure 1 — Linear trend of annual temperatures for 1901 to 2005 (°C century⁻¹). Areas in grey have insufficient data to produce reliable trends. Trends significant at the 5% level are indicated by white + marks. (From Trenberth et al., Climate Change 2007: The Physical Science Basis, Intergovernmental Panel on Climate Change)
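A trend map such as Figure 1 rests on a simple per-gridpoint calculation: an ordinary least-squares fit of annual temperature against year, with the slope's t statistic deciding whether the trend is significant at the 5% level. A minimal sketch on synthetic data (the station series, its 0.7°C-per-century slope and the noise level are illustrative assumptions, not values taken from the figure):

```python
import numpy as np

def trend_per_century(years, temps, t_crit=1.98):
    """Least-squares linear trend (deg C per century) and a simple
    5% significance check on the slope's t statistic.
    t_crit ~ 1.98 is the two-sided 5% critical value for ~100 d.o.f."""
    years = np.asarray(years, float)
    temps = np.asarray(temps, float)
    n = len(years)
    x = years - years.mean()
    slope = (x * (temps - temps.mean())).sum() / (x * x).sum()
    resid = temps - temps.mean() - slope * x
    se = np.sqrt((resid ** 2).sum() / (n - 2) / (x * x).sum())
    return slope * 100.0, abs(slope / se) > t_crit

# Synthetic annual series: 0.7 deg C/century warming plus weather noise
rng = np.random.default_rng(0)
years = np.arange(1901, 2006)
temps = 0.007 * (years - 1901) + rng.normal(0, 0.2, years.size)
trend, significant = trend_per_century(years, temps)
```

Applied gridpoint by gridpoint, a calculation of this kind yields both the colours and the + marks of Figure 1.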
Climate research and future projections are founded on observations, which come from many and varied sources; many are taken for weather-forecasting purposes. Changes in instrumentation and siting are common and disrupt the climate record, for which continuity and homogeneity are vitally important in assessing climate variations and change. Increasing volumes of observations come from space-based platforms, but satellites have a finite lifetime (typically five years or so), their orbits drift and decay over time and their instruments degrade; hence, the apparent climate record can become corrupted by spurious changes. An ongoing challenge is to create climate data records from these observations to serve many purposes.
Loss of Earth-observing satellites is also of concern, as documented in the recent National Research Council decadal survey (2007). Ground-based observations are not being adequately maintained in many countries. Calibration of climate records is critical: small changes over a long time are characteristic of climate change, but they occur in the midst of large variations associated with weather and natural climate variability, such as El Niño. Yet the climate is changing and it is imperative to track the changes and their causes as they occur and to identify the prospects for the future, to the extent that they are predictable. We need to build a system based on these observations that informs decision-makers about what is happening and why, and what the predictions are for the future on several time horizons.
In this article, an outline is given of a subset of activities related to the needs of decision-makers for climate information for adaptation purposes. It builds on some discussions held at a workshop on learning from the Fourth Assessment of the Intergovernmental Panel on Climate Change (IPCC) (Sydney, Australia, 4-6 October 2007). The Workshop was sponsored by the Global Climate Observing System, the World Climate Research Programme (WCRP) and the International Geosphere-Biosphere Programme of the International Council for Science. Within WCRP, the WCRP Observations and Assimilation Panel (WOAP), which the author chairs, attempts to highlight outstanding issues and ways forward in addressing them.
Building an information base for adaptation
A detailed diagnosis of the vital signs of planet Earth has revealed that the planet is running a “fever” and the prognosis is that it is liable to become much worse. “Warming of the climate system is unequivocal” and it is “very likely” due to human activities. This is the verdict of the Fourth Assessment Report of the IPCC, known as AR4 (IPCC, 2007). Although mitigation of human-induced climate change is vitally important, the evidence suggests that climate will continue to change substantially as a result of human activities over the next several decades, so that adaptation will be essential.
An essential first step is to build a climate information system (Trenberth et al., 2002; 2006) that informs decision-makers about what is happening and why, and what the immediate prospects are (see Figure 2). Overall, what is required is: observations that satisfy the climate-observing principles; a performance-tracking system; the ingest, archival and stewardship of data; access to data, including data management and integration; analysis and re-analysis of the observations and derivation of products, especially climate data records (National Research Council, 2005); assessment of what has happened and why (attribution), including likely impacts on humans and ecosystems; prediction of near-term climate change over several decades; and responsiveness to decision-makers and users.
Figure 2 — A schematic of the flow of the climate information system, as basic research feeds into applied and operational research and the development of climate services. The system is built on the climate observing system, which includes the analysis and assimilation of data using models to produce analyses and fields for initializing models; models are used for attribution and prediction, and all the information is assessed and assembled into products that are disseminated to users. The users, in turn, provide feedback on their needs and on how to improve the information.
This first means gathering information on changes in climate and on the external forcings and attributing, to the extent possible, why the changes have occurred. Any attribution activity is fundamentally about the science of climate predictability, including the predictability of climate variability (e.g. El Niño-Southern Oscillation (ENSO), seasonal variability, etc.) as well as of long-term climate change. Indeed, understanding and attributing what has just happened would seem to be a prerequisite to making the next climate prediction. The central goal is to understand the causes of observed climate variability and change, including the uncertainties, and to be able to: (a) use this understanding to improve model realism and forecast skill; and (b) communicate this understanding to users of climate knowledge and to the public in general. Attribution may proceed in two stages. The first entails running atmospheric models to determine the extent to which recent conditions could have been predicted, given the observed sea-surface temperatures (SSTs), soil moisture, sea ice and other anomalous influences on the atmosphere. The second is to explain why the SSTs, soil moisture, etc., are the way they are. As models have become better, climate events that could not be attributed in the past now can be.
Models are still far from perfect and are likely to underestimate what can actually be attributed. Individual researchers may be ahead of consensus. Accordingly, both steps should take account of shortcomings of models, and empirical (statistical, etc.) evidence can often be more compelling. A requirement is to have significantly expanded computer resources for ensemble simulations and simulations that have sufficient resolution for regional-scale climate attribution (e.g. droughts, hurricanes, floods). A research question is how to do this efficiently.
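The attribution logic can be illustrated with a drastically simplified version of the "fingerprint" regression used in detection-and-attribution studies: observations are regressed onto a model-simulated forced response, and a scaling factor whose uncertainty range excludes zero indicates a detectable signal. Everything below (the forced-response shape, the noise level, the series length) is invented for illustration, not actual data or the formal optimal-fingerprint method:

```python
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1901, 2006)

# Hypothetical model-simulated forced response: accelerating warming (deg C)
signal = 0.8 * ((years - 1901) / (years[-1] - 1901)) ** 2
# "Observations" = forced signal plus synthetic internal variability
obs = signal + rng.normal(0, 0.15, years.size)

# OLS scaling factor beta in: obs ~ beta * signal + noise
beta = (signal @ obs) / (signal @ signal)
resid = obs - beta * signal
se = np.sqrt(resid @ resid / (len(obs) - 1) / (signal @ signal))
lo, hi = beta - 2 * se, beta + 2 * se  # ~95% interval for beta
detected = lo > 0.0  # interval excludes zero -> signal is detected
```

A scaling factor consistent with 1 additionally indicates that the model's forced response has roughly the right amplitude; real studies use multiple fingerprints and noise estimates from control runs.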
The development of this climate information system potentially takes on, in a more operational framework, a key part of what is currently done by the IPCC. The research questions are many on how to develop the system and what the system includes in ways that make it viable. The related follow-on activities are then to improve and initialize climate models and make ensemble predictions for the next 30 years or so, as given below.
Initialization and validation of decadal forecasts
Running atmospheric models with specified SSTs has often produced understanding of past climate anomalies. For instance, the Sahel drought (Giannini et al., 2003) and the “Dust Bowl” period of drought in the USA in the 1930s (Schubert et al., 2004; Seager et al., 2005) can be simulated in this way. Hurrell et al. (2004) find that some aspects of the North Atlantic Oscillation can be simulated with prescribed SSTs. It is essential to have the patterns of SSTs around the globe simulated much as observed, and it is clearly not possible to make such predictions without initialization of oceans and other aspects of the climate system.
The extent to which this leads to predictability is not yet clear but the underlying hypothesis is that there is significant predictability that can be exploited for improved adaptation and planning by decision-makers. Early tests of this approach (Smith et al., 2007) show the promise and benefit of initializing models, but the benefit thus far stems mainly from ENSO.
In a 30-year time frame, climate predictions are not sensitive to emissions scenarios, so this aspect can largely be removed from consideration. Yet forecasts in this time frame would be exceedingly valuable. Climate (change) predictions are therefore needed to provide information on a time-scale of 0-30 years, with estimates of uncertainty (ensembles) and of sensitivity to errors in initial conditions. This also leads to improved models through regular testing against data, as noted below. The WCRP has initiated research in this area under the banner of “seamless climate prediction”, which calls for prediction on multiple time-scales as initial-value problems requiring specification of the initial observed state: from numerical weather prediction, to extended range over weeks (see The Observing System Research and Predictability Experiment (THORPEX)), to interannual variability including ENSO, to multi-decadal predictions.
For weather prediction, detailed analyses of the atmosphere are required but uncertainties in the initial state grow rapidly over several days. For climate predictions, the initial state of the atmosphere is less critical; states separated by a day or so can be substituted. However, the initial states of other climate-system components, some of which may not be critical to day-to-day weather prediction, become vital. For predictions of a season to a year or so, SSTs, sea-ice extent and upper-ocean heat content, soil moisture, snow cover and state of surface vegetation over land are all important. Such initial value predictions are already operational for forecasting El Niño and extensions to the global oceans are underway. On longer time-scales, increased information throughout the ocean is essential. The mass, extent, thickness and state of sea ice and snow cover are vital at high latitudes. The states of soil moisture and surface vegetation are especially important in understanding and predicting warm season precipitation and temperature anomalies, along with other aspects of the land surface. Any information on systematic changes to the atmosphere (especially its composition and influences from volcanic eruptions), as well as external forcings, such as from changes in the Sun, is also needed. Uncertainties in the initial state and the lack of detailed predictability of the atmosphere and other aspects of climate mandate that ensembles of predictions must be made and statistical forecasts given.
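The need for ensembles follows from the rapid growth of initial-condition errors. This can be illustrated with the classic Lorenz-63 toy model of atmospheric chaos (not a climate model; the parameters, perturbation size and integration length are illustrative choices): tiny perturbations to the initial state grow until the ensemble members diverge across the attractor, so only a statistical forecast is meaningful.

```python
import numpy as np

def lorenz_step(s, dt=0.01):
    """One 4th-order Runge-Kutta step of the Lorenz-63 system
    with the standard parameters (sigma=10, rho=28, beta=8/3)."""
    def f(s):
        x, y, z = s
        return np.array([10.0 * (y - x),
                         x * (28.0 - z) - y,
                         x * y - (8.0 / 3.0) * z])
    k1 = f(s)
    k2 = f(s + 0.5 * dt * k1)
    k3 = f(s + 0.5 * dt * k2)
    k4 = f(s + dt * k3)
    return s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

rng = np.random.default_rng(1)
base = np.array([1.0, 1.0, 20.0])
# 20-member ensemble with tiny initial-condition perturbations
ensemble = base + rng.normal(0, 1e-4, size=(20, 3))
spread0 = ensemble.std(axis=0).max()
for _ in range(1500):  # integrate ~15 time units
    ensemble = np.array([lorenz_step(m) for m in ensemble])
spread1 = ensemble.std(axis=0).max()
```

The ensemble spread grows by orders of magnitude, which is why the text insists that predictions be issued as distributions rather than single trajectories.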
The activity feeds directly into providing regional predictions, including downscaling, and should result in probability distributions for fields of interest. The results would have direct applications regionally where impacts are most felt and where planned adaptation can occur.
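Turning an ensemble into decision-relevant information is then largely a matter of summarizing its distribution, for example as quantiles and exceedance probabilities for a threshold of interest. A sketch (the ensemble values and the 1.5°C threshold are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical 200-member ensemble of 30-year regional warming (deg C)
ensemble = rng.normal(1.2, 0.4, size=200)

# Quantiles give the central estimate and the uncertainty range
q10, q50, q90 = np.percentile(ensemble, [10, 50, 90])
# Exceedance probability for a planning-relevant threshold
p_exceed = (ensemble > 1.5).mean()
```

Such summaries, downscaled to the regions where impacts are felt, are the natural interface between the prediction system and adaptation planners.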
Predictability should arise from certain phenomena that evolve slowly or have large thermal inertia, such as ocean current systems (including the meridional overturning circulation of the ocean), ice-sheets, sea-level and land properties. Some predictability can be determined from model experiments, but only to the extent that the models themselves are adequate. Coping with systematic errors is a particular challenge in assimilating real observations. Having high-quality, comprehensive datasets available, both to initialize the models and to test them in hindcast mode, is vitally important and is linked to re-analysis of the climate-system components, perhaps in coupled mode. Such data are also essential for improving models. Compromises have to be made over model resolution and fidelity versus multiple runs and perturbation ensembles, as well as multi-model ensembles. Metrics for evaluation are a developing field, but attention must be devoted to modes of variability, such as ENSO, and to how to cope with missing phenomena, such as tropical cyclones.
Confronting models with observations
It is desirable to confront models with a variety of observational evidence in order to interpret observed historical changes in the climate system and to have confidence in projections of future change. The AR4 demonstrates that we now have a relatively good understanding of the causes of surface temperature changes observed over the 20th century on both global and continental scales. We are able to quantify the contributions to observed change from the main external influences (including human influences) on the climate over the past century. However, our capability of interpreting change in most other impacts-relevant variables (such as circulation change, precipitation change and changes in extremes of various types) remains more limited.
It is necessary to perform specifically designed experiments with climate models to isolate and correct the causes of long-standing model biases at the process level: for example, persistent difficulties in the representation of convective processes lead to poor representation of the diurnal cycle of precipitation, of coupled air-sea modes of interaction and of the distribution of marine stratus. For instance, the Cloud Feedback Model Intercomparison Project (CFMIP) focuses on cloud feedbacks, while the Transpose Atmospheric Model Intercomparison Project employs climate models in weather-forecast mode to examine biases that develop rapidly in forecasts of up to five days.
There is a pressing need to develop and apply a set of community-accepted model metrics that could be used to weigh the many different models contributing to the large ensembles. The metrics could be based on the ability to simulate the mean annual cycle; observed climate variability on scales from hours (diurnal) to decadal; the features of the longer-term historical evolution of the climate system as estimated from paleo records of, for example, the last millennium, the Last Glacial Maximum, or other times when observational constraints are adequate; and the ability to produce short-term weather predictions as an initial value problem and short-term evolution of the climate system over the satellite period as an initial value and boundary forced problem.
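One simple form such metric-based weighting could take is inverse-error weighting of models against an observed reference field. The models, the "observed" annual cycle and the weighting rule below are all invented for illustration; community metrics would combine many such scores across variables and time-scales:

```python
import numpy as np

rng = np.random.default_rng(4)
# Mock observed reference: a 12-month annual cycle
obs = np.sin(np.linspace(0, 2 * np.pi, 12))

# Three hypothetical models with different error levels vs the reference
models = {
    "model_A": obs + rng.normal(0, 0.1, 12),
    "model_B": obs + rng.normal(0, 0.3, 12),
    "model_C": obs + rng.normal(0, 0.6, 12),
}

# Score each model by RMSE, then weight by inverse mean-square error
rmse = {k: np.sqrt(((v - obs) ** 2).mean()) for k, v in models.items()}
raw = {k: 1.0 / r ** 2 for k, r in rmse.items()}
total = sum(raw.values())
weights = {k: w / total for k, w in raw.items()}  # weights sum to 1
```

Better-performing models thereby contribute more to the weighted ensemble, which is the intent of the proposed metrics.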
Reprocessing and re-analyses
While we have generally seen continuing improvement in the Earth-observing satellite network, with significant enhancements in the measurements made by operational satellites, problems have arisen in establishing and maintaining climate observations from space. These are highlighted by the de-scoping of the National Polar-orbiting Operational Environmental Satellite System, in which climate observations have been seriously compromised. Longer-term prospects for Earth observations are also not as good as they have been (National Research Council, 2007).
The continuing problems in establishing and maintaining global measurements of essential climate variables highlight the need for formal international coordination of these measurements across agencies and missions, in liaison with user groups from the climate community. Coordination is especially important in the design phase of missions (so that consistency and continuity can be maintained) and in the calibration and validation processes (so that spatially and temporally consistent data can be collected). Both these aspects have been identified by the Committee on Earth Observation Satellites through the virtual constellation concept and the Global Space-based Inter-Calibration System.
Much more work is needed to take advantage of observations already made. A key part of the overall strategy in creating climate data records is a vibrant programme of re-processing of past data (GCOS, 2005) and re-analysis of all the data into global fields. The IPCC AR4 report demonstrates shortcomings in many climate records, especially those from space. Related research has also demonstrated, however, the potential for improvements in the records as progress is made on algorithm development and solutions are found to problems. These include discontinuities in the record across different instruments and satellites, the effects of orbital drift and decay, and all issues related to the creation of true climate data records.
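The simplest of these reprocessing problems is splicing records from successive instruments that disagree by a roughly constant offset: the inter-satellite bias can be estimated from the overlap period and removed before merging. A toy sketch (the records, the 0.3°C offset and the 30-month overlap are all synthetic assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.arange(240)  # months
# "True" climate: slow warming plus an annual cycle
truth = 0.001 * t + 0.1 * np.sin(2 * np.pi * t / 12)

sat1 = truth[:150] + rng.normal(0, 0.02, 150)        # months 0-149
sat2 = truth[120:] + 0.3 + rng.normal(0, 0.02, 120)  # months 120-239, biased +0.3

# Estimate the inter-satellite offset from the 30-month overlap (months 120-149)
offset = (sat2[:30] - sat1[120:150]).mean()
sat2_adj = sat2 - offset

# Merge: satellite 1 up to the overlap midpoint, adjusted satellite 2 after
merged = np.concatenate([sat1[:135], sat2_adj[15:]])
```

Without the overlap-based adjustment, the +0.3°C jump would masquerade as an abrupt climate shift; real reprocessing must additionally handle drifting biases, diurnal sampling and instrument degradation.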
The WCRP Observations and Assimilation Panel has posted a set of guidelines on when and whether it is appropriate to carry out re-processing. Coordination among the major space agencies is highly desirable to agree on algorithms and calibration procedures. The fields would include temperature, water vapour, clouds, radiation, sea-surface temperatures, sea ice and snow cover and, especially, tropical storms and hurricanes. The research community can do this; but it requires substantial resources and coordination among international space agencies, in particular.
Global atmospheric analyses are produced operationally in real-time. As the assimilating model used to analyse the observations is improved, the analyses may change in character. Re-analysis is the name given to the re-processing of all these and other observations with a state-of-the-art system that is held constant in time, thereby improving the continuity of the resulting climate record, although the challenge of dealing with the changing observing system remains. The result is a more coherent description of the changing atmosphere, ocean, land surface and other climate components that can be utilized by the many customers for climate products, including products for quantities that cannot be directly observed. Re-analysis thus contributes to the capacity-building objectives of programmes such as the Global Earth Observing System of Systems and should be considered an essential component of a climate observing system. WOAP promotes re-analysis and the Third International Re-analysis Conference is being held in Tokyo, Japan, in January 2008.
Building a climate information system (see Figure 2) potentially integrates research and output from WCRP, the International Geosphere-Biosphere Programme, Diversitas (an international programme of biodiversity science (ESSP/IGBP/IHDP/WCRP)), the International Human Dimensions Programme on Global Environmental Change (ESSP/DIVERSITAS/IGBP/WCRP) and the Global Climate Observing System. Basic research feeds into applied and operational research that, in turn, develops climate products and services.
Many aspects require research on assembling observations, analysis and assimilation, attribution studies, establishing relationships among physical and environmental impact variables, running models and coping with model biases, predictions and projections, downscaling and regionalizing results and developing information systems and ways of interacting with users. Meeting the challenges in the above research requires adequate funding but potentially pays off with a valuable information system.
References

Global Climate Observing System, 2004: Implementation Plan for the Global Observing System for Climate in Support of the UNFCCC. GCOS-92, 143 pp.
Giannini, A., R. Saravanan and P. Chang, 2003: Oceanic forcing of Sahel rainfall on interannual to interdecadal time scales. Science, 302, 1027−1030.
Hoerling, M. and A. Kumar, 2003: The perfect ocean for drought. Science, 299, 691−694.
Hurrell, J.W., M.P. Hoerling, A.S. Phillips and T. Xu, 2004: Twentieth Century North Atlantic climate change. Part I: Assessing determinism. Climate Dyn., 23, 371-389.
Intergovernmental Panel on Climate Change (IPCC), 2007: Climate Change 2007—The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the IPCC. (S. Solomon, D. Qin, M. Manning, Z. Chen, M.C. Marquis, K.B. Avery, M. Tignor and H.L. Miller (Eds)). Cambridge University Press, Cambridge, UK, and New York, NY, USA, 996 pp.
National Research Council, 2004: Climate Data Records from Environmental Satellites: Interim Report, National Academy Press, Washington, DC, USA. 105 pp.
National Research Council, 2007: Earth Science and Applications from Space: National Imperatives for the Next Decade and Beyond. The National Academies Press, Washington, DC, USA.
Schubert, S.D., M.J. Suarez, P.J. Pegion, R.D. Koster and J.T. Bacmeister, 2004: Causes of long-term drought in the United States Great Plains. J. Climate, 17, 485−503.
Seager, R., Y. Kushnir, C. Herweijer, N. Naik and J. Velez, 2005: Modeling of tropical forcing of persistent droughts and pluvials over western North America: 1856-2000. J. Climate, 18, 4065-4088.
Smith, D.M., S. Cusack, A.W. Colman, C.K. Folland, G.R. Harris and J.M. Murphy, 2007: Improved surface temperature prediction for the coming decade from a global climate model. Science, 317, 796-799.
Trenberth, K.E., T.R. Karl and T.W. Spence, 2002: The need for a systems approach to climate observations. Bull. Amer. Meteor. Soc., 83, 1593–1602.
Trenberth, K.E., B. Moore, T.R. Karl and C. Nobre, 2006: Monitoring and prediction of the Earth’s climate: A future perspective. J. Climate, 19, 5001−5008.
Trenberth, K.E., P.D. Jones, P. Ambenje, R. Bojariu, D. Easterling, A. Klein Tank, D. Parker, F. Rahimzadeh, J.A. Renwick, M. Rusticucci, B. Soden and P. Zhai, 2007: Observations: Surface and Atmospheric Climate Change. In: Climate Change 2007. The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. (S. Solomon, D. Qin, M. Manning, Z. Chen, M.C. Marquis, K.B. Avery, M. Tignor and H.L. Miller (Eds)). Cambridge University Press. Cambridge, UK, and New York, NY, USA, 235−336, plus annex online.
* National Center for Atmospheric Research, Boulder, CO 80307, USA E-mail: trenbert [at]ucar.edu