
Robust Adaptation to Multi-Scale Climate Variability


Abstract

Evaluating and optimizing investments in climate adaptation requires projecting future climate risk over the operational life of each proposed investment. While many studies have considered that different climate change scenarios may emerge over the course of this M-year future period, adaptation policies remain vulnerable to the temporal and spatial clustering of climate risk that dominates much of the observational record. Large-scale, low-frequency climate variability can induce spatial shocks by favoring simultaneous extremes around the world, and can also cause a historical record to be a misleading indicator of future risk. In this work we consider whether the limited information in an N-year observational record permits the identification and projection of quasi-periodic climate variability and secular change, and what the resulting bias and uncertainty portend for risk mitigation instruments with a service life ranging from a few years to several decades. We present a set of stylized experiments to assess how well one can learn and predict these two kinds of risk over the design life (M years), and the probability of over- or under-design of a climate adaptation strategy based on these projections. We consider different temporal structures for the underlying risk, encompassing quasi-periodic, regime-like, and secular variability, as well as statistical models for estimating this risk from an N-year historical record. The relative importance of estimating the short- or long-term risk associated with these extremes depends on the design life M, but the potential to understand and predict these different types of variability depends on the informational uncertainty in the N-year historical record. Though we use floods as an example, the framework also applies to other forms of climate extremes.
H52F-05H: Robust Adaptation to
Multi-Scale Climate Variability
Toward Better Water Planning and Management in an
Uncertain World I
James Doss-Gollin1, David J. Farnham2, Scott Steinschneider3, Upmanu Lall1
14 December 2018
1Columbia University Department of Earth and Environmental Engineering
2Carnegie Institution for Science
3Department of Biological and Environmental Engineering, Cornell University
Motivating Example
What to do after Sandy? [City of New York, 2013]
James Doss-Gollin
Idea 1: Risk Estimates over Finite Future Periods
Typical Approach:
Cost-Benet Analysis (CBA), probably with discounting, over a nite
planning horizon of Myears.
Project should be evaluated on climate conditions over this nite
planning period:
For “mega-project”, M50 years
For small, exible project, M5 years
Idea 2: Hydroclimate Systems Vary on Many Scales
Inter-annual to multi-decadal cyclical variability is key (for small M)
Figure 1: (a) 500-year reconstruction of summer rainfall over Arizona from the LBDA [Cook et al., 2010]; (b) a 100-year record of annual-maximum streamflows (by water year) for the American River at Folsom; (c), (d) the corresponding global (average) wavelet spectra.
Idea 3: Physical Drivers of Risk Depend on M
The physical drivers of hazard depend on the projection horizon (M),
but our ability to identify these mechanisms depends on information
available (e.g., the length of an N-year observational record).
Stylized Experiments
Experiment Setup
Research Objective
How well can one identify & predict cyclical and secular climate signals
over a finite planning period (M), given limited information?
Let p = P(X > x*) denote the exceedance probability of a design threshold x*. Note that the insurance premium (or risk factor) is:
R = E[p] + λ V[p]
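Given posterior samples of the exceedance probability, this risk factor is straightforward to compute. A minimal sketch, in which the Beta posterior and the value of λ are illustrative assumptions:

```python
import numpy as np

# Sketch: risk factor R = E[p] + lambda * V[p], computed from posterior
# samples of the exceedance probability p = P(X > x*). The Beta posterior
# and the value of lambda below are illustrative assumptions.

def risk_factor(p_samples, lam=1.0):
    p = np.asarray(p_samples, dtype=float)
    return p.mean() + lam * p.var()

rng = np.random.default_rng(42)
p_post = rng.beta(2, 98, size=10_000)   # posterior centered near p = 0.02
R = risk_factor(p_post, lam=10.0)
```

The variance term is what makes informational uncertainty in the N-year record costly: two records with the same expected exceedance probability can carry very different premiums.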
Systematic, stylized experiments:
what happens as we vary M, N,
the climate structure, and the estimating model?
Stationary Scenario (LFV Only)
With limited data, the uncertainties caused by extrapolating from
complex models lead to poor performance.
Nonstationary Scenario I (Secular Change Only)
Long planning periods need trend estimation, but this demands lots of
information. For short planning periods, simple models may be better.
Nonstationary Scenario II (Secular Change + LFV)
As the system becomes more complex, more data is needed to
understand it.
• Investment evaluation depends on climate conditions over a finite planning period
• Physical hydroclimate systems vary on many scales
• Physical drivers of risk depend on the planning period
• The ability to identify and predict different climate signals depends on the information available (e.g., N)
• The importance of predicting different climate signals depends on the extrapolation desired (i.e., the planning period)
• In general, low risk tolerance and/or limited information favor investments with short planning periods.
References
Carpenter, B., et al., Stan: A Probabilistic Programming Language, Journal of Statistical
Software, 76(1), 1–29, doi:10.18637/jss.v076.i01, 2017.
City of New York, A Stronger, More Resilient New York, Tech. rep., New York, 2013.
Cook, E. R., R. Seager, R. R. Heim Jr, R. S. Vose, C. Herweijer, and C. Woodhouse,
Megadroughts in North America: Placing IPCC projections of hydroclimatic change in a
long-term palaeoclimate context, Journal of Quaternary Science,25(1), 48–61,
doi:10.1002/jqs.1303, 2010.
Doss-Gollin, J., D. J. Farnham, S. Steinschneider, and U. Lall, Robust adaptation to multi-scale
climate variability.
Rabiner, L., and B. Juang, An Introduction to Hidden Markov Models, IEEE ASSP Magazine,
3(1), 4–16, doi:10.1109/MASSP.1986.1165342, 1986.
Ramesh, N., M. A. Cane, R. Seager, and D. E. Lee, Predictability and prediction of persistent cool
states of the Tropical Pacific Ocean, Climate Dynamics, 49(7-8), 2291–2307,
doi:10.1007/s00382-016-3446-3, 2016.
Schreiber, J., pomegranate: Fast and flexible probabilistic modeling in Python, 2017.
Zebiak, S. E., and M. A. Cane, A Model El Niño-Southern Oscillation, Monthly Weather Review,
115(10), 2262–2278, doi:10.1175/1520-0493(1987)115<2262:AMENO>2.0.CO;2, 1987.
Thanks for your attention!
Interested in making these ideas more
concrete? I’d love to collaborate!
Supplemental Discussion
Idealized Experiments vs. the Real World
The idealized models used here are analogs:

Analysis → Real world
• N-year record → total informational uncertainty of an …
• Statistical models of increasing complexity and # parameters → statistical and dynamical model chains of increasing complexity and # parameters
• Linear trends → secular changes of unknown form
• Low-frequency climate variability (LFV) from the El Niño–Southern Oscillation → LFV from many sources
• LFV and trend additive → LFV and trend interact
Generating Synthetic Streamflow
Example Sequences and Fits
Figure A1: Example of sequences generated with M=100 and N=50
Equations for Synthetic Streamow Generation
log Q(t) ~ N(µ(t), σ(t)),   (A1)

where σ(t) = ξ µ(t), with σ(t) ≥ σ_min > 0. Then,

µ(t) = µ0 + β x(t) + γ (t − t0),   (A2)

where x(t) is the NINO3.4 index from a realistic ENSO model [Zebiak and Cane, 1987; Ramesh et al., 2016].
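The generation scheme in (A1)–(A2) can be sketched as follows. Since the Zebiak–Cane NINO3.4 series is not reproduced here, an AR(1) surrogate stands in for x(t); that surrogate and all parameter values are illustrative assumptions, not the talk's:

```python
import numpy as np

# Sketch of eqs. (A1)-(A2): log Q(t) ~ N(mu(t), sigma(t)) with
# mu(t) = mu0 + beta * x(t) + gamma * (t - t0) and sigma(t) = xi * mu(t),
# floored at sigma_min. The AR(1) stand-in for the NINO3.4 index x(t)
# and all parameter values are illustrative assumptions.

def synthetic_streamflow(T, mu0=7.0, beta=0.5, gamma=0.01, xi=0.1,
                         sigma_min=0.01, phi=0.8, seed=0):
    rng = np.random.default_rng(seed)
    x = np.zeros(T)                        # AR(1) surrogate for NINO3.4
    for t in range(1, T):
        x[t] = phi * x[t - 1] + rng.normal(0, np.sqrt(1 - phi ** 2))
    t_idx = np.arange(T)
    mu = mu0 + beta * x + gamma * t_idx    # eq. (A2), with t0 = 0
    sigma = np.maximum(xi * mu, sigma_min) # floor keeps sigma positive
    log_q = rng.normal(mu, sigma)          # eq. (A1)
    return np.exp(log_q)

q = synthetic_streamflow(150)
```

Setting β = 0 removes the LFV component and γ = 0 removes the secular trend, which is how the stationary and nonstationary scenarios above can be constructed from one generator.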
Spectrum of LFV Used
Figure A2: Wavelet spectrum of a subset of the ENSO model output used to embed synthetic streamflow
sequences with low-frequency variability. ENSO data from Ramesh et al. [2016].
Climate Risk Estimation
Stationary LN2 Model
Treat the N historical observations as independent and identically
distributed (IID) draws from a stationary log-normal distribution:

log Q_hist ~ N(µ, σ)
µ ~ N(7, 1.5)
σ ~ N+(1, 1)

where N denotes the normal distribution and N+ denotes a half-normal
distribution. Fit in a Bayesian framework using Stan [Carpenter et al., 2017].
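The Stan program itself is not reproduced here; as a lightweight stand-in, a MAP estimate under the same priors can be sketched with numpy/scipy. The synthetic record below is an illustrative assumption:

```python
import numpy as np
from scipy.optimize import minimize

# Sketch: MAP estimate for the stationary LN2 model under the priors
# mu ~ N(7, 1.5) and sigma ~ half-N(1, 1). The talk fits the full
# posterior with Stan; this is only a stand-in, and the synthetic
# N-year record below is illustrative.

def neg_log_posterior(theta, log_q):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)            # log transform keeps sigma > 0
    nll = (0.5 * np.sum(((log_q - mu) / sigma) ** 2)
           + log_q.size * np.log(sigma))
    neg_log_prior = 0.5 * ((mu - 7.0) / 1.5) ** 2 + 0.5 * (sigma - 1.0) ** 2
    return nll + neg_log_prior - log_sigma  # minus Jacobian of log transform

rng = np.random.default_rng(1)
log_q = rng.normal(7.2, 0.8, size=50)    # synthetic N = 50 year record
fit = minimize(neg_log_posterior, x0=np.array([7.0, 0.0]), args=(log_q,))
mu_map, sigma_map = fit.x[0], float(np.exp(fit.x[1]))
```

With N = 50 the likelihood dominates these weakly informative priors, so the MAP estimates land near the sample moments; with much shorter records the priors pull the estimates back toward (7, 1).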
Trend LN2 Model
Treat the N historical observations as draws from a log-normal
distribution whose mean follows a linear trend:

log Q_hist ~ N(µ(t), ξ µ(t)), with µ(t) = µ0 + βµ t

µ0 ~ N(7, 1.5)
βµ ~ N(0, 0.1)
log ξ ~ N(0.1, 0.1)

where ξ is an estimated coefficient of variation. Also fit in Stan.
Hidden Markov Model
A two-state hidden Markov model (HMM) [see Rabiner and Juang, 1986],
implemented using the pomegranate Python package [Schreiber, 2017]; see the
package documentation for details.
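To show the structure of the regime-like model without depending on pomegranate, the likelihood of a two-state Gaussian HMM can be computed with the scaled forward algorithm. The regime parameters below (e.g., "dry" vs. "wet" mean log-flow) are illustrative assumptions:

```python
import numpy as np

# Sketch: likelihood of a two-state Gaussian-emission HMM via the scaled
# forward algorithm. The talk fits its HMM with pomegranate; this numpy
# stand-in only illustrates the model structure, and all parameter
# values are illustrative assumptions.

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def hmm_log_likelihood(obs, start, trans, mus, sigmas):
    """Log p(obs), scaling the forward variable at each step."""
    alpha = start * normal_pdf(obs[0], mus, sigmas)
    log_like = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for x in obs[1:]:
        alpha = (alpha @ trans) * normal_pdf(x, mus, sigmas)
        c = alpha.sum()
        log_like += np.log(c)
        alpha = alpha / c
    return log_like

start = np.array([0.5, 0.5])
trans = np.array([[0.9, 0.1],        # persistent regimes: 0.9 self-transition
                  [0.1, 0.9]])
mus = np.array([6.5, 7.5])           # "dry" and "wet" mean log-flow
sigmas = np.array([0.3, 0.3])
obs = np.array([6.4, 6.6, 7.4, 7.6])
ll = hmm_log_likelihood(obs, start, trans, mus, sigmas)
```

The high self-transition probabilities are what let the HMM represent regime-like persistence, in contrast to the IID and trend models above.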