MULTI-SCIENCE PUBLISHING CO. LTD.
5 Wates Way, Brentwood, Essex CM15 9TB, United Kingdom
Reprinted from ENERGY & ENVIRONMENT, Volume 18, No. 7+8, 2007
GLOBAL WARMING: FORECASTS BY SCIENTISTS
VERSUS SCIENTIFIC FORECASTS
by
Kesten C. Green and J. Scott Armstrong
GLOBAL WARMING: FORECASTS BY SCIENTISTS
VERSUS SCIENTIFIC FORECASTS*
Kesten C. Green¹ and J. Scott Armstrong²†
¹ Business and Economic Forecasting Unit, Monash University, Victoria 3800, Australia.
Contact: PO Box 10800, Wellington 6143, New Zealand. kesten@kestencgreen.com;
T +64 4 976 3245; F +64 4 976 3250
² The Wharton School, University of Pennsylvania, 747 Huntsman,
Philadelphia, PA 19104, USA. armstrong@wharton.upenn.edu
ABSTRACT
In 2007, the Intergovernmental Panel on Climate Change’s Working Group One, a
panel of experts established by the World Meteorological Organization and the
United Nations Environment Programme, issued its Fourth Assessment Report.
The Report included predictions of dramatic increases in average world
temperatures over the next 92 years and serious harm resulting from the predicted
temperature increases. Using forecasting principles as our guide we asked: Are
these forecasts a good basis for developing public policy? Our answer is “no”.
To provide forecasts of climate change that are useful for policy-making, one
would need to forecast (1) global temperature, (2) the effects of any temperature
changes, and (3) the effects of feasible alternative policies. Proper forecasts of all
three are necessary for rational policy making.
The IPCC WG1 Report was regarded as providing the most credible long-term
forecasts of global average temperatures by 31 of the 51 scientists and others involved
in forecasting climate change who responded to our survey. We found no references
in the 1056-page Report to the primary sources of information on forecasting methods
despite the fact these are conveniently available in books, articles, and websites. We
audited the forecasting processes described in Chapter 8 of the IPCC’s WG1 Report
to assess the extent to which they complied with forecasting principles. We found
enough information to make judgments on 89 out of a total of 140 forecasting
principles. The forecasting procedures that were described violated 72 principles.
Many of the violations were, by themselves, critical.
The forecasts in the Report were not the outcome of scientific procedures. In
effect, they were the opinions of scientists transformed by mathematics and
obscured by complex writing. Research on forecasting has shown that experts’
predictions are not useful in situations involving uncertainty and complexity. We
have been unable to identify any scientific forecasts of global warming. Claims that
the Earth will get warmer have no more credence than saying that it will get colder.
Keywords: accuracy, audit, climate change, evaluation, expert judgment,
mathematical models, public policy.
* Neither of the authors received funding for this paper.
† Information about J. Scott Armstrong can be found on Wikipedia.
“A trend is a trend,
But the question is, will it bend?
Will it alter its course
Through some unforeseen force
And come to a premature end?”
Alec Cairncross, 1969
Research on forecasting has been conducted since the 1930s. Empirical studies that
compare methods in order to determine which ones provide the most accurate
forecasts in specified situations are the most useful source of evidence. Findings, along
with the evidence, were first summarized in Armstrong (1978, 1985). In the mid-
1990s, the Forecasting Principles Project was established with the objective of
summarizing all useful knowledge about forecasting. The knowledge was codified as
evidence-based principles, or condition-action statements, in order to provide
guidance on which methods to use when. The project led to the Principles of
Forecasting handbook (Armstrong 2001): the work of 40 internationally-known
experts on forecasting methods and 123 reviewers who were also leading experts on
forecasting methods. The summarizing process alone required a four-year effort.
The forecasting principles are easy to find: They are freely available on
forecastingprinciples.com, a site sponsored by the International Institute of
Forecasters. The Forecasting Principles site has been at the top of the list of sites in
Internet searches for “forecasting” for many years. A summary of the principles,
currently numbering 140, is provided as a checklist in the Forecasting Audit software
available on the site. The site is often updated in order to incorporate new evidence on
forecasting as it comes to hand. A recent review of new evidence on some of the key
principles was published in Armstrong (2006). There is no other source that provides
evidence-based forecasting principles.
The strength of evidence differs across principles. Some, for example, are based
on common sense or received wisdom; such principles are included when there is
no contrary evidence. Other principles have some empirical support, while 31 are
strongly supported by empirical evidence.
Many of the principles go beyond common sense, and some are counter-intuitive.
As a result, those who forecast in ignorance of the forecasting research literature are
unlikely to produce useful predictions. Here are some well-established principles that
apply to long-term forecasts for complex situations where the causal factors are
subject to uncertainty (as with climate):
Unaided judgmental forecasts by experts have no value. This applies whether
the opinions are expressed in words, spreadsheets, or mathematical models. It
applies regardless of how much scientific evidence is possessed by the experts.
Among the reasons for this are:
a) Complexity: People cannot assess complex relationships through
unaided observations.
b) Coincidence: People confuse correlation with causation.
c) Feedback: People making judgmental predictions typically do not
receive unambiguous feedback they can use to improve
their forecasting.
d) Bias: People have difficulty in obtaining or using evidence that
contradicts their initial beliefs. This problem is especially
serious for people who view themselves as experts.
Agreement among experts is weakly related to accuracy. This is especially true
when the experts communicate with one another and when they work together
to solve problems, as is the case with the IPCC process.
Complex models (those involving nonlinearities and interactions) harm
accuracy because their errors multiply. Ascher (1978) refers to the Club of
Rome’s 1972 forecasts where, unaware of the research on forecasting, the
developers proudly proclaimed, “in our model about 100,000 relationships are
stored in the computer.” Complex models also tend to fit random variations in
historical data well, with the consequence that they forecast poorly and lead to
misleading conclusions about the uncertainty of the outcome. Finally, when
complex models are developed there are many opportunities for errors and the
complexity means the errors are difficult to find. Craig, Gadgil, and Koomey
(2002) came to similar conclusions in their review of long-term energy forecasts
for the US that were made between 1950 and 1980.
Given even modest uncertainty, prediction intervals are enormous. Prediction
intervals (ranges outside which outcomes are unlikely to fall) expand rapidly as
time horizons increase, for example, so that one is faced with enormous intervals
even when trying to forecast a straightforward thing such as automobile sales for
General Motors over the next five years.
When there is uncertainty in forecasting, forecasts should be conservative.
Uncertainty arises when data contain measurement errors, when the series are
unstable, when knowledge about the direction of relationships is uncertain, and
when a forecast depends upon forecasts of related (causal) variables. For
example, forecasts of no change were found to be more accurate than trend
forecasts for annual sales when there was substantial uncertainty in the trend
lines (Schnaars and Bavuso 1986). This principle also implies that forecasts
should revert to long-term trends when such trends have been firmly established,
do not waver, and there are no firm reasons to suggest that they will change.
Finally, trends should be damped toward no-change as the forecast horizon
increases. (The sketch following this list illustrates both widening prediction
intervals and trend damping.)
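To make the last two principles concrete, here is a minimal sketch (ours, not from the forecasting literature it summarizes): prediction intervals for a naive no-change forecast widen with the square root of the horizon, and a fitted trend can be damped toward no-change as the horizon grows.

```python
import math

# A minimal sketch (ours): prediction intervals for a naive random-walk
# forecast widen with sqrt(horizon), and a fitted per-period trend can
# be damped toward no-change as the horizon grows.

def naive_prediction_interval(last_value, sigma_1step, horizon, z=1.96):
    """95% interval for an h-step-ahead no-change forecast."""
    half_width = z * sigma_1step * math.sqrt(horizon)
    return (last_value - half_width, last_value + half_width)

def damped_trend_forecast(last_value, trend, horizon, phi=0.9):
    """Damp the per-period trend by phi (< 1) at each successive step."""
    return last_value + trend * sum(phi ** h for h in range(1, horizon + 1))

for h in (1, 5, 20, 50):
    lo, hi = naive_prediction_interval(14.0, 0.1, h)
    print(f"h={h:2d}: PI=({lo:.2f}, {hi:.2f}), "
          f"damped trend forecast={damped_trend_forecast(14.0, 0.02, h):.2f}")
```

All values here are illustrative; the point is only that the interval half-width grows with the horizon while the damped trend contribution levels off.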
THE FORECASTING PROBLEM
In determining the best policies to deal with the climate of the future, a policy maker
first has to select an appropriate statistic to use to represent the changing climate. By
convention, the statistic is the averaged global temperature as measured with
thermometers at ground stations throughout the world, though in practice this is a far
from satisfactory metric (see, e.g., Essex et al., 2007).
It is then necessary to obtain forecasts and prediction intervals for each of the
following:
1. Mean global temperature in the long-term (say 10 years or longer).
2. Effects of temperature changes on humans and other living things.
If accurate forecasts of mean global temperature can be obtained and the
changes are substantial, then it would be necessary to forecast the effects of the
changes on the health of living things and on the health and wealth of humans.
The concerns about changes in global mean temperature are based on the
assumption that the earth is currently at the optimal temperature and that
variations over years (unlike variations within days and years) are undesirable.
For a proper assessment, costs and benefits must be comprehensive. (For
example, policy responses to Rachel Carson’s Silent Spring should have been
based in part on forecasts of the number of people who might die from malaria
if DDT use were reduced).
3. Costs and benefits of feasible alternative policy proposals.
If valid forecasts of the effects of the temperature changes on the health of living
things and on the health and wealth of humans can be obtained and the forecasts
are for substantial harmful effects, then it would be necessary to forecast the
costs and benefits of proposed alternative policies that could be successfully
implemented.
A policy proposal should only be implemented if valid and reliable forecasts of the
effects of implementing the policy can be obtained and the forecasts show net benefits.
Failure to obtain a valid forecast in any of the three areas listed above would render
forecasts for the other areas meaningless. We address primarily, but not exclusively,
the first of the three forecasting problems: obtaining long-term forecasts of global
temperature.
But is it necessary to use scientific forecasting methods? In other words, to use
methods that have been shown by empirical validation to be relevant to the types of
problems involved with climate forecasting? Or is it sufficient to have leading
scientists examine the evidence and make forecasts? We address this issue before
moving on to our audits.
ON THE VALUE OF FORECASTS BY EXPERTS
Many public policy decisions are based on forecasts by experts. Research on
persuasion has shown that people have substantial faith in the value of such forecasts.
Faith increases when experts agree with one another.
Our concern here is with what we refer to as unaided expert judgments. In such
cases, experts may have access to empirical studies and other information, but they use
their knowledge to make predictions without the aid of well-established forecasting
principles. Thus, they could simply use the information to come up with judgmental
forecasts. Alternatively, they could translate their beliefs into mathematical statements
(or models) and use those to make forecasts.
Although they may seem convincing at the time, expert forecasts can make for
humorous reading in retrospect. Cerf and Navasky’s (1998) book contains 310 pages
of examples, such as Fermi Award-winning scientist John von Neumann’s 1956
prediction that “A few decades hence, energy may be free”. Examples of expert
climate forecasts that turned out to be completely wrong are easy to find, such as UC
Davis ecologist Kenneth Watt’s prediction in a speech at Swarthmore College on Earth
Day, April 22, 1970:
If present trends continue, the world will be about four degrees colder in 1990, but
eleven degrees colder in the year 2000. This is about twice what it would take to
put us into an ice age.
Are such examples merely a matter of selective perception? The second author’s
review of empirical research on this problem led him to develop the “Seer-sucker
theory,” which can be stated as “No matter how much evidence exists that seers do not
exist, seers will find suckers” (Armstrong 1980). The amount of expertise does not
matter beyond a basic minimum level. There are exceptions to the Seer-sucker Theory:
When experts get substantial well-summarized feedback about the accuracy of their
forecasts and about the reasons why their forecasts were or were not accurate, they can
improve their forecasting. This situation applies for short-term (up to five day)
weather forecasts, but we are not aware of any such regime for long-term global
climate forecasting. Even if there were such a regime, the feedback would trickle in
over many years before it became useful for improving forecasting.
Research since 1980 has provided much more evidence that expert forecasts are of
no value. In particular, Tetlock (2005) recruited 284 people whose professions
included “commenting or offering advice on political and economic trends.” He
asked them to forecast the probability that various situations would or would not
occur, picking areas (geographic and substantive) within and outside their areas of
expertise. By 2003, he had accumulated over 82,000 forecasts. The experts barely,
if at all, outperformed non-experts, and neither group did well against simple rules.
Comparative empirical studies have routinely concluded that judgmental
forecasting by experts is the least accurate of the methods available to make forecasts.
For example, Ascher (1978, p. 200), in his analysis of long-term forecasts of
electricity consumption, found that to be the case.
Experts’ forecasts of climate changes have long been newsworthy and a cause of
worry for people. Anderson and Gainor (2006) found the following headlines in their
search of the New York Times:
Sept. 18, 1924 MacMillan Reports Signs of New Ice Age
March 27, 1933 America in Longest Warm Spell Since 1776
May 21, 1974 Scientists Ponder Why World’s Climate is Changing:
A Major Cooling Widely Considered to be Inevitable
Dec. 27, 2005 Past Hot Times Hold Few Reasons to Relax About New
Warming
In each case, the forecasts behind the headlines were made with a high degree of
confidence.
In the mid-1970s, there was a political debate raging about whether the global climate
was changing. The United States’ National Defense University (NDU) addressed this
issue in their book, Climate Change to the Year 2000 (NDU 1978). This study involved
nine man-years of effort by the Department of Defense and other agencies, aided by
experts who received honoraria, and a contract of nearly $400,000 (in 2007 dollars). The
heart of the study was a survey of experts. The experts were provided with a chart of
“annual mean temperature, 0–80°N latitude,” that showed temperature rising from 1870
to early 1940 then dropping sharply until 1970. The conclusion, based primarily on 19
replies weighted by the study directors, was that while a slight increase in temperature
might occur, uncertainty was so high that “the next twenty years will be similar to that
of the past” and the effects of any change would be negligible. Clearly, this was a
forecast by scientists, not a scientific forecast. However, it proved to be quite influential.
The report was discussed in The Global 2000 Report to the President (Carter) and at the
World Climate Conference in Geneva in 1979.
The methodology for climate forecasting used in the past few decades has shifted
from surveys of experts’ opinions to the use of computer models. Reid Bryson, the
world’s most cited climatologist, wrote in a 1993 article that a model is “nothing more
than a formal statement of how the modeler believes that the part of the world of his
concern actually works” (pp. 789-790). Based on the explanations of climate models
that we have seen, we concur. While advocates of complex climate models claim that
they are based on “well established laws of physics”, there is clearly much more to the
models than the laws of physics; otherwise they would all produce the same output,
which patently they do not. And there would be no need for confidence estimates for
model forecasts, which there most certainly are. Climate models are, in effect,
mathematical ways for the experts to express their opinions.
To our knowledge, there is no empirical evidence to suggest that presenting
opinions in mathematical terms rather than in words will contribute to forecast
accuracy. For example, Keepin and Wynne (1984) wrote in the summary of their study
of the International Institute for Applied Systems Analysis’s “widely acclaimed”
projections for global energy that “Despite the appearance of analytical rigor… [they]
are highly unstable and based on informal guesswork.” Things have changed little
since the days of Malthus in the 1800s. Malthus forecast mass starvation. He expressed
his opinions mathematically. His mathematical model predicted that the supply of food
would increase arithmetically while the human population grew at a geometric rate,
and that people would consequently go hungry.
International surveys of climate scientists from 27 countries, obtained by Bray and
von Storch in 1996 and 2003, were summarized by Bast and Taylor (2007). Many
scientists were skeptical about the predictive validity of climate models. Of more than
1,060 respondents, 35% agreed with the statement, “Climate models can accurately
predict future climates,” and 47% disagreed. Members of the general public
were also divided. An Ipsos Mori poll of 2,031 people aged 16 and over found that
40% agreed that “climate change was too complex and uncertain for scientists to make
useful forecasts” while 38% disagreed (Eccleston 2007).
AN EXAMINATION OF CLIMATE FORECASTING METHODS
We assessed the extent to which those who have made climate forecasts used
evidence-based forecasting procedures. We did this by conducting Google searches.
We then conducted a “forecasting audit” of the forecasting process behind the IPCC
forecasts. The key tasks of a forecasting audit are to:
• examine all elements of the forecasting process,
• use principles that are supported by evidence (or are self-evidently true and
unchallenged by evidence) against which to judge the forecasting process,
• rate the forecasting process against each principle, preferably using more than
one independent rater, and
• disclose the audit.
To our knowledge, no one has ever published a paper that is based on a forecasting
audit, as defined here. We suggest that for forecasts involving important public
policies, such audits should be expected and perhaps even
required. In addition, they
should be fully disclosed with respect to who did the audit, what biases might be
involved, and what were the detailed findings from the audit.
REVIEWS OF CLIMATE FORECASTS
We could not find any comprehensive reviews of climate forecasting efforts. With the
exception of Stewart and Glantz (1985), the reviews did not refer to evidence-based
findings. None of the reviews provided explicit ratings of the processes and, again
with the exception of Stewart and Glantz, little attention was given to full disclosure
of the reviewing process. Finally, some reviews ignored the forecasting methods and
focused on the accuracy of the forecasts.
Stewart and Glantz (1985) conducted an audit of the National Defense University
(NDU 1978) forecasting process that we described above. They were critical of the
report because it lacked an awareness of proper forecasting methodology. Their audit
was hampered because the organizers of the study said that the raw data had been
destroyed and a request to the Institute for the Future about the sensitivity of the
forecasts to the weights went unanswered. Judging from a Google Scholar search,
climate forecasters have paid little attention to this paper.
In a wide-ranging article on the broad topic of science and the environment, Bryson
(1993) was critical of the use of models for forecasting climate. He wrote:
…it has never been demonstrated that the GCMs [General Circulation Models] are
capable of prediction with any level of accuracy. When a modeler says that his
model shows that doubling the carbon dioxide in the atmosphere will raise the
global average temperature two to three degrees Centigrade, he really means that a
simulation of the present global temperature with current carbon dioxide levels
yields a mean value two to three degrees Centigrade lower than his model
simulation with doubled carbon dioxide. This implies, though it rarely appears in
the news media, that the error in simulating the present will be unchanged in
simulating the future case with doubled carbon dioxide. That has never been
demonstrated—it is faith rather than science.” (pp. 790-791)
Balling (2005), Christy (2005), Frauenfeld (2005), and Posmentier and Soon
(2005) each assess different aspects of the use of climate models for forecasting and
each comes to broadly the same conclusion: The models do not represent the real
world sufficiently well to be relied upon for forecasting.
Carter et al. (2006) examined the Stern Review (Stern 2007). They concluded that
the authors of the Review made predictions without reference to scientific validation
and without proper peer review.
Pilkey and Pilkey-Jarvis (2007) examined long-term climate forecasts and
concluded that they were based only on the opinions of the scientists. The scientists’
opinions were expressed in complex mathematical terms without evidence on the
validity of the chosen approach. The authors provided the following quotation on their
page 45 to summarize their assessment: “Today’s scientists have substituted
mathematics for experiments, and they wander off through equation after equation and
eventually build a structure which has no relation to reality (Nikola Tesla, inventor and
electrical engineer, 1934).” While it is sensible to be explicit about beliefs and to
formulate these in a model, forecasters must also demonstrate that the relationships are
valid.
Carter (2007) examined evidence on the predictive validity of the general
circulation models (GCMs) used by the IPCC scientists. He found that while the
models included some basic principles of physics, scientists had to make “educated
guesses” about the values of many parameters because knowledge about the physical
processes of the earth’s climate is incomplete. In practice, the GCMs failed to predict
recent global average temperatures as accurately as simple curve-fitting approaches
(Carter 2007, pp. 64 – 65). They also forecast greater warming at higher altitudes in
the tropics when the opposite has been the case (p. 64). Further, individual GCMs
produce widely different forecasts from the same initial conditions and minor changes
in parameters can result in forecasts of global cooling (Essex and McKitrick, 2002).
Interestingly, when models predict global cooling, the forecasts are often rejected as
“outliers” or “obviously wrong” (e.g., Stainforth et al., 2005).
Roger Pielke Sr. (Colorado State Climatologist, until 2006) gave an assessment of
climate models in a 2007 interview (available via http://tinyurl.com/2wpk29):
You can always reconstruct after the fact what happened if you run enough model
simulations. The challenge is to run it on an independent dataset, say for the next
five years. But then they will say “the model is not good for five years because
there is too much noise in the system”. That’s avoiding the issue then. They say you
have to wait 50 years, but then you can’t validate the model, so what good is it?
…Weather is very difficult to predict; climate involves weather plus all these
other components of the climate system, ice, oceans, vegetation, soil etc. Why
should we think we can do better with climate prediction than with weather
prediction? To me it’s obvious, we can’t!
I often hear scientists say “weather is unpredictable, but climate you can predict
because it is the average weather”. How can they prove such a statement?
In his assessment of climate models, physicist Freeman Dyson (2007) wrote:
I have studied the climate models and I know what they can do. The models solve
the equations of fluid dynamics, and they do a very good job of describing the fluid
motions of the atmosphere and the oceans. They do a very poor job of describing
the clouds, the dust, the chemistry and the biology of fields and farms and forests.
They do not begin to describe the real world that we live in.
Bellamy and Barrett (2007) found serious deficiencies in the general circulation
models described in the IPCC’s Third Assessment Report. In particular, the models (1)
produced very different distributions of clouds, none of which was close to the actual
distribution, (2) varied considerably in their parameters for incoming radiation
absorbed by the atmosphere and by the Earth’s surface, and (3) did not accurately
represent what is known about the effects of CO2 and could not represent the possible
positive and negative feedbacks about which there is great uncertainty. The authors
concluded:
The climate system is a highly complex system and, to date, no computer models
are sufficiently accurate for their predictions of future climate to be relied upon. (p.
72)
Trenberth (2007), a lead author of Chapter 3 in the IPCC WG1 report wrote in a
Nature.com blog “… the science is not done because we do not have reliable or
regional predictions of climate.”
Taylor (2007) compared seasonal forecasts by New Zealand’s National Institute of
Water and Atmospheric Research (NIWA) with outcomes for the period May 2002 to
April 2007. He found NIWA’s forecasts of average regional temperatures for the
season ahead were 48% correct, which was no more accurate than chance. That this is
a general result was confirmed by New Zealand climatologist Jim Renwick, who
observed that NIWA’s low success rate was comparable to that of other forecasting
groups worldwide. He added that “Climate prediction is hard, half of the variability in
the climate system is not predictable, and so we don’t expect to do terrifically well.”
Renwick is a co-author with Working Group I of the IPCC 4th Assessment Report, and
also serves on the World Meteorological Organization Commission for Climatology
Expert Team on Seasonal Forecasting. His expert view is that current GCM climate
models are unable to predict future climate any better than chance (New Zealand
Climate Science Coalition 2007).
Similarly, Vizard, Anderson, and Buckley (2005) found seasonal rainfall forecasts
for Australian townships were insufficiently accurate to be useful to intended
consumers such as farmers planning for feed requirements. The forecasts were
released only 15 days ahead of each three month period.
A SURVEY TO IDENTIFY THE MOST CREDIBLE LONG-TERM
FORECASTS OF GLOBAL TEMPERATURE
We surveyed scientists involved in long-term climate forecasting and policy makers.
Our primary concern was to identify the most important forecasts and how those
forecasts were made. In particular, we wished to know if the most widely accepted
forecasts of global average temperature were based on the opinions of experts or were
derived using scientific forecasting methods. Given the findings of our review of
reviews of climate forecasting and the conclusion from our Google search that many
scientists are unaware of evidence-based findings related to forecasting methods, we
expected that the forecasts would be based on the opinions of scientists.
We sent a questionnaire to experts who had expressed diverse opinions on global
warming. We generated lists of experts by identifying key people and asking them to
identify others. (The lists are provided in Appendix A.) Most (70%) of the 240 experts
on our lists were IPCC reviewers and authors.
Our questionnaire asked the experts to provide references for what they regarded as
the most credible source of long-term forecasts of mean global temperatures. We
strove for simplicity to minimize resistance to our request. Even busy people should
have time to send a few references, especially if they believe that it is important to
evaluate the quality of the forecasts that may influence major decisions. We asked:
“We want to know which forecasts people regard as the most credible and how
those forecasts were derived…
In your opinion, which scientific article is the source of the most credible
forecasts of global average temperatures over the rest of this century?”
We received useful responses from 51 of the 240 experts, 42 of whom provided
references to what they regarded as credible sources of long-term forecasts of mean
global temperatures. Interestingly, eight respondents provided references in support of
their claims that no credible forecasts exist. Of the 42 expert respondents who were
associated with global warming views, 30 referred us to the IPCC’s report. A list of
the papers that were suggested by respondents is provided at
publicpolicyforecasting.com in the “Global Warming” section.
Based on the replies to our survey, it was clear that the IPCC’s Working Group 1
Report contained the forecasts that are viewed as most credible by the bulk of the
climate forecasting community. These forecasts are contained in Chapter 10 of the
Report and the models that are used to forecast climate are assessed in Chapter 8,
“Climate Models and Their Evaluation” (Randall et al. 2007). Chapter 8 provided the
most useful information on the forecasting process used by the IPCC to derive
forecasts of mean global temperatures, so we audited that chapter.
We also posted calls on email lists and on the forecastingprinciples.com site asking
for help from those who might have any knowledge about scientific climate forecasts.
This yielded few responses, only one of which provided relevant references.
Does the IPCC report provide climate forecasts?
Trenberth (2007) and others have claimed that the IPCC does not provide forecasts but
rather presents “scenarios” or “projections.” As best as we can tell, these terms are
used by the IPCC authors to indicate that they provide “conditional forecasts.”
Presumably the IPCC authors hope that readers, especially policy makers, will find at
least one of their conditional forecast series plausible and will act as if it will come
true if no action is taken. As it happens, the word “forecast” and its derivatives
occurred 37 times, and “predict” and its derivatives occurred 90 times in the body of
Chapter 8. Recall also that most of our respondents (29 of whom were IPCC authors
or reviewers) nominated the IPCC report as the most credible source of forecasts (not
“scenarios” or “projections”) of global average temperature. We conclude that the
IPCC does provide forecasts.
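For readers who wish to replicate the term counts above, a minimal sketch follows. The file name is hypothetical and stands for any plain-text copy of Chapter 8.

```python
import re

# Minimal sketch (ours) for replicating the term counts above. The
# file name is hypothetical: any plain-text copy of Chapter 8 will do.
text = open("ipcc_wg1_chapter8.txt", encoding="utf-8").read().lower()
for stem in ("forecast", "predict"):
    count = len(re.findall(rf"\b{stem}\w*", text))
    print(f"'{stem}' and derivatives: {count} occurrences")
```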
A FORECASTING AUDIT FOR GLOBAL WARMING
In order to audit the forecasting processes described in Chapter 8 of the IPCC’s report,
we each read it prior to any discussion. The chapter was, in our judgment, poorly
written. The writing showed little concern for the target readership. It provided
extensive detail on items that are of little interest in judging the merits of the
forecasting process, provided references without describing what readers might find,
and imposed an incredible burden on readers by providing 788 references. In addition,
the Chapter reads in places like a sales brochure. In the three-page executive summary,
the terms “new” and “improved” and related derivatives appeared 17 times. Most
significantly, the chapter omitted key details on the assumptions and the forecasting
process that were used. If the authors used a formal structured procedure to assess the
forecasting processes, this was not evident.
We each made a formal, independent audit of IPCC Chapter 8 in May 2007. To do
so, we used the Forecasting Audit Software on the forecastingprinciples.com site,
which is based on material originally published in Armstrong (2001). To our
knowledge, it is the only evidence-based tool for evaluating forecasting procedures.
While Chapter 8 required many hours to read, it took us each about one hour,
working independently, to rate the forecasting approach described in the Chapter using
the Audit software. We have each been involved with developing the Forecasting
Audit program, so other users would likely require much more time.
Ratings are on a 5-point scale from -2 to +2. A rating of +2 indicates the forecasting
procedures were consistent with a principle, and a rating of -2 indicates failure to comply
with a principle. Sometimes some aspects of a procedure are consistent with a principle
but others are not. In such cases, the rater must judge where the balance lies. The Audit
software also has options to indicate that there is insufficient information to rate the
procedures or that the principle is not relevant to a particular forecasting problem.
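To make the rating mechanics concrete, here is a hypothetical sketch of how ratings on this scale can be tallied. The principle labels are taken from this paper; the specific ratings shown are illustrative placeholders, not our published ratings.

```python
# Hypothetical sketch (ours) of tallying audit ratings on the -2..+2
# scale; None marks a principle with insufficient information to rate.
ratings = {
    "1.3 Make sure forecasts are independent of politics": -2,
    "1.4 Consider whether the events or series can be forecasted": -2,
    "7.1 Keep forecasting methods simple": -2,
    "9.3 Do not use fit to develop the model": -2,
    "10.2 Use all important variables": None,  # insufficient information
}

rated = {p: r for p, r in ratings.items() if r is not None}
violations = [p for p, r in rated.items() if r < 0]
print(f"rated {len(rated)} of {len(ratings)} principles; "
      f"{len(violations)} violations; "
      f"mean rating {sum(rated.values()) / len(rated):.2f}")
```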
Reliability is an issue with rating tasks. For that reason, it is desirable to use two or
more raters. We sent out general calls for experts to use the Forecasting Audit
Software to conduct their own audits and we also asked a few individuals to do so. At
the time of writing, none have done so.
Our initial overall average ratings were similar at -1.37 and -1.35. We compared our
ratings for each principle and discussed inconsistencies. In some cases we averaged
the ratings, truncating toward zero. In other cases we decided that there was
insufficient information or that the information was too ambiguous to rate with
confidence. Our final ratings are fully disclosed in the Special Interest Group section
of the forecastingprinciples.com site that is devoted to Public Policy
(publicpolicyforecasting.com) under Global Warming.
Of the 140 principles in the Forecasting Audit, we judged that 127 were relevant
for auditing the forecasting procedures described in Chapter 8. The Chapter provided
insufficient information to rate the forecasting procedures that were used against 38 of
these 127 principles.
Table 1. Clear Violations

Setting Objectives
• Describe decisions that might be affected by the forecast.
• Prior to forecasting, agree on actions to take assuming different possible forecasts.
• Make sure forecasts are independent of politics.
• Consider whether the events or series can be forecasted.

Identifying Data Sources
• Avoid biased data sources.

Collecting Data
• Use unbiased and systematic procedures to collect data.
• Ensure that information is reliable and that measurement error is low.
• Ensure that the information is valid.

Selecting Methods
• List all important selection criteria before selecting methods.
• Ask unbiased experts to rate potential methods.
• Select simple methods unless empirical evidence calls for a more complex approach.
• Compare track records of various forecasting methods.
• Assess acceptability and understandability of methods to users.
• Examine the value of alternative forecasting methods.

Implementing Methods: General
• Keep forecasting methods simple.
• Be conservative in situations of high uncertainty or instability.

Implementing Quantitative Methods
• Tailor the forecasting model to the horizon.
• Do not use “fit” to develop the model.

Implementing Methods: Quantitative Models with Explanatory Variables
• Apply the same principles to forecasts of explanatory variables.
• Shrink the forecasts of change if there is high uncertainty for predictions of the explanatory variables.

Integrating Judgmental and Quantitative Methods
• Use structured procedures to integrate judgmental and quantitative methods.
• Use structured judgments as inputs to quantitative models.
• Use prespecified domain knowledge in selecting, weighting, and modifying quantitative models.

Combining Forecasts
• Combine forecasts from approaches that differ.
• Use trimmed means, medians, or modes.
• Use track records to vary the weights on component forecasts.

Evaluating Methods
• Compare reasonable methods.
• Tailor the analysis to the decision.
• Describe the potential biases of the forecasters.
• Assess the reliability and validity of the data.
• Provide easy access to the data.
• Provide full disclosure of methods.
• Test assumptions for validity.
• Test the client’s understanding of the methods.
• Use direct replications of evaluations to identify mistakes.
• Replicate forecast evaluations to assess their reliability.
• Compare forecasts generated by different methods.
• Examine all important criteria.
• Specify criteria for evaluating methods prior to analyzing data.
• Assess face validity.
• Use error measures that adjust for scale in the data.
• Ensure error measures are valid.
• Use error measures that are not sensitive to the degree of difficulty in forecasting.
• Avoid error measures that are highly sensitive to outliers.
• Use out-of-sample (ex ante) error measures.
• (Revised) Tests of statistical significance should not be used.
• Do not use root mean square error (RMSE) to make comparisons among forecasting methods.
• Base comparisons of methods on large samples of forecasts.
• Conduct explicit cost-benefit analysis.

Assessing Uncertainty
• Use objective procedures to estimate explicit prediction intervals.
• Develop prediction intervals by using empirical estimates based on realistic representations of forecasting situations.
• When assessing PIs, list possible outcomes and assess their likelihoods.
• Obtain good feedback about forecast accuracy and the reasons why errors occurred.
• Combine prediction intervals from alternative forecast methods.
• Use safety factors to adjust for overconfidence in PIs.

Presenting Forecasts
• Present forecasts and supporting data in a simple and understandable form.
• Provide complete, simple, and clear explanations of methods.
• Present prediction intervals.

Learning That Will Improve Forecasting Procedures
• Establish a formal review process for forecasting methods.
• Establish a formal review process to ensure that forecasts are used properly.
For example, we did not rate the Chapter against Principle 10.2:
“Use all important variables.” At least in part, our difficulty in auditing the Chapter
was due to the fact that it was abstruse. It was sometimes difficult to know whether the
information we sought was present or not.
Of the 89 forecasting principles that we were able to rate, the Chapter violated 72.
Of these, we agreed that there were clear violations of 60 principles. Principle 1.3
“Make sure forecasts are independent of politics” is an example of a principle that is
clearly violated by the IPCC process. This principle refers to keeping the forecasting
process separate from the planning process. The term “politics” is used in the broad
sense of the exercise of power. David Henderson, a former Head of Economics and
Statistics at the OECD, gave a detailed account of how the IPCC process is directed
by non-scientists who have policy objectives and who believe that anthropogenic
global warming is real and dangerous (Henderson 2007). The clear violations we
identified are listed in Table 1.
We also found 12 “apparent violations”. These principles, listed in Table 2, are ones
for which one or both of us had some concerns over the coding or where we did not
agree that the procedures clearly violated the principle.
Table 2. Apparent Violations

Setting Objectives
• Obtain decision makers’ agreement on methods.

Structuring the Problem
• Identify possible outcomes prior to making forecasts.
• Decompose time series by level and trend.

Identifying Data Sources
• Ensure the data match the forecasting situation.
• Obtain information from similar (analogous) series or cases. Such information may help to estimate trends.

Implementing Judgmental Methods
• Obtain forecasts from heterogeneous experts.

Evaluating Methods
• Design test situations to match the forecasting problem.
• Describe conditions associated with the forecasting problem.
• Use multiple measures of accuracy.

Assessing Uncertainty
• Do not assess uncertainty in a traditional (unstructured) group meeting.
• Incorporate the uncertainty associated with the prediction of the explanatory variables in the prediction intervals.

Presenting Forecasts
• Describe your assumptions.
Finally, we lacked sufficient information to make ratings on many of the relevant
principles. These are listed in Table 3.
Table 3. Lack of Information

Structuring the Problem
• Tailor the level of data aggregation (or segmentation) to the decisions.
• Decompose the problem into parts.
• Decompose time series by causal forces.
• Structure problems to deal with important interactions among causal variables.
• Structure problems that involve causal chains.

Identifying Data Sources
• Use theory to guide the search for information on explanatory variables.

Collecting Data
• Obtain all the important data.
• Avoid collection of irrelevant data.

Preparing Data
• Clean the data.
• Use transformations as required by expectations.
• Adjust intermittent series.
• Adjust for unsystematic past events.
• Adjust for systematic events.
• Use graphical displays for data.

Implementing Methods: General
• Adjust for events expected in the future.
• Pool similar types of data.
• Ensure consistency with forecasts of related series and related time periods.

Implementing Judgmental Methods
• Ask experts to justify their forecasts in writing.
• Obtain forecasts from enough respondents.
• Obtain multiple forecasts of an event from each expert.

Implementing Quantitative Methods
• Match the model to the underlying phenomena.
• Weigh the most relevant data more heavily.
• Update models frequently.

Implementing Methods: Quantitative Models with Explanatory Variables
• Use all important variables.
• Rely on theory and domain expertise when specifying directions of relationships.
• Use theory and domain expertise to estimate or limit the magnitude of relationships.
• Use different types of data to measure a relationship.
• Forecast for alternative interventions.

Integrating Judgmental and Quantitative Methods
• Limit subjective adjustments of quantitative forecasts.

Combining Forecasts
• Use formal procedures to combine forecasts.
• Start with equal weights.
• Use domain knowledge to vary weights on component forecasts.

Evaluating Methods
• Use objective tests of assumptions.
• Avoid biased error measures.
• Do not use R-square (either standard or adjusted) to compare forecasting models.

Assessing Uncertainty
• Ensure consistency of the forecast horizon.
• Ask for a judgmental likelihood that a forecast will fall within a pre-defined minimum-maximum interval.

Learning That Will Improve Forecasting Procedures
• Seek feedback about forecasts.
Some of these principles might be surprising to those who have not seen the
evidence—“Do not use R-square (either standard or adjusted) to compare forecasting
models.” Others are principles that any scientific paper should be expected to
address—“Use objective tests of assumptions.” Many of these principles are important
for climate forecasting, such as “Limit subjective adjustments of quantitative
forecasts.”
Some principles are so important that any forecasting process that does not adhere
to them cannot produce valid forecasts. We address four such principles, all of which
are based on strong empirical evidence. All four of these key principles were violated
by the forecasting procedures described in IPCC Chapter 8.
Consider whether the events or series can be forecasted (Principle 1.4)
This principle refers to whether a forecasting method can be used that would do better
than a naïve method. A common naïve method is to assume that things will not
change.
Interestingly, naïve methods are often strong competitors with more sophisticated
alternatives. This is especially so when there is much uncertainty. To the extent that
uncertainty is high, forecasters should emphasize the naïve method. (This is illustrated
by regression model coefficients: when uncertainty increases, the coefficients tend
towards zero.) Departures from the naïve model tend to increase forecast error when
uncertainty is high.
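One way to see the coefficient-shrinkage point is the classic attenuation result: noise in a predictor pulls the estimated least-squares slope toward zero. The following simulation is ours and purely illustrative.

```python
import random

# Illustration (ours) of coefficient shrinkage under uncertainty:
# measurement error in a predictor attenuates the least-squares slope
# toward zero by roughly var(x) / (var(x) + var(error)).

def ols_slope(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

random.seed(1)
x = [random.gauss(0, 1) for _ in range(20000)]
y = [1.0 * xi + random.gauss(0, 0.5) for xi in x]  # true slope is 1.0
for noise_sd in (0.0, 1.0, 2.0):
    x_obs = [xi + random.gauss(0, noise_sd) for xi in x]
    print(f"predictor noise sd {noise_sd}: estimated slope "
          f"{ols_slope(x_obs, y):.2f}")
```

As the predictor noise grows, the estimated slope falls from about 1.0 toward zero, which is the regression analogue of reverting to the naive no-change forecast.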
In our judgment, the uncertainty about global mean temperature is extremely high.
We are not alone. Dyson (2007), for example, wrote in reference to attempts to model
climate that “The real world is muddy and messy and full of things that we do not yet
understand.” There is even controversy among climate scientists over something as
basic as the current trend. One researcher, Carter (2007, p. 67) wrote:
…the slope and magnitude of temperature trends inferred from time-series data
depend upon the choice of data end points. Drawing trend lines through highly
variable, cyclic temperature data or proxy data is therefore a dubious exercise.
Accurate direct measurements of tropospheric global average temperature have
only been available since 1979, and they show no evidence for greenhouse
warming. Surface thermometer data, though flawed, also show temperature stasis
since 1998.
Global climate is complex and scientific evidence on key relationships is weak or
absent. For example, does increased CO2 in the atmosphere cause high temperatures
or do high temperatures increase CO2? In opposition to the major causal role assumed
for CO2 by the IPCC authors (Le Treut et al. 2007), Soon (2007) presents evidence
that the latter is the case and that CO2 variation plays at most a minor role in climate
change.
Measurements of key variables such as local temperatures and a representative
global temperature are contentious and subject to revision in the case of modern
measurements because of, inter alia, the distribution of weather stations and possible
artifacts such as the urban heat island effect, and are often speculative in the case of
ancient ones, such as those climate proxies derived from tree ring and ice-core data
(Carter 2007).
Finally, it is difficult to forecast the causal variables. Stott and Kettleborough
(2002, p. 723) summarize:
Even with perfect knowledge of emissions, uncertainties in the representation of
atmospheric and oceanic processes by climate models limit the accuracy of any
estimate of the climate response. Natural variability, generated both internally and
from external forcings such as changes in solar output and explosive volcanic
eruptions, also contributes to the uncertainty in climate forecasts.
The already high level of uncertainty rises rapidly as the forecast horizon increases.
While the authors of Chapter 8 claim that the forecasts of global mean temperature
are well-founded, their language is imprecise and relies heavily on such words as
“generally,” “reasonably well,” “widely,” and “relatively” [to what?]. The Chapter
makes many explicit references to uncertainty. For example, the phrases “. . . it is not
yet possible to determine which estimates of the climate change cloud feedbacks are the
most reliable” and “Despite advances since the TAR, substantial uncertainty remains in
the magnitude of cryospheric feedbacks within AOGCMs” appear on p. 593. In
discussing the modeling of temperature, the authors wrote, “The extent to which these
systematic model errors affect a model’s response to external perturbations is unknown,
but may be significant” (p. 608), and, “The diurnal temperature range… is generally too
small in the models, in many regions by as much as 50%” (p. 609), and “It is not yet
known why models generally underestimate the diurnal temperature range.” The
following words and phrases appear at least once in the Chapter: unknown, uncertain,
unclear, not clear, disagreement, not fully understood, appears, not well observed,
variability, variety, unresolved, not resolved, and poorly understood.
Given the high uncertainty regarding climate, the appropriate naïve method for this
situation would be the “no-change” model. Prior evidence on forecasting methods
suggests that attempts to improve upon the naïve model might increase forecast error.
To reverse this conclusion, one would have to produce validated evidence in favor of
alternative methods. Such evidence is not provided in Chapter 8 of the IPCC report.
We are not suggesting that we know for sure that long-term forecasting of climate
is impossible, only that this has yet to be demonstrated. Methods consistent with
forecasting principles such as the naïve model with drift, rule-based forecasting, well-
specified simple causal models, and combined forecasts might prove useful. These
methods are discussed in Armstrong (2001). To our knowledge, their application to
long-term climate forecasting has not been examined to date.
Keep forecasting methods simple (Principle 7.1)
We gained the impression from the IPCC chapters and from related papers that climate
forecasters generally believe that complex models are necessary for forecasting
climate and that forecast accuracy will increase with model complexity. Complex
methods involve such things as the use of a large number of variables in forecasting
models, complex interactions, and relationships that employ nonlinear parameters.
Complex forecasting methods are only accurate when there is little uncertainty about
relationships now and in the future, where the data are subject to little error, and where
the causal variables can be accurately forecast. These conditions do not apply to
climate forecasting. Thus, simple methods are recommended.
The use of complex models when uncertainty is high is at odds with the evidence
from forecasting research (e.g., Allen and Fildes 2001, Armstrong 1985, Duncan, Gorr
and Szczypula 2001, Wittink and Bergestuen 2001). Models for forecasting variations
in climate are not an exception to this rule. Halide and Ridd (2007) compared
predictions of El Niño-Southern Oscillation events from a simple univariate model
with those from other researchers’ complex models. Some of the complex models
were dynamic causal models incorporating laws of physics. In other words, they were
similar to those upon which the IPCC authors depended. Halide and Ridd’s simple
model was better than all eleven of the complex models in making predictions about
the next three months. All models performed poorly when forecasting further ahead.
The use of complex methods makes criticism difficult and prevents forecast users
from understanding how forecasts were derived. One effect of this exclusion of others
from the forecasting process is to reduce the chances of detecting errors.
Do not use fit to develop the model (Principle 9.3)
It was not clear to us to what extent the models described in Chapter 8 (or in Chapter 9
by Hegerl et al. 2007) are either based on, or have been tested against, sound empirical
data. However, some statements were made about the ability of the models to fit
historical data, after tweaking their parameters. Extensive research has shown that the
ability of models to fit historical data has little relationship to forecast accuracy (See
“Evaluating forecasting methods” in Armstrong 2001.) It is well known that fit can be
improved by making a model more complex. The typical consequence of increasing
complexity to improve fit, however, is to decrease the accuracy of forecasts.
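A small synthetic demonstration of this principle follows; it is ours, not anything from Chapter 8, and the series is artificial. As polynomial degree increases, fit to the estimation sample improves while out-of-sample forecast error deteriorates.

```python
import numpy as np

# Demonstration (ours, synthetic data): more complex models fit the
# estimation sample better yet forecast held-out observations worse.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 40)
y = 2.0 * t + rng.normal(0.0, 0.5, size=t.size)  # weak trend plus noise

train, test = slice(0, 30), slice(30, 40)
for degree in (1, 3, 7):
    coeffs = np.polyfit(t[train], y[train], degree)
    fit_mae = np.mean(np.abs(np.polyval(coeffs, t[train]) - y[train]))
    forecast_mae = np.mean(np.abs(np.polyval(coeffs, t[test]) - y[test]))
    print(f"degree {degree}: fit MAE {fit_mae:.2f}, "
          f"forecast MAE {forecast_mae:.2f}")
```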
Use out-of-sample (ex ante) error measures (Principle 13.26)
Chapter 8 did not provide evidence on the relative accuracy of ex ante long-term
forecasts from the models used to generate the IPCC’s forecasts of climate change. It
would have been feasible to assess the accuracy of alternative forecasting methods for
medium- to long-term forecasts by using “successive updating.” This involves
withholding data on a number of years, then providing forecasts for one-year ahead,
then two-years ahead, and so on up to, say, 20 years. The actual years could be
disguised during these validation procedures. Furthermore, the years could be reversed
(without telling the forecasters) to assess back-casting accuracy. If, as is suggested by
forecasting principles, the models were unable to improve on the accuracy of forecasts
from the naïve method in such tests, there would be no reason to suppose that accuracy
would improve for longer forecasts. “Evaluating forecasting methods” in Armstrong
2001 provides evidence on this principle.
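The following is a sketch of the successive-updating scheme described above, using a hypothetical series and two simple candidate methods; an actual validation would use the historical temperature record and the modelers' own methods.

```python
import numpy as np

# Sketch (ours) of "successive updating" (rolling-origin validation):
# hold out later observations, forecast 1..max_h steps ahead from each
# origin, and compare ex ante errors against the naive no-change
# forecast. The series here is hypothetical.

def naive(history, h):
    return history[-1]  # no-change forecast

def trend(history, h):
    slope = (history[-1] - history[0]) / (len(history) - 1)
    return history[-1] + h * slope

def rolling_origin_mae(series, forecast_fn, start, max_h):
    errors = {h: [] for h in range(1, max_h + 1)}
    for origin in range(start, len(series) - max_h + 1):
        history = series[:origin]
        for h in range(1, max_h + 1):
            err = abs(forecast_fn(history, h) - series[origin + h - 1])
            errors[h].append(err)
    return {h: float(np.mean(e)) for h, e in errors.items()}

rng = np.random.default_rng(2)
temps = 14.0 + np.cumsum(rng.normal(0.0, 0.1, 100))  # hypothetical series
for name, fn in (("naive", naive), ("trend", trend)):
    mae = rolling_origin_mae(temps, fn, start=50, max_h=20)
    print(name, {h: round(mae[h], 3) for h in (1, 5, 10, 20)})
```

If a candidate method cannot beat the naive benchmark in such ex ante comparisons, there is no basis for expecting it to do so at longer horizons.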
SUMMARY OF AUDIT FINDINGS
Our ratings of the processes used to generate the forecasts presented in the IPCC report
are provided on the Public Policy Forecasting Special Interest Group Page at
forecastingprinciples.com. These ratings have been posted since the time that our
paper was presented at the International Symposium on Forecasting in New York in
late June 2007.
Prior to the publication of this paper, we invited other researchers, using messages
to email lists and web sites, to replicate our audit by providing their own ratings. In
addition, we asked for information about any relevant principles that have not been
included in the Forecasting Audit. At the time of writing, we have received neither
alternative ratings nor evidence for additional relevant principles.
The many violations provide further evidence that the IPCC authors were unaware
of evidence-based principles for forecasting. If they were aware of them, it would have
been incumbent on them to present evidence to justify their departures from the
principles. They did not do so. We conclude that because the forecasting processes
examined in Chapter 8 overlook scientific evidence on forecasting, the IPCC forecasts
of climate change are not scientific.
We invite others to provide evidence-based audits of what they believe to be
scientific forecasts relevant to climate change. These can be posted on web sites to
ensure that readers have access to the audits. As with peer review, we will require all relevant information about those who conduct the audits before posting them on publicpolicyforecasting.com.
Climate change forecasters and their clients should use the Forecasting Audit early
and often. Doing so would help to ensure that they are using appropriate forecasting
procedures. Outside evaluators should also be encouraged to conduct audits. The audit
reports should be made available to both the sponsors of the study and the public by
posting on an open web site such as publicpolicyforecasting.com.
CLIMATE FORECASTERS’ USE OF THE SCIENTIFIC LITERATURE ON
FORECASTING METHODS
Bryson (1993) wrote that while it is obvious that a statement about what climate will result from a doubling of CO2 is a forecast, “I have not yet heard, at any of the many environmental congresses and symposia that I have attended, a discussion of forecasting methodology applicable to the environment” (p. 791).
We looked for evidence that climate modelers relied on scientific studies on the
proper use of forecasting methods. In one approach, in April and June 2007, we used the
Advanced Search function of Google Scholar to get a general sense of the extent to
which climate forecasters refer to scientific studies on forecasting. When we searched
for “global warming” and “forecasting principles,” we found no relevant sites. Nor did
we find any relevant citations of “forecastingprinciples.com” and “global warming.” Nor
were there any relevant citations of the relevant-sounding paper, “Forecasting for
Environmental Decision-Making” (Armstrong 1999) published in a book with a relevant
title: Tools to Aid Environmental Decision Making. A search for “global warming” and
the best-selling textbook on forecasting methods (Makridakis et al. 1998) revealed two
citations, neither related to the prediction of global mean temperatures. Finally, there
were no citations of research on causal models (e.g., Allen and Fildes 2001).
Using the titles of the papers, we independently examined the references in Chapter
8 of the IPCC Report. The Chapter contained 788 references. Of these, none had any
apparent relationship to forecasting methodology. Our examination was not difficult as
most papers had titles such as, “Using stable water isotopes to evaluate basin-scale
simulations of surface water budgets,” and, “Oceanic isopycnal mixing by coordinate
rotation.”
Finally, we examined the 23 papers that we were referred to by our survey
respondents. These included Chapter 10 of the IPCC Report (Meehl et al. 2007). One
respondent provided references to eight papers all by the same author
(Abdussamatov). We obtained copies of three of those papers and abstracts of three
others and found no evidence that the author had referred to forecasting research. Nor
did any of the remaining 15 papers include any references to research on forecasting.
We also examined the 535 references in Chapter 9. Of these, 17 had titles that
suggested the article might be concerned at least in part with forecasting methods.
When we inspected the 17 articles, we found that none of them referred to the
scientific literature on forecasting methods.
It is difficult to understand how scientific forecasting could be conducted without
reference to the research literature on how to make forecasts. One would expect to see
empirical justification for the forecasting methods that were used. We concluded that
climate forecasts are informed by the modelers’ experience and by their models—but
that they are unaided by the application of forecasting principles.
CONCLUSIONS
To provide forecasts of climate change that are useful for policy-making, one would
need to prepare forecasts of (1) temperature changes, (2) the effects of any temperature
changes, and (3) the effects of feasible proposed policy changes. To justify policy
changes based on climate change, policy makers need scientific forecasts for all three
forecasting problems. If governments implement policy changes without such
justification, they are likely to cause harm.
We have shown that failure occurs with the first forecasting problem: predicting
temperature over the long term. Specifically, we have been unable to find a scientific
forecast to support the currently widespread belief in “global warming.” Climate is
complex and there is much uncertainty about causal relationships and data. Prior
research on forecasting suggests that in such situations a naïve (no change) forecast
would be superior to current predictions. Note that recommending the naïve forecast
does not mean that we believe that climate will not change. It means that we are not
convinced that current knowledge about climate is sufficient to make useful long-term
forecasts about climate. Policy proposals should be assessed on that basis.
Based on our literature searches, those forecasting long-term climate change have
no apparent knowledge of evidence-based forecasting methods, so we expect that
similar conclusions would apply to the other two necessary parts of the forecasting
problem.
Many policies have been proposed in association with claims of global warming. It
is not our purpose in this paper to comment on specific policy proposals, but it should
be noted that policies may be valid regardless of future climate changes. To assess this,
it would be necessary to directly forecast costs and benefits assuming that climate does
not change or, even better, to forecast costs and benefits under a range of possible
future climates.
Public policy makers owe it to the people who would be affected by their policies
to base them on scientific forecasts. Advocates of policy changes have a similar
obligation. We hope that in the future, climate scientists with diverse views will
embrace forecasting principles and will collaborate with forecasting experts in order
to provide policy makers with scientific forecasts of climate.
ACKNOWLEDGEMENTS
We thank P. Geoffrey Allen, Robert Carter, Alfred Cuzán, Robert Fildes, Paul
Goodwin, David Henderson, Jos de Laat, Ross McKitrick, Kevin Trenberth, Timo van
Druten, Willie Soon, and Tom Yokum for helpful suggestions on various drafts of the
paper. We are also grateful for the suggestions of three anonymous reviewers. Our
acknowledgement does not imply that all of the reviewers agreed with all of our
findings. Rachel Zibelman and Hester Green provided editorial support.
REFERENCES
Allen, P.G. and Fildes, R. (2001). Econometric Forecasting, in Armstrong, J.S. ed. Principles of Forecasting: A Handbook for Researchers and Practitioners. Norwell, MA: Kluwer.
Anderson, R.W. and Gainor, D. (2006). Fire and Ice: Journalists have warned of climate change
for 100 years, but can’t decide weather we face an ice age or warming. Business and Media
Institute, May 17. Available at http://www.businessandmedia.org/specialreports/2006/fireandice/FireandIce.pdf
Armstrong, J.S. (1980). The Seer-Sucker theory: The value of experts in forecasting.
Technology Review, 83 (June-July), 16-24.
Armstrong, J.S. (1978; 1985). Long-Range Forecasting: From Crystal Ball to Computer. New
York: Wiley-Interscience.
Armstrong, J.S. (1999). Forecasting for environmental decision-making, in Dale, V.H. and
English, M.E. eds., Tools to Aid Environmental Decision Making. New York: Springer-Verlag,
192-225.
Armstrong, J.S. (2001). Principles of Forecasting: A Handbook for Researchers and
Practitioners. Kluwer Academic Publishers.
Armstrong, J.S. (2006). Findings from evidence-based forecasting: Methods for reducing
forecast error. International Journal of Forecasting, 22, 583-598.
Ascher, W. (1978). Forecasting: An Appraisal for Policy Makers and Planners. Baltimore, MD: Johns Hopkins University Press.
Balling, R. C. (2005). Observational surface temperature records versus model predictions, In
Michaels, P. J. ed. Shattered Consensus: The True State of Global Warming. Lanham, MD:
Rowman & Littlefield, 50-71.
Bast, J. and Taylor, J.M. (2007). Scientific consensus on global warming. The Heartland
Institute: Chicago, Illinois. Available at http://downloads.heartland.org/20861.pdf. [The
responses to all questions in the 1996 and 2003 surveys by Bray and von Storch are included as
an appendix.]
Bellamy, D. and Barrett, J. (2007). Climate stability: an inconvenient proof. Proceedings of the
Institution of Civil Engineers – Civil Engineering, 160, 66-72.
Bryson, R.A. (1993). Environment, environmentalists, and global change: A skeptic’s
evaluation, New Literary History, 24, 783-795.
Carter, R.M. (2007). The myth of dangerous human-caused climate change. The Aus/MM New
Leaders Conference, Brisbane May 3, 2007. Available at
http://members.iinet.net.au/~glrmc/new_page_1.htm
Carter, R.M., de Freitas, C.R., Goklany, I.M., Holland, D. and Lindzen, R.S. (2006). The Stern review: A dual critique: Part 1. World Economics, 7, 167-198.
Cerf, C. and Navasky, V. (1998). The Experts Speak. New York: Pantheon.
Christy, J. (2005). Temperature Changes in the Bulk Atmosphere: Beyond the IPCC, In
Michaels, P. J. ed. Shattered Consensus: The True State of Global Warming. Lanham, MD:
Rowman & Littlefield, 72-105.
Craig, P.P., Gadgil, A., and Koomey, J.G. (2002). What can history teach us? A retrospective
examination of long-term energy forecasts for the United States. Annual Review of Energy and
the Environment, 27, 83-118.
Dyson, F. (2007). Heretical thoughts about science and society. Edge: The Third Culture, 08/08/07. Available at http://www.edge.org/3rd_culture/dysonf07/dysonf07_index.html
Duncan, G. T., Gorr W. L. and Szczypula, J. (2001). Forecasting Analogous Time Series, in
Armstrong, J. S. ed. Principles of Forecasting: A Handbook for Researchers and Practitioners.
Norwell, MA: Kluwer.
Eccleston, P. (2007). Public ‘in denial’ about climate change. telegraph.co.uk, 12:01 BST 03/07/2007. Available at http://www.telegraph.co.uk/core/Content/displayPrintable.jhtml;jse...MGSFFOAVCBQWIV0?xml=/earth/2007/07/03/eawarm103.xml&site=30&page=0
Essex, C., McKitrick, R. and Andresen, B. (2007). Does a global temperature exist? Journal of
Non-Equilibrium Thermodynamics, 32, 1-27. Working paper available at http://www.uoguelph.ca/~rmckitri/research/globaltemp/globaltemp.html
Essex, C. and McKitrick, R. (2002). Taken by Storm. The Troubled Science, Policy & Politics
of Global Warming, Toronto: Key Porter Books.
Frauenfeld, O.W. (2005). Predictive Skill of the El Niño-Southern Oscillation and Related Atmospheric Teleconnections, In Michaels, P.J. ed. Shattered Consensus: The True State of Global Warming. Lanham, MD: Rowman & Littlefield, 149-182.
Halide, H. and Ridd, P. (2007). Complicated ENSO models do not significantly outperform very simple ENSO models. International Journal of Climatology, in press.
Henderson, D. (2007). Governments and climate change issues: The case for rethinking. World Economics, 8, 183-228.
Hegerl, G.C., Zwiers, F.W., Braconnot, P., Gillett, N.P., Luo, Y., Marengo Orsini, J.A., Nicholls,
N., Penner, J.E. and Stott, P.A. (2007). Understanding and Attributing Climate Change, in
Solomon, S., Qin, D., Manning, M., Chen, Z., Marquis, M., Averyt, K.B., Tignor, M. and Miller,
H.L. (eds.), Climate Change 2007: The Physical Science Basis. Contribution of Working Group
I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change.
Cambridge, United Kingdom and New York, NY, USA: Cambridge University Press.
Keepin, B. and Wynne, B. (1984). Technical analysis of IIASA energy scenarios. Nature, 312,
691-695.
Le Treut, H., Somerville, R., Cubasch, U., Ding, Y., Mauritzen, C., Mokssit, A., Peterson, T. and
Prather, M. (2007). Historical Overview of Climate Change, in Solomon, S., Qin, D., Manning,
M., Chen, Z., Marquis, M., Averyt, K.B., Tignor, M. and Miller, H.L. (eds.), Climate Change
2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment
Report of the Intergovernmental Panel on Climate Change. Cambridge, United Kingdom and
New York, NY, USA: Cambridge University Press.
Makridakis, S., Wheelwright, S.C., and Hyndman, R.J. (1998). Forecasting: Methods and Applications (3rd ed.), Hoboken, NJ: John Wiley.
NDU (1978). Climate Change to the Year 2000. Washington, D.C.: National Defense
University.
New Zealand Climate Science Coalition (2007). World climate predictors right only half the
time. Media release 7 June. Available at http://www.scoop.co.nz/stories/SC0706/S00026.htm
Pilkey, O.H. and Pilkey-Jarvis, L. (2007). Useless Arithmetic: Why Environmental Scientists Can’t Predict the Future. New York: Columbia University Press.
Posmentier, E. S. and Soon, W. (2005). Limitations of Computer Predictions of the Effects of
Carbon Dioxide on Global Temperature, In Michaels, P. J. ed. Shattered Consensus: The True
State of Global Warming. Lanham, MD: Rowman & Littlefield, 241-281.
Randall, D.A., Wood, R.A., Bony, S., Colman, R., Fichefet, T., Fyfe, J., Kattsov, V., Pitman, A.,
Shukla, J., Srinivasan, J., Stouffer, R. J., Sumi, A. and Taylor, K.E. (2007). Climate Models and
Their Evaluation, in Solomon, S., Qin, D., Manning, M., Chen, Z., Marquis, M., Averyt, K.B.,
Tignor, M. and Miller, H.L. eds., Climate Change 2007: The Physical Science Basis.
Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental
Panel on Climate Change. Cambridge, UK and New York, NY, USA: Cambridge University
Press.
Schnaars, S.P. and Bavuso, R.J. (1986). Extrapolation models on very short-term forecasts.
Journal of Business Research, 14, 27-36.
Soon, W. (2007). Implications of the secondary role of carbon dioxide and methane forcing in climate change: Past, present and future. Physical Geography, in press.
Stainforth, D.A., Aina, T., Christensen, C., Collins, M., Faull, N., Frame, D.J., Kettleborough,
J.A., Knight, S., Martin, A., Murphy, J.M., Piani, C., Sexton, D., Smith, L.A., Spicer, R.A.,
Thorpe, A.J. and Allen, M.R. (2005). Uncertainty in predictions of the climate response to rising
levels of greenhouse gases, Nature, 433, 403-406.
Stern, N. (2007). The Economics of Climate Change: The Stern Review. New York: Cambridge
University Press. Available from http://www.hmtreasury.gov.uk/independent_reviews/stern_review_economics_climate_change/sternreview_index.cfm
Stewart, T.R. and Glantz, M.H. (1985). Expert judgment and climate forecasting: A methodological critique of ‘Climate Change to the Year 2000’. Climatic Change, 7, 159-183.
Stott, P.A. and Kettleborough, J.A. (2002). Origins and estimates of uncertainty in predictions
of twenty-first century temperature rise, Nature, 416, 723-726.
Taylor, M. (2007). An evaluation of NIWA’s climate predictions for May 2002 to April 2007. Climate Science Coalition. Available at http://www.climatescience.org.nz/assets/2007691051580.ClimateUpdateEvaluationText.pdf. Data available at http://www.climatescience.org.nz/assets/2007691059100.ClimateUpdateEvaluationCalc.xls.pdf
Tetlock, P.E. (2005). Expert Political Judgment: How Good Is It? How Can We Know?
Princeton, NJ: Princeton University Press.
Trenberth, K.E. (2007). Predictions of climate. Climate Feedback: The Climate Change Blog,
Nature.com, June 4. Available at
http://blogs.nature.com/climatefeedback/2007/06/predictions_of_climate.html
Vizard, A.L., Anderson, G.A., and Buckley, D.J. (2005). Verification and value of the Australian
Bureau of Meteorology township seasonal rainfall forecasts in Australia, 1997-2005.
Meteorological Applications, 12, 343-355.
Wittink, D. and Bergestuen, T. (2001). Forecasting with Conjoint Analysis, in Armstrong, J.S. ed. Principles of Forecasting: A Handbook for Researchers and Practitioners. Norwell, MA: Kluwer.
APPENDIX A: PEOPLE TO WHOM WE SENT OUR QUESTIONNAIRE
(* indicates a relevant response)
IPCC Working Group 1
Myles Allen, Richard Alley, Ian Allison, Peter Ambenje, Vincenzo Artale, Paulo
Artaxo, Alphonsus Baede, Roger Barry, Terje Berntsen, Richard A. Betts, Nathaniel
L. Bindoff, Roxana Bojariu, Sandrine Bony, Kansri Boonpragob, Pascale Braconnot,
Guy Brasseur, Keith Briffa, Aristita Busuioc, Jorge Carrasco, Anny Cazenave,
Anthony Chen*, Amnat Chidthaisong, Jens Hesselbjerg Christensen, Philippe Ciais*,
William Collins, Robert Colman*, Peter Cox, Ulrich Cubasch, Pedro Leite Da Silva
Dias, Kenneth L. Denman, Robert Dickinson, Yihui Ding, Jean-Claude Duplessy,
David Easterling, David W. Fahey, Thierry Fichefet*, Gregory Flato, Piers M. de F.
Forster*, Pierre Friedlingstein, Congbin Fu, Yoshiyuki Fuji, John Fyfe, Xuejie Gao,
Amadou Thierno Gaye*, Nathan Gillett*, Filippo Giorgi, Jonathan Gregory*, David
Griggs, Sergey Gulev, Kimio Hanawa, Didier Hauglustaine, James Haywood,
Gabriele Hegerl*, Martin Heimann*, Christoph Heinze, Isaac Held*, Bruce Hewitson,
Elisabeth Holland, Brian Hoskins, Daniel Jacob, Bubu Pateh Jallow, Eystein Jansen*,
Philip Jones, Richard Jones, Fortunat Joos, Jean Jouzel, Tom Karl, David Karoly*,
Georg Kaser, Vladimir Kattsov, Akio Kitoh, Albert Klein Tank, Reto Knutti, Toshio
Koike, Rupa Kumar Kolli, Won-Tae Kwon, Laurent Labeyrie, René Laprise, Corrine
Le Quéré, Hervé Le Treut, Judith Lean, Peter Lemke, Sydney Levitus, Ulrike
Lohmann, David C. Lowe, Yong Luo, Victor Magaña Rueda, Elisa Manzini, Jose
Antonio Marengo, Maria Martelo, Valérie Masson-Delmotte, Taroh Matsuno, Cecilie
Mauritzen, Bryant Mcavaney, Linda Mearns, Gerald Meehl, Claudio Guillermo
Menendez, John Mitchell, Abdalah Mokssit, Mario Molina, Philip Mote*, James
Murphy, Gunnar Myhre, Teruyuki Nakajima, John Nganga, Neville Nicholls, Akira
Noda, Yukihiro Nojiri, Laban Ogallo, Daniel Olago, Bette Otto-Bliesner, Jonathan
Overpeck*, Govind Ballabh Pant, David Parker, Wm. Richard Peltier, Joyce Penner*,
Thomas Peterson*, Andrew Pitman, Serge Planton, Michael Prather*, Ronald Prinn,
Graciela Raga, Fatemeh Rahimzadeh, Stefan Rahmstorf, Jouni Räisänen, Srikanthan
(S.) Ramachandran, Veerabhadran Ramanathan, Venkatachalam Ramaswamy,
Rengaswamy Ramesh, David Randall*, Sarah Raper, Dominique Raynaud, Jiawen
Ren, James A. Renwick, David Rind, Annette Rinke, Matilde M. Rusticucci,
Abdoulaye Sarr, Michael Schulz*, Jagadish Shukla, C. K. Shum, Robert H. Socolow*,
Brian Soden, Olga Solomina*, Richard Somerville*, Jayaraman Srinivasan, Thomas
Stocker, Peter A. Stott*, Ron Stouffer, Akimasa Sumi, Lynne D. Talley, Karl E.
Taylor*, Kevin Trenberth*, Alakkat S. Unnikrishnan, Rob Van Dorland, Ricardo
Villalba, Ian G. Watterson*, Andrew Weaver*, Penny Whetton, Jurgen Willebrand,
Steven C. Wofsy, Richard A. Wood, David Wratt, Panmao Zhai, Tingjun Zhang, De’er
Zhang, Xiaoye Zhang, Zong-Ci Zhao, Francis Zwiers*
Union of Concerned Scientists
Brenda Ekwurzel, Peter Frumhoff, Amy Lynd Luers
Channel 4 “The Great Global Warming Swindle” documentary (2007)
Bert Bolin, Piers Corbyn*, Eigil Friis-Christensen, James Shikwati, Frederick Singer,
Carl Wunsch*
Wikipedia’s list of global warming “skeptics”
Khabibullo Ismailovich Abdusamatov*, Syun-Ichi Akasofu*, Sallie Baliunas, Tim
Ball, Robert Balling*, Fred Barnes, Joe Barton, Joe Bastardi, David Bellamy, Tom
Bethell, Robert Bidinotto, Roy Blunt, Sonja Boehmer, Andrew Bolt, John Brignell*,
Nigel Calder, Ian Castles*, George Chilingarian, John Christy*, Ian Clark, Philip
Cooney, Robert Davis, David Deming*, David Douglass, Lester Hogan, Craig Idso,
Keith Idso, Sherwood Idso, Zbigniew Jaworowski, Wibjorn Karlen, William
Kininmonth, Nigel Lawson, Douglas Leahey, David Legates, Richard Lindzen*, Ross
Mckitrick*, Patrick Michaels, Lubos Motl*, Kary Mullis, Tad Murty, Tim Patterson,
Benny Peiser*, Ian Plimer, Arthur Robinson, Frederick Seitz, Nir Shaviv, Fred Smith,
Willie Soon, Thomas Sowell, Roy Spencer, Philip Stott, Hendrik Tennekes, Jan Veizer,
Peter Walsh, Edward Wegman
Other sources
Daniel Abbasi, Augie Auer, Bert Bolin, Jonathan Boston, Daniel Botkin*, Reid
Bryson, Robert Carter*, Ralph Chapman, Al Gore, Kirtland C. Griffin*, David
Henderson, Christopher Landsea*, Bjorn Lomborg, Tim Osborn, Roger Pielke*,
Henrik Saxe, Thomas Schelling*, Matthew Sobel, Nicholas Stern*, Brian Valentine*,
Carl Wunsch*, Antonio Zichichi.