BEFORE THE ENVIRONMENT COURT
IN THE MATTER OF the Resource Management Act 1991
AND IN THE MATTER OF an Appeal under section 120 of the Act
BETWEEN ROCH PATRICK SULLIVAN, Appellant
AND CENTRAL OTAGO DISTRICT COUNCIL, First Respondent
AND OTAGO REGIONAL COUNCIL, Second Respondent
AND MERIDIAN ENERGY LIMITED, Applicant
STATEMENT OF EVIDENCE OF KESTEN C GREEN ON BEHALF OF ROCH
PATRICK SULLIVAN
Qualifications and Experience
1. My name is KESTEN C. GREEN. I am a Senior Research Fellow of Monash
University, Australia.
2. In 1985 I was a founder and director of Infometrics Limited, and in 1982 a founder and
director of the publisher of Bettor Informed, a computerized horse-race forecasting
magazine. I have been involved in research on forecasting, and the practice of it, more-or-
less continuously ever since. In 1995 I founded the consulting firm Decision Research
Ltd. In 2003 I obtained my PhD from Victoria University of Wellington for a comparison
of forecasting methods. One of my articles based on that research was awarded
International Journal of Forecasting Best Paper for 2002-2003. In 2004 I was a visiting
senior lecturer at Monash University's Business and Economic Forecasting Unit and was
appointed a Senior Research Fellow at the end of that visit.
3. I am co-owner and co-director of the public service forecasting information web site,
http://forecastingprinciples.com. The site is the top non-sponsored item in the Dogpile1
list of sites from a search for "forecasting". Among other items on the site, I am co-author
of "Answers to Frequently Asked Questions", "Selection Tree for Forecasting Methods",
and "Methodology Tree for Forecasting", each of which has had more than 18,000 visits
since June 2006. I recently established the special interest group pages
http://publicpolicyforecasting.com to encourage the use of scientific forecasting for
public policy decisions.
4. I am a member of the Institute for Operations Research and the Management Sciences
(INFORMS), the Decision Analysis Society, the International Association for Conflict
Management (IACM), the Society for Judgment and Decision Making (SJDM), and the
International Institute of Forecasters (IIF). I am on the editorial board of the IIF journal
Foresight: The International Journal of Applied Forecasting.
1 http://Dogpile.com: An internet search engine that compiles findings from the best search engines.
5. Articles
Armstrong, J. S., Green, K. C., & Soon, W. (2008). Polar bear population forecasts: A
public-policy forecasting audit. Interfaces, accepted for publication.
Green, K. C. & Tashman, L. (2008). Should we define forecast error as e = F - A or e =
A - F? Foresight, 10, 38-40.
Green, K. C. & Armstrong, J. S. (2007). Global warming: Forecasts by scientists versus
scientific forecasts. Energy and Environment, 18, 997-1021.
Green, K. C., Armstrong, J. S., & Graefe, A. (2007). Methods to Elicit Forecasts from
Groups: Delphi and Prediction Markets Compared. Foresight, 8, 17-20.
Green, K. C. & Armstrong, J. S. (2007). Structured analogies for forecasting.
International Journal of Forecasting, 23, 365-376.
Green, K. C. & Armstrong, J. S. (2007). The value of expertise for forecasting
decisions in conflicts. Interfaces, 37(3), 287-299.
Armstrong, J. S. & Green, K. C. (2007). Competitor-oriented objectives: The myth of
market share. International Journal of Business, 12(1), 117-136.
Green, K. C. (2005). Game theory, simulated interaction, and unaided judgement for
forecasting decisions in conflicts: Further evidence. International Journal of
Forecasting, 21, 463-472.
Green, K. C. & Armstrong, J. S. (2005). The war in Iraq: Should we have expected
better forecasts? Foresight, 2, 50-52.
Green, K. C. (2005). What can forecasting do for you? Foresight, 1(1), 53-54.
Green, K. C. (2003). Do practitioners care about findings from management research?
Interfaces, 33(6), 105-107.
Green, K. C. (2002). Forecasting decisions in conflict situations: a comparison of game
theory, role-playing, and unaided judgement. International Journal of Forecasting, 18,
321-344.
Green, K. C. (2002). Embroiled in a conflict: who do you call? International Journal of
Forecasting, 18, 389-395.
6. Working papers and unpublished reports
Armstrong, J. S., Green, K. C., Jones, R., & Wright, M. (2008). Predicting elections
from politicians' faces. MPRA Paper No. 9150.
Green, K. C. (2008). Assessing probabilistic forecasts about particular situations.
MPRA Paper No. 8836.
Green, K. C., Armstrong, J. S., Bush, R. M., & Morse, E. L. (2006). Impact of role
playing on the accuracy of predictions in the intelligence community. Report on
research commissioned by Disruptive Technology Office, contract 06-894-6336.
Armstrong, J. S. & Green, K. C. (2005). Demand forecasting: Evidence-based methods,
Monash University Department of Econometrics and Business Statistics Working
Paper 24-05.
Armstrong, J. S. & Green, K. C. (2005). Evidence-based methods for predicting
terrorists' decisions: Two new methods and one old method. Paper commissioned by
CENTRA Technology, Inc.
7. The article "Polar bear population forecasts: A public-policy forecasting audit" (listed
above) was the basis of testimony presented to the U.S. Senate Environment and Public
Works Committee hearing on the proposal to list polar bears as an endangered species.
8. I have been engaged by the appellant, Roch Patrick Sullivan, to present scientific evidence
on whether forecasts of dangerous manmade global warming are valid. The evidence I
present is within my area of expertise as a forecasting researcher. I have not omitted to
consider material facts known to me that might alter or detract from my statement.
9. I have read the Code of Conduct for Expert Witnesses in the Environment Court set out
in the Consolidated Practice Note 2006. My statement of evidence has been prepared in
conformity with the principles and practices in the Practice Note.
Validity of Climate Forecasts
10. There are currently no scientific forecasts of long-term climate. To take expensive actions
on the basis of speculation that the Earth's climate will become dangerously warmer, or
colder, over the rest of this century would be irresponsible.
11. Dangerous manmade global warming is not an empirical phenomenon but rather is
asserted on the basis of an assortment of forecasts. The principal one of these is a conditional
forecast that temperatures will increase substantially over coming decades if human
emissions of the odorless and life-promoting gas carbon dioxide are not dramatically
curtailed.
12. Associated forecasts are that dramatic weather and diseases will increase, sea levels will
rise, and food production will suffer as a consequence of increased temperatures. Still
further forecasts are that proposed policies will be effective in reducing human emissions
of CO2 and that the cost of that reduction will be much less than the benefit of having
done so.
13. These are heroic forecasts. They are also important forecasts because their acceptance
implies costly actions should be taken. Should we believe them?
14. One way to decide whether to believe forecasts of dangerous manmade global warming is
to judge how believable they seem. Research on persuasion shows that repetition, vivid imagery,
and detailed scenarios are very effective. We have all seen these methods used to sell
forecasts of dangerous global warming, I'm sure.
15. Another way to decide would be to submit to authority. The primary source of the
forecasts of dangerous manmade global warming promulgated in the media, the United
Nations' Intergovernmental Panel on Climate Change and the senior people associated
with it, certainly convey a sense of authority.
16. A third way would be to assess whether the procedures used to derive this dramatic
assortment of forecasts were scientific; in other words, whether there is any empirical
evidence that the procedures could be expected to produce valid forecasts. Only this third
way is consistent with science.
17. Scientific research on forecasting has been conveniently summarized in the form of
principles or guidelines. Here is an example of a principle ...
Be conservative in situations of high uncertainty or instability (Principle 7.3)
Forecasts should be conservative when a situation is unstable, complex or
uncertain. Being conservative means moving forecasts towards "no change" or,
in cases that exhibit a well established long-term trend and where there is no
reason to expect the trend to change, being conservative means moving forecasts
toward the trend line. A long-term trend is one that has been evident over a
period that is much longer than the period being forecast. Conservatism is a
fundamental principle in forecasting.
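By way of illustration only, the short sketch below (in Python) shows one way the principle can be applied: a raw trend extrapolation is pulled toward the conservative "no change" benchmark. The series, the linear trend, and the damping weight are my own illustrative assumptions, not part of the principle or of any particular forecasting exercise.

```python
# A minimal sketch of Principle 7.3 (conservatism); illustrative assumptions only.
import numpy as np

def conservative_forecast(history, horizon, weight=0.5):
    """Blend a long-term-trend extrapolation toward a 'no change' benchmark.

    history -- past observations, oldest first (much longer than the horizon)
    horizon -- number of periods ahead to forecast
    weight  -- 1.0 keeps the raw trend extrapolation, 0.0 keeps 'no change'
    """
    history = np.asarray(history, dtype=float)
    t = np.arange(len(history))
    slope, intercept = np.polyfit(t, history, 1)  # long-term trend over the whole history
    no_change = history[-1]                       # conservative benchmark

    forecasts = []
    for h in range(1, horizon + 1):
        trend = intercept + slope * (len(history) - 1 + h)
        forecasts.append(weight * trend + (1 - weight) * no_change)
    return forecasts

# Example: 30 periods of noisy observations, forecast 5 periods ahead.
rng = np.random.default_rng(0)
series = 14.0 + 0.01 * np.arange(30) + rng.normal(0.0, 0.1, 30)
print(conservative_forecast(series, horizon=5))
```

The greater the uncertainty or instability, the more weight such a procedure gives to the "no change" benchmark.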
18. The principles are the distillation of more than half a century of scientific research on
forecasting in many fields, including demography, economics, engineering, finance,
management, medicine, psychology, politics, and weather. Evidence from all of these fields
was drawn upon to ensure that all relevant evidence was taken into account and that the
principles would apply to all types of forecasting problem. The work of summarizing the
research was done by 39 authors and 123 reviewers for Professor J. Scott Armstrong's 2001
handbook, Principles of Forecasting. The principles are also available on the internet at
forecastingprinciples.com.
19. Some important principles are counter-intuitive. As a consequence, it is reasonable that
decision makers and the public should expect people who make forecasts to be familiar
with the principles of forecasting, just as a patient expects his physician to be familiar
with the procedures dictated by medical science.
20. You have read statements from scientists who have described the current understanding
about climate change. Perhaps the major problem with the IPCC's forecasts is that, in
such a complex and uncertain situation, they are based on unaided expert judgment. By
unaided, I mean unaided by evidence-based forecasting procedures.
Experts' forecasts of climate changes have long been newsworthy and a cause of worry
for people. Here are some headlines from the New York Times:
Sept. 18, 1924: MacMillan Reports Signs of New Ice Age
March 27, 1933: America in Longest Warm Spell Since 1776
May 21, 1974: Scientists Ponder Why World's Climate Is Changing: A Major Cooling Widely Considered to be Inevitable
Dec. 27, 2005: Past Hot Times Hold Few Reasons to Relax About New Warming
21. Professor Armstrong, in a summary of the empirical evidence on unaided forecasts by
experts (1980), found that beyond a basic minimum level the amount of experience makes
no difference to forecast accuracy. More recently, Berkeley Professor Phil Tetlock studied
expert predictions about matters of economics and global politics, subjects that are arguably
much less complex than changes to the Earth's climate over the next 100 years. His book
about the research is titled "Expert Political Judgment" (Tetlock 2005). He found that the
284 experts who participated in his research, making more than 82,000 forecasts between
them, produced forecasts that were no more accurate than those of novices or than choosing
at random what would happen.
22. The IPCC forecasts of dangerous manmade global warming were derived using computer
models. The use of computer models, however, does not amount to a scientific
forecasting procedure. With so much uncertainty about climate (Soon et al. 2001), the
models only reflect what the modelers think might happen, much like a Hollywood
disaster movie reflects the vision of the makers. One of the IPCC lead authors, Kevin
Trenberth, wrote:
'... there are no predictions by IPCC at all. And there never have been. The IPCC
instead proffers "what if" projections of future climate ...'
(Trenberth 2007)2
23. While I agree with this comment by Professor Trenberth in the sense that the IPCC has
not provided scientific forecasts, other IPCC authors and the general public appear to
believe that the IPCC does provide forecasts. It is critical that forecasts that have
important public policy implications should be fit for their intended use; in other words
derived using scientific procedures that are known to produce valid forecasts.
24. Professor Armstrong3 and I independently assessed the procedures used to derive the
IPCC's long-term global temperature "forecasts", the linchpin forecasts for dangerous
manmade global warming, against the scientific evidence on what methods were
appropriate for the task (Green and Armstrong 2007).
25. The first thing we found was that the IPCC authors seemed to be completely unaware of
scientific research on the subject of forecasting. Among the many articles that were cited
by the authors, none had any relevance to the scientific testing of forecasting methods.
26. When we conducted our audit, we found that the IPCC's Fourth Assessment Report
provided sufficient information for us to make judgments on whether their procedures
followed forecasting principles for just 89 of the 140 principles. Of those 89 principles, the
IPCC's procedures violated 72, or 81%.
2 Written by Kevin Trenberth of the Climate Analysis Section, National Center for Atmospheric Research,
and posted on Climate Feedback at nature.com on June 4, 2007.
3 Professor J. Scott Armstrong is a full professor at The Wharton School of the University of Pennsylvania.
He was a founder of the International Symposium on Forecasting, the Journal of Forecasting, the
International Journal of Forecasting, and the http://forecastingprinciples.com website. He is the author of
Long-range Forecasting: From Crystal Ball to Computer (1978, 1985) and Principles of Forecasting (2001)
and is the most-cited author on forecasting methods. His CV is available at http://jscottarmstrong.com.
27. Some individual principles that were violated are so important that violation of any one
of them alone invalidates the IPCC's forecasts. One of the key principles that were
violated was:
Keep forecasting methods simple (Principle 7.1)
The IPCC climate forecasters appear to believe that complex models are
appropriate for forecasting climate and that forecast accuracy will increase with
model complexity. That is not the case. Complex methods involve large
numbers of variables, complex interactions, and relationships that employ
nonlinear parameters. Complex forecasting methods are only accurate when
there is great certainty about relationships now and in the future, where the data
are subject to little error, and where the causal variables can be accurately
forecast. These conditions do not apply to climate forecasting, and thus simple
methods are recommended.
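The following small simulation, offered purely as an illustration and not as a representation of any climate model, shows why that condition matters: when the causal variable must itself be forecast with error and the relationship is estimated imperfectly, even a structurally correct causal model can produce larger errors than a simple "no change" forecast. All of the numbers are assumptions chosen for the illustration.

```python
# Illustrative Monte-Carlo sketch: a causal model helps only if its causal
# variable and its coefficient are known accurately. All numbers are assumed.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

current_y, current_x = 10.0, 1.0   # current values of the outcome and its driver
true_beta = 2.0                    # true effect of the driver x on the outcome y

# The future value of the driver is uncertain, and the outcome responds to it.
future_x = current_x + rng.normal(0.0, 1.0, n)
future_y = current_y + true_beta * (future_x - current_x) + rng.normal(0.0, 0.5, n)

# "Complex" causal forecast: correct structure, but the driver must first be
# forecast (with error) and the coefficient is estimated (with error).
forecast_x = current_x + rng.normal(0.0, 1.5, n)
estimated_beta = true_beta + rng.normal(0.0, 1.0, n)
causal_forecast = current_y + estimated_beta * (forecast_x - current_x)

# Simple "no change" forecast.
naive_forecast = np.full(n, current_y)

print("mean absolute error, causal model:  ", np.mean(np.abs(causal_forecast - future_y)))
print("mean absolute error, no-change rule:", np.mean(np.abs(naive_forecast - future_y)))
# With the error sizes assumed above, the no-change rule has the smaller error;
# with sufficiently accurate inputs the causal model would win instead.
```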
28. Models for forecasting climate are not an exception to this principle. Halide and Ridd
(2008) compared predictions of El Niño-Southern Oscillation events from a simple one-
variable model with those from other researchers' complex models. Some of the complex
models were dynamic causal models incorporating laws of physics. In other words, they
were similar to those upon which the IPCC authors depended. Halide and Ridd's simple
model was better than all eleven of the complex models in making predictions about the
next three months. All models performed poorly when forecasting further ahead.
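To show what a simple model of this kind looks like, the sketch below (in Python) builds a forecast from nothing more than recent lagged values of a single index, fitted by ordinary least squares. The synthetic index and the choice of five lags are my own assumptions for illustration; this is not a reproduction of Halide and Ridd's model.

```python
# A minimal sketch of a simple one-variable lagged model; illustrative only.
import numpy as np

def fit_lagged_model(series, n_lags=5):
    """Fit y[t] = b0 + b1*y[t-1] + ... + bk*y[t-k] by ordinary least squares."""
    series = np.asarray(series, dtype=float)
    rows = [series[t - n_lags:t][::-1] for t in range(n_lags, len(series))]
    X = np.column_stack([np.ones(len(rows)), np.array(rows)])
    y = series[n_lags:]
    coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coefs

def forecast_next(series, coefs, n_lags=5):
    """One-step-ahead forecast from the most recent n_lags observations."""
    recent = np.asarray(series, dtype=float)[-n_lags:][::-1]
    return float(coefs[0] + np.dot(coefs[1:], recent))

# Example with a synthetic oscillating index standing in for a climate index.
rng = np.random.default_rng(2)
t = np.arange(240)
index = np.sin(2 * np.pi * t / 48) + rng.normal(0.0, 0.3, len(t))
coefs = fit_lagged_model(index)
print("one-step-ahead forecast:", forecast_next(index, coefs))
```

A model of this kind can be fitted and run in moments on an ordinary computer, which is part of what makes comparisons like Halide and Ridd's informative.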
29. Moreover, the use of complex methods makes criticism difficult and prevents forecast
users from understanding how forecasts were derived. One effect of this exclusion of
others from the forecasting process is to reduce the chances of detecting errors.
30. The cost of taking action on the basis of invalid forecasts is so high for climate change
that there is no good reason why efforts to forecast climate should not follow all relevant
principles.
Conclusion
31. In conclusion, forecasts of dangerous manmade global warming are not valid and there is
currently no more reason to believe that temperatures will increase over the coming
century than there is to believe they will decrease.
References
Armstrong, J. S. (1978; 1985). Long-Range Forecasting: From Crystal Ball to Computer. New
York: Wiley-Interscience.
Armstrong, J. S. (1980). The Seer-sucker theory: The value of experts in forecasting. Technology
Review, 83 (June-July), 16-24.
Armstrong, J. S. (2001). Principles of Forecasting: A Handbook for Researchers and
Practitioners. Kluwer Academic Publishers.
Green, K. C. and Armstrong, J. S. (2007). Global warming: Forecasts by scientists versus
scientific forecasts. Energy & Environment, 18, 997-1021.
Halide, H. and Ridd, P. (2008). Complicated ENSO models do not significantly outperform very
simple ENSO models. International Journal of Climatology, 28, 219-233.
Soon, W., Baliunas, S., Idso, S. B., Kondratyev, K. Ya., & Posmentier, E. S. (2001). Modeling
climatic effects of anthropogenic carbon dioxide emissions: Unknowns and uncertainties. Climate
Research, 18, 259-275.
Tetlock, P. E. (2005). Expert Political Judgment: How Good Is It? How Can We Know?
Princeton, NJ: Princeton University Press.
Trenberth, K. (2007). Predictions of climate. [Retrieved June 2, 2008 from
http://blogs.nature.com/climatefeedback/2007/06/predictions_of_climate.html].
KESTEN C GREEN