Project

Conflict Forecasting (conflictforecasting.com & terrorismforecasting.com)

Goal: Conduct research to develop better methods for forecasting the decisions of parties in situations involving conflict, including:
buyer-seller negotiations
negotiations among distribution channel members
competitor reactions to new product introductions
industrial disputes
corporate takeovers
inter-communal conflicts
political negotiations
diplomatic and military confrontations
counter-terrorism operations


Project log

Kesten Green
added 2 research items
Simulated interaction was developed to forecast the decisions that people will make in conflict situations such as buyer-seller negotiations, employer-union disputes, commercial competition, hostile takeover bids, civil unrest, international trade negotiations, counter-terrorism, and warfare. These situations can be characterized as conflicts involving a small number of parties that are interacting with each other, perhaps indirectly. There is often a great deal of money at stake in such situations and, in the cases of civil unrest, terrorism, and warfare, lives. And yet predictions are typically made using unaided judgment. Research has shown that simulated interaction provides forecasts that are more accurate than those from unaided experts.
The structured-analogies method is likely to be useful for forecasting whenever experts know about similar situations from the past or when databases of situations that are more-or-less analogous to the target are available. The method was developed to forecast the decisions that people will make in conflict situations such as buyer-seller negotiations, employer-union disputes, commercial competition, hostile takeover bids, civil unrest, international trade negotiations, counter-terrorism, and warfare. Decisions in conflict situations are difficult to forecast: when experts use their unaided judgment to make predictions about such situations, their forecasts are no better than guessing. The structured-analogies method makes better use of experts by eliciting in a formal way (i) their knowledge about situations that were similar to the target situation, and (ii) their judgments of the similarity of these situations to the target. An administrator then analyzes the information the experts provide to derive forecasts. Research to date suggests that these forecasts are likely to be more accurate than forecasts from experts' unaided judgment. The materials in this course mostly relate to the problem of conflict forecasting. For other applications, such as predicting software costs or forecasting demand, the tasks of formally describing the target situation and identifying and rating analogies will be more straightforward because the structures of these situations are likely to be relatively homogeneous.
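The aggregation step described above, in which an administrator derives a forecast from the analogies and similarity ratings that experts supply, can be illustrated with a minimal sketch. The abstracts do not prescribe a single formula, so the similarity-weighted vote below is only one plausible rule, and the analogy data are hypothetical.

```python
from collections import defaultdict

def structured_analogies_forecast(analogies):
    """Derive a forecast from experts' rated analogies.

    Each analogy is a (similarity, outcome) pair: the expert's rating of how
    closely the analogous situation resembles the target (e.g. 0-10) and the
    decision observed in that situation. This sketch uses a similarity-weighted
    vote over outcomes; the administrator in the structured-analogies studies
    may have applied a different rule, such as taking each expert's top-rated
    analogy.
    """
    votes = defaultdict(float)
    for similarity, outcome in analogies:
        votes[outcome] += similarity
    return max(votes, key=votes.get)

# Hypothetical analogies for a buyer-seller negotiation (illustrative only).
analogies = [
    (8, "agreement reached"),
    (6, "negotiations break down"),
    (7, "agreement reached"),
    (3, "negotiations break down"),
]
print(structured_analogies_forecast(analogies))  # -> "agreement reached"
```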
J. Scott Armstrong
added a research item
People often use analogies when forecasting, but in an unstructured manner. We propose a structured judgmental procedure whereby experts list analogies, rate their similarity to the target, and match outcomes with possible target outcomes. An administrator would then derive a forecast from the information. When predicting decisions made in eight conflict situations, unaided experts' forecasts were little better than chance, at 32% accurate. In contrast, 46% of structured-analogies forecasts were accurate. Among experts who were able to think of two or more analogies and who had direct experience with their closest analogy, 60% of forecasts were accurate. Collaboration did not help.
J. Scott Armstrong
added a research item
Role-playing and unaided opinions were used to forecast the outcomes of three negotiations. Consistent with prior research, role-playing yielded more accurate predictions. In two studies on marketing negotiations, role-playing predictions were correct in 53% of cases, while unaided opinions were correct in only 7% (p < 0.001).
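As a rough illustration of how such a comparison of hit rates can be assessed, the sketch below runs a two-proportion z-test. The counts are hypothetical (the abstract reports only the 53% and 7% rates and the p-value), so the output merely demonstrates the calculation, not the study's actual test.

```python
import math

def two_proportion_z_test(correct1, n1, correct2, n2):
    """Two-sided z-test for the difference between two proportions."""
    p1, p2 = correct1 / n1, correct2 / n2
    pooled = (correct1 + correct2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return z, p_value

# Hypothetical counts chosen only to match the reported rates:
# 16 of 30 role-playing predictions correct (~53%), 2 of 30 unaided (~7%).
z, p = two_proportion_z_test(16, 30, 2, 30)
print(f"z = {z:.2f}, p = {p:.4f}")
```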
J. Scott Armstrong
added 3 research items
In 1975, a consortium sponsored by the Argentine government tried to purchase the stock of the British-owned Falkland Islands Company, a monopoly that owned 43 percent of the land in the Falklands, employed 51 percent of the labor force, had a monopoly on all wool exports, and operated the steamship run to South America. The stockholders were willing to sell, especially because the Argentine consortium was reportedly willing to pay “almost any price.” But the British government stepped in to prevent the sale (Murray N. Rothbard, as quoted in The Wall Street Journal, 8 April 1982). In my opinion, the actual solution in the Falklands War left both sides worse off than before. In contrast, a sale of the Falklands would have benefited both sides in the short run, and, as companies seldom wage shooting wars, this would probably have been a good long-range solution. Apparently, Britain did not predict how the Argentine generals would act when it blocked the sale, and the Argentine generals did not predict how Britain would respond when they occupied the islands. Accurate forecasting by each side in this situation might have led to a superior solution. This study examines the evidence on alternative procedures that can be used to forecast outcomes in conflict situations. I first define what is meant here by conflict situations. Next, I describe alternative forecasting methods. This is followed by a presentation of hypotheses on which method is more appropriate. The evidence is reviewed in two stages: first the prior research, then research that we have done.
Policymakers need to know whether prediction is possible and, if so, whether any proposed forecasting method will provide forecasts that are substantially more accurate than those from the relevant benchmark method. An inspection of global temperature data suggests that temperature is subject to irregular variations on all relevant time scales, and that variations during the late 1900s were not unusual. In such a situation, a "no change" extrapolation is an appropriate benchmark forecasting method. We used the UK Met Office Hadley Centre's annual average thermometer data from 1850 through 2007 to examine the performance of the benchmark method. The accuracy of forecasts from the benchmark is such that even perfect forecasts would be unlikely to help policymakers. For example, mean absolute errors for the 20- and 50-year horizons were 0.18°C and 0.24°C respectively. We nevertheless demonstrate the use of benchmarking with the example of the Intergovernmental Panel on Climate Change's 1992 linear projection of long-term warming at a rate of 0.03°C per year. The small sample of errors from ex ante projections at 0.03°C per year for 1992 through 2008 was practically indistinguishable from the benchmark errors. Validation for long-term forecasting, however, requires a much longer horizon. Again using the IPCC warming rate for our demonstration, we projected the rate successively over a period analogous to that envisaged in their scenario of exponential CO2 growth: the years 1851 to 1975. The errors from the projections were more than seven times greater than the errors from the benchmark method. Relative errors were larger for longer forecast horizons. Our validation exercise illustrates the importance of determining whether it is possible to obtain forecasts that are more useful than those from a simple benchmark before making expensive policy decisions.
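The benchmarking procedure described above can be sketched as follows. The series here is a short made-up list of annual temperature anomalies standing in for the Hadley Centre annual data; the 0.03°C-per-year rate is the IPCC projection mentioned in the abstract, and everything else is an assumption for illustration.

```python
def horizon_errors(series, horizon, trend_per_year=0.03):
    """Compare a 'no change' benchmark with a linear-trend projection.

    For every origin year that has an observed value `horizon` years later,
    the benchmark forecast is the origin value (no change), while the trend
    forecast adds trend_per_year * horizon. Returns the mean absolute error
    of each method over all such origins.
    """
    bench_errs, trend_errs = [], []
    for t in range(len(series) - horizon):
        actual = series[t + horizon]
        bench_errs.append(abs(series[t] - actual))
        trend_errs.append(abs(series[t] + trend_per_year * horizon - actual))
    n = len(bench_errs)
    return sum(bench_errs) / n, sum(trend_errs) / n

# Made-up annual temperature anomalies (deg C); the study itself used the
# Hadley Centre's annual averages for 1850-2007.
anomalies = [-0.30, -0.25, -0.35, -0.20, -0.30, -0.15, -0.25, -0.10,
             -0.20, -0.05, -0.15, 0.00, -0.10, 0.05, 0.00, 0.10]
mae_no_change, mae_trend = horizon_errors(anomalies, horizon=5)
print(f"MAE no-change: {mae_no_change:.3f}, MAE 0.03/yr trend: {mae_trend:.3f}")
```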
Green's study (Int. J. Forecasting, forthcoming) on the accuracy of forecasting methods for conflicts does well against traditional scientific criteria. Moreover, it is useful, as it examines actual problems by comparing forecasting methods as they would be used in practice. Some biases exist in the design of the study, and they favor game theory. As a result, the accuracy gain of game theory over unaided judgment may be illusory, and the advantage of role playing over game theory is likely to be greater than the 44% error reduction found by Green. The improved accuracy of role playing over game theory was consistent across situations. For those cases that simulated interactions among people with conflicting roles, game theory was no better than chance (28% correct), whereas role playing was correct in 61% of the predictions.
Kesten Green
added 15 research items
Problem: How to help practitioners, academics, and decision makers use experimental research findings to substantially reduce forecast errors for all types of forecasting problems.
Methods: Findings from our review of forecasting experiments were used to identify methods and principles that lead to accurate forecasts. Cited authors were contacted to verify that summaries of their research were correct. Checklists were developed to help forecasters and their clients undertake and commission studies that adhere to principles and use valid methods. Leading researchers were asked to identify errors of omission or commission in the analyses and summaries of research findings.
Findings: Forecast accuracy can be improved by using one of 15 relatively simple evidence-based forecasting methods. One of those methods, knowledge models, provides substantial improvements in accuracy when causal knowledge is good. On the other hand, data models – developed using multiple regression, data mining, neural nets, and “big data analytics” – are unsuited for forecasting.
Originality: Three new checklists for choosing validated methods, developing knowledge models, and assessing uncertainty are presented. A fourth checklist, based on the Golden Rule of Forecasting, was improved.
Usefulness: Combining forecasts within individual methods and across different methods can reduce forecast errors by as much as 50%. Forecast errors from currently used methods can be reduced by increasing their compliance with the principles of conservatism (Golden Rule of Forecasting) and simplicity (Occam’s Razor). Clients and other interested parties can use the checklists to determine whether forecasts were derived using evidence-based procedures and can, therefore, be trusted for making decisions. Scientists can use the checklists to devise tests of the predictive validity of their findings.
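The claim above that combining forecasts can reduce errors substantially can be illustrated with a minimal sketch: average the forecasts from several methods and compare the combined error with each method's own error. The forecasts and outcomes below are hypothetical and serve only to show the mechanics, not to reproduce the 50% figure.

```python
def mae(forecasts, actuals):
    """Mean absolute error of a set of forecasts against outcomes."""
    return sum(abs(f - a) for f, a in zip(forecasts, actuals)) / len(actuals)

def combine(*forecast_sets):
    """Equal-weight average of forecasts from several methods."""
    return [sum(vals) / len(vals) for vals in zip(*forecast_sets)]

# Hypothetical forecasts from three methods and the observed outcomes.
actuals  = [100, 110, 105, 120]
method_a = [ 90, 118, 100, 128]
method_b = [108, 100, 112, 115]
method_c = [ 95, 115,  98, 126]

combined = combine(method_a, method_b, method_c)
for name, fc in [("A", method_a), ("B", method_b), ("C", method_c), ("combined", combined)]:
    print(f"{name}: MAE = {mae(fc, actuals):.1f}")
```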
This article proposes a unifying theory, or the Golden Rule, of forecasting. The Golden Rule of Forecasting is to be conservative. A conservative forecast is consistent with cumulative knowledge about the present and the past. To be conservative, forecasters must seek out and use all knowledge relevant to the problem, including knowledge of methods validated for the situation. Twenty-eight guidelines are logically deduced from the Golden Rule. A review of evidence identified 105 papers with experimental comparisons; 102 support the guidelines. Ignoring a single guideline increased forecast error by more than two-fifths on average. Ignoring the Golden Rule is likely to harm accuracy most when the situation is uncertain and complex, and when bias is likely. Non-experts who use the Golden Rule can identify dubious forecasts quickly and inexpensively. To date, ignorance of research findings, bias, sophisticated statistical procedures, and the proliferation of big data have led forecasters to violate the Golden Rule. As a result, despite major advances in evidence-based forecasting methods, forecasting practice in many fields has failed to improve over the past half-century.
This article introduces this JBR Special Issue on simple versus complex methods in forecasting. Simplicity in forecasting requires that (1) method, (2) representation of cumulative knowledge, (3) relationships in models, and (4) relationships among models, forecasts, and decisions are all sufficiently uncomplicated as to be easily understood by decision-makers. Our review of studies comparing simple and complex methods - including those in this special issue - found 97 comparisons in 32 papers. None of the papers provide a balance of evidence that complexity improves forecast accuracy. Complexity increases forecast error by 27 percent on average in the 25 papers with quantitative comparisons. The finding is consistent with prior research to identify valid forecasting methods: all 22 previously identified evidence-based forecasting procedures are simple. Nevertheless, complexity remains popular among researchers, forecasters, and clients. Some evidence suggests that the popularity of complexity may be due to incentives: (1) researchers are rewarded for publishing in highly ranked journals, which favor complexity; (2) forecasters can use complex methods to provide forecasts that support decision-makers’ plans; and (3) forecasters’ clients may be reassured by incomprehensibility. Clients who prefer accuracy should accept forecasts only from simple evidence-based procedures. They can rate the simplicity of forecasters’ procedures using the questionnaire at simple-forecasting.com.
Kesten Green
added a project goal