Structured Analogies for Forecasting
Kesten C. Green,* Department of Econometrics and Business Statistics,
Monash University, VIC 3800, Australia
e-mail: kesten@kestencgreen.com
Phone +61-3-990-55438
J. Scott Armstrong, The Wharton School, University of Pennsylvania
Philadelphia, PA 19104
e-mail: Armstrong@wharton.upenn.edu
Phone 610-622-6480
Fax 215-898-2534
Version 107
September 16, 2004
Structured Analogies for Forecasting
When people forecast, they often use analogies but in an unstructured manner. We propose a structured judgmental
procedure that involves asking experts to list as many analogies as they can, rate how similar the analogies are to the
target situation, and match the outcomes of the analogies with possible outcomes of the target. An administrator
would then derive a forecast from the experts’ information. We compared structured analogies with unaided
judgments for predicting the decisions made in eight conflict situations. These were difficult forecasting problems;
the 32% accuracy of the unaided experts was only slightly better than chance. In contrast, 46% of structured
analogies forecasts were accurate. Among experts who were independently able to think of two or more analogies
and who had direct experience with their closest analogy, 60% of forecasts were accurate. Collaboration did not
improve accuracy.
Key words: accuracy, analogies, collaboration, conflict, expert, forecasting, judgment.
* Corresponding author. Address correspondence to e-mail address or to Department of Econometrics and Business
Statistics, PO Box 11E, Monash University, Victoria 3800, Australia.
It seems natural to use analogies when making decisions or forecasts as, by definition, they contain information
about how people have behaved in similar situations in the past. For example, Breuning (2003) found that one-third
of testimony at the Senate hearing on proposals for the first U.S. program for development aid was based on
analogies. Khong (1992) concluded that most of the decisions made early in the Vietnam War were based on
forecasts derived from analogies. Indeed, Kokinov (2003, p. 168) asserts “…we may explain human behavior by
assuming that decisions are made by analogy with previous cases…” In the belief that such information is useful,
MIT professor Lincoln P. Bloomfield has developed a historical database of post-World War II conflicts
(web.mit.edu/cascon) in order to help policy analysts and others identify appropriate analogies.
We agree that information about analogies should be useful for forecasting, but we suspect that without structure
people rush to forecast. To the extent that they think about analogies, they will find one that supports their forecast
and then stop their search. For example, when the U.S. Environmental Protection Agency approved a new oil
refinery in Eastport, Maine, decision makers relied on the analogy of Milford Haven in the U.K. (Stewart and
Leschine, 1986). The EPA decision makers considered Milford Haven to be the most comparable site and looked no
further, but Stewart and Leschine observed that Milford Haven had not been in operation long enough to provide
evidence that it was safe. They were right. The supertanker Sea Empress ran aground near Milford Haven on 15
February, 1996, spilling 70,000 tonnes of crude oil (Canada Centre for Remote Sensing, 1996). Neustadt and May
(1986) described how inappropriate selection and inadequate analysis of analogies led U.S. government decision
makers to make poor forecasts of the decisions of other governments’ leaders. Drawing on their litany of poor
decisions by political leaders, they described an elaborate structured approach to analyzing current and historical
information that they suggested should lead to a more effective use of experts’ knowledge and hence to improved
forecasts.
Research in many areas of judgmental decision making and forecasting has shown that structured judgmental processes
make more effective use of the information people possess. This occurs, for example, when people are asked explicitly to
decompose a problem (MacGregor, 2001). More generally, Armstrong (1985, Chapter 6) summarizes evidence that
structured methods of judgmental forecasting are more accurate than unstructured ones. A structured approach to
forecasting with analogies, then, might encourage experts to consider more information on analogies, and to process
it in an effective way.
Kahneman and Lovallo (1993) report an anecdote that illustrates how inducing an expert to use analogies in a
structured way can affect predictions. Kahneman had worked with a small team of academics to design a new
judgmental decision making curriculum for Israeli high schools. He asked each team member to predict the number
of months it would take them to prepare a proposal for the Ministry of Education. Predictions ranged from 18 to 30
months. Kahneman then turned to a member of the team who had considerable experience developing new curricula
and asked him to think of analogous projects. After some consideration, the man stated that, among the many
analogous situations he could recall, about 40% of the teams eventually gave up. Of those that completed the task,
he said, none did so in less than seven years. Furthermore, he thought that the present team was probably below
average in terms of resources and potential. In the event, the project took eight years to complete.
Procedure for forecasting with structured analogies
Because the literature provides no evidence on how to structure forecasting with analogies, we started with a simple
procedure. If analogies are useful, it is because they are similar to a target. Imposing structure on experts’
assessments of similarity should encourage more complete processing of information and reduction of biases. We
also wanted a procedure that would be easy for experts to use. At a minimum then, a structured approach to using
analogies for forecasting requires experts to identify analogies and the outcomes they imply for the target, and to
assess the analogies’ similarity to the target in a structured way. Our structured analogies procedure involves five
steps, two of which involve experts analyzing analogies. The administrator (1) describes the target situation, and (2)
selects experts; the experts each (3) identify and describe analogies, and (4) rate similarity; the administrator (5)
derives forecasts.
(1) Describe the target situation
The administrator prepares an accurate, comprehensive, and brief description. To do so, the administrator should
seek advice either from unbiased experts or from experts with opposing biases. When feasible, include a list of
possible outcomes for the target situation to make coding easier.
(2) Select experts
The administrator recruits experts who are likely to know about situations that are similar to the target situation. The
administrator should decide how many experts to recruit based on how much knowledge they have about analogous
situations, the variability in responses among experts, and the importance of obtaining accurate forecasts. Drawing
upon the research on the desirable number of forecasts to combine, we suggest enlisting the help of at least five
experts (Armstrong, 2001).
(3) Identify and describe analogies
Ask the experts to describe as many analogies as they can without considering the extent of the similarity to the
target situation.
(4) Rate similarity
Ask the experts to list similarities and differences between their analogies and the target situation, and then to rate
the similarity of each analogy to the target. We suggest providing a scale against which the experts can rate the
similarity of their analogies. Ask them to match their analogies’ outcomes with target outcomes.
(5) Derive forecasts
To promote logical consistency and replicability, the administrator should decide on the rules to derive a forecast
from experts’ analogies. Many rules are reasonable to use. For example, one could select the analogy that the expert
rated as most similar to the target and adopt the outcome implied by that analogy as the forecast.
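To illustrate, the example rule just described can be expressed in a few lines of code. This is a minimal sketch, not the authors' implementation, and the analogy records and outcome labels are hypothetical.

```python
# Minimal sketch of the step-5 rule: adopt the outcome implied by the
# analogy the expert rated most similar to the target situation.
# (Hypothetical data; not the authors' actual implementation.)

def derive_forecast(analogies):
    """analogies: list of (implied_outcome, similarity_rating) pairs
    from a single expert. Returns the forecast outcome."""
    if not analogies:
        return None  # the expert could think of no analogies
    # Pick the analogy with the highest similarity rating (0-10 scale).
    outcome, _rating = max(analogies, key=lambda pair: pair[1])
    return outcome

# Example: the analogy rated 8 implies "negotiated settlement".
expert_analogies = [("strike", 5), ("negotiated settlement", 8)]
print(derive_forecast(expert_analogies))  # -> negotiated settlement
```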
We examined predictive validity using conflicts. Prior research has shown that the method currently used for making
predictions for conflicts, unaided judgment, produces inaccurate forecasts (see, for example, Green 2002). We
hypothesized that forecasts derived from experts’ structured analysis of analogies would be more accurate than
forecasts by experts who used their unaided judgment.
Our structured analogies procedure is based on the assumption that while unaided experts can provide useful
information, they are not good at processing complex information reliably. For that reason, we did not rely on the
experts to make forecasts but instead used a rule. On the other hand, perhaps experts’ understanding of their own
analogies might enable them to forecast more accurately than we could by using rules. To test this aspect of our
procedure, we asked our experts to predict the decision made in the target situation after they had described and
rated their analogies.
Does it help if experts collaborate and discuss analogies with others? Collaboration could help experts to produce
more analogies and flesh out the details, or it could hinder them by suppressing their creativity and search. Both
positions are reasonable, so we had no prior hypothesis on collaboration. We asked some experts to collaborate with
others, and we asked all experts to report the number of people with whom they discussed the forecasting problem.
Prior evidence
We searched for evidence on methods for forecasting with analogies. Schrodt (2002) searched for empirical
evidence on the accuracy of forecasts for decisions in conflicts in the foreign policy arena. He found no evidence on
the accuracy of forecasts based on analogies relative to that of forecasts based on any other method.
In a marketing study, McIntyre, Achabal and Miller (1993) tested a procedure called case-based reasoning, which is
a way to structure analogies, for forecasting sales during sales promotions. When tested on two products, the
forecasts were no more accurate than those of an expert buyer.
We conducted a further search for evidence by using the Social Sciences Citation Index (SSCI) for the period 1978
to August 24, 2004 using the terms “analogies” and “forecasting,” and then “analogies” and “prediction.” We
searched the Internet on August 24, 2004 using Google™ and the terms “comparative”, “forecasting,” “prediction,”
“accuracy,” and “analogies”. We conducted similar searches on JSTOR. In November 2001, we sent e-mail appeals
to the 278 members of the International Institute of Forecasters list server and to the 579 members of the Judgment
and Decision Making mailing list. We also contacted key researchers. The only relevant study we uncovered was
Buehler, Griffin, and Ross’s (1994). The authors asked 123 participants to estimate how long it would take to
complete a computer assignment. Their predictions, made using unaided judgment, were inaccurate as they were
overly optimistic. Participants who had been asked to think of analogous situations were less biased, especially
when they described how the analogies related to the assignment. As a consequence, proportionately twice as many
of those who recalled analogies finished their assignments before their estimated completion times.
In sum, prior to the research we describe here, little evidence was available on the accuracy of forecasts based on the
use of analogies relative to the accuracy of forecasts made using other methods. Furthermore, no prior evidence
exists on the use of structured analogies.
Procedures used for the study
Preparing materials
We compiled descriptions of conflicts, including brief descriptions of the roles of the parties involved in the conflict.
The conflict descriptions were accounts of real situations. We abstracted all but one (Personal Grievance) from mass
media reports or experts’ accounts. The lead author developed the Personal Grievance from information collected in
interviews and from exchanges of e-mail messages with the parties involved in the dispute. In the case of Nurses
Dispute, he gathered information from published sources (Langdon, 2000a; 2000b; 2000c; Radio New Zealand,
2000a; 2000b; 2000c) and by interviewing representatives of the two disputant parties. When we considered it
necessary, we disguised the conflicts that had already occurred to reduce the chance that our participants would
know the outcomes. As a precaution, we asked our experts whether they recognized the situations. In eight cases,
experts correctly identified a conflict, and their responses were eliminated.
In all, we used eight conflict situations in our research. We provided between three and six possible outcome options
for each of them (Table 1). Our descriptions were short, running to no more than two pages. The full descriptions are
provided at conflictforecasting.com. [For reviewers, descriptions are attached as Reviewer Appendix 1 and outcome
options as Reviewer Appendix 2.] The materials, identity of the disguised conflicts, and descriptions of actual
outcomes are available to researchers on request.
Table 1
Conflict Situations
Artists protest: Members of a rich nation’s artists’ union occupied a major gallery and demanded generous financial
support from their government. What will be the final resolution of the artists’ sit-in? (6 options)
Distribution channel: An appliance manufacturer proposed to a supermarket chain a novel arrangement for
retailing its wares. Will the management of the supermarket chain agree to the plan? (3 options)
55% Pay plan: Professional sports players demanded a 55 percent share of gross revenues and threatened to go on
strike if the owners didn’t concede. Will there be a strike and, if so, how long will it last? (4 options)
Nurses dispute: Angry nurses increased their pay demand and threatened more strike action after specialist nurses
and junior doctors received big increases. What will the outcome of their negotiations be? (3 options)
Personal grievance: An employee demanded a meeting with a mediator when her job was downgraded after her
new manager re-evaluated it. What will the outcome of the meeting with the mediator be? (4 options)
Telco takeover: An acquisitive telecommunications provider, after rejecting a seller’s mobile business offer, made a
hostile bid for the whole corporation. How will the standoff between the companies be resolved? (4 options)
Water dispute: Troops from neighboring nations moved to their common border, and the downstream nation
threatened to bomb the upstream nation’s new dam. Will the upstream neighbor agree to release additional water
and, if not, how will the downstream nation’s government respond? (3 options)
Zenith investment: Under political pressure, a large manufacturer evaluated an investment in expensive new
technology. How many new manufacturing plants will it decide to commission? (3 options)
Selecting experts
To select experts, we sent e-mail messages to ten public list servers, two organizations’ e-mail lists, the faculty of a
university political science department, and a convenience sample of 15 experts. We chose lists that were likely to
include high proportions of experts on conflicts or on judgmental forecasting. We took additional steps to ensure
people were suitably qualified for these tasks. In our appeals, which were personalized when possible, the lead
author wrote “I am writing to you because you are an expert…” and “I am engaged in a research project on the
accuracy of different methods for predicting the outcomes of conflicts…” (Appendix A). We sent only descriptions
of conflicts that were likely to be relevant to the particular recipients. For example, we did not send a situation
dealing with a proposed new marketing channel to experts in employment relationship disputes. Most importantly,
we counted on people to recognize when they had expertise on a topic.
We sent as many as three reminders. Details of the lists and participation are provided at conflictforecasting.com.
[For the purpose of review, the details are attached as Reviewer Appendix 3.]
Using the methods
In our e-mail appeal, we gave experts instructions on how to participate (Appendix A). For structured-analogies
participants, our one-page questionnaires asked the experts to (1) describe each analogous situation; (2) describe
their source of knowledge about it; (3) list similarities and differences compared to the target conflict; and (4)
provide an overall similarity rating (where 0 = no similarity… 5 = similar…10 = high similarity). Finally, we asked
the experts to select (from a list of possible outcomes that we prepared for each target conflict) the outcome closest
to the outcome of their analogy. To illustrate, a completed structured-analogies treatment questionnaire for one of the
conflicts, Telco Takeover is provided as Appendix B.
Questionnaires for unaided-judgment participants first asked them to select the outcome they thought would occur.
We gave them the same lists of possible outcomes that we gave to the structured-analogies participants.
We varied the order in which we attached the conflict documents to our e-mail appeals. To test our hypotheses, with
our appeals we sought responses for each of the following treatments:
1. unaided judgment (no instructions on how to forecast) without collaboration,
2. unaided judgment with collaboration,
3. structured analogies without collaboration,
4. structured analogies with collaboration.
For our first appeal, we sent equal numbers of each treatment to members of the International Association of
Conflict Management mailing list. The structured-analogy and collaboration treatments were more onerous for
participants than unaided judgment, so we obtained relatively few responses for those treatments. As a consequence,
in most of our subsequent appeals we sought responses for structured analogies with collaboration. Finally, we
sought responses for combinations of conflict and treatment for which we needed more forecasts. Because we were
seeking participants for their expertise, rather than as part of a representative sample of some larger group, random
assignment to treatments was unnecessary. The form of collaboration was at the discretion of the participants.
Coding responses
We obtained two groups of unaided-judgment forecasts from experts. One was from the unaided-judgment treatment
(62 forecasts), and the other from experts who were asked to use structured analogies but could think of no analogies
(44 forecasts). We analyzed results separately for each group and found the forecasts were similar, with the latter
group’s being somewhat more accurate. We combined the two groups under the title “unaided judgment” for our analyses,
reasoning that neither of these groups used structured analyses and that our action favored unaided judgment relative
to the structured analogies method.
For each conflict, we derived a structured-analogies forecast from each expert’s analogy information, where the
information was available. It is trivial to derive a forecast from analogies information when an expert provides a
single analogy. On the other hand, many mechanical schemes could be used to derive a forecast when an expert
provides information on more than one analogy. To obtain a forecast, we selected the target conflict outcome implied
by the analogy given the highest similarity rating by the expert. Our reasoning was that predictive validity should
increase with relative similarity. Where there was a tie, we selected the outcome that had the most support from the
expert’s analysis of analogies. (Details on the rules for determining support are provided at conflictforecasting.com).
[For the purpose of review, details of the rules are attached as Reviewer Appendix 4.] Given our uncertainties about
the best procedure, we subsequently analyzed other mechanical schemes.
We asked a convenience sample of five people who knew the actual outcomes of the conflicts to rate the outcome
options we provided to the research participants. The raters were told that an option that matched the actual outcome
of a conflict should be given a rating of 10. Forecasts were counted as accurate if the outcome option chosen by our
rule was the option that had been given the highest median rating by our raters. Outcome options were unconditional
statements of decisions and did not specify timing, for example, “Expander’s takeover succeeded at, or close to,
their August 14 offer price of $43-per-share.”
As Tetlock (1999) demonstrated, it is difficult for experts to forecast decisions made in conflict situations. He found
that forecasts by 20 experts of the outcomes of foreign-policy conflicts were no more accurate than could be
expected from chance. Our results were similar. Our 66 unaided experts were correct for 32% of predictions in an
unweighted average across the eight conflicts (Table 2).
As hypothesized, forecasts from structured analogies were more accurate. They were more accurate for seven of the
eight conflicts. Averaging the accuracy figures across the conflicts, structured-analogies forecasts were 46% accurate
(P = 0.04, one-tailed permutation test for paired replicates; Siegel and Castellan, 1988). We used the permutation
test for paired replicates to compare the differences in the percentage of correct forecasts between the two methods
for each conflict (e.g., for Artists Protest, the difference between structured analogies and unaided judgment was
17%). Viewed another way, structured analogies reduced the average forecast error by 21% (where forecast error is
the percentage of forecasts that were wrong)1.
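With only eight conflicts, the permutation test for paired replicates can be computed exactly by enumerating all 2^8 sign assignments of the per-conflict accuracy differences. The sketch below uses the differences implied by Table 2 (structured analogies minus unaided judgment); it is our illustration of the test, not the authors' code.

```python
# One-tailed permutation test for paired replicates: under the null
# hypothesis, each per-conflict accuracy difference is equally likely
# to have either sign. With 8 conflicts, all 2**8 = 256 sign patterns
# can be enumerated exactly. Differences are structured analogies minus
# unaided judgment, in percentage points, taken from Table 2.
from itertools import product

diffs = [8, 17, 39, 5, 2, 12, 42, -16]
observed = sum(diffs)

count = 0
for signs in product((1, -1), repeat=len(diffs)):
    if sum(s * d for s, d in zip(signs, diffs)) >= observed:
        count += 1

p = count / 2 ** len(diffs)
print(round(p, 2))  # -> 0.04, matching the reported P value
```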
Table 2
Accuracy of structured-analogies
and unaided-judgment forecasts by experts
Percent correct forecasts a (number of forecasts)

Conflict                Chance   Unaided judgment   Structured analogies
Telco Takeover            25          0 (8)                8 (12)
Artists Protest           17         10 (20)              27 (11)
55% Pay Plan              25         18 (11)              57 (14)
Personal Grievance        25         31 (13)              36 (14)
Zenith Investment         33         36 (14)              38 (8)
Distribution Channel      33         38 (17)              50 (12)
Water Dispute             33         50 (8)               92 (12)
Nurses Dispute            33         73 (15)              57 (14)
Averages (unweighted)     28         32 (106)             46 (97)
a Bold figures denote the most accurate forecasts for each conflict,
and overall.
Value of experts’ experience
We tested whether structured-analogies forecasts were more accurate when they came from experts with more
experience than from those with less. We used two measures: (1) we asked our experts how many years of experience
they had as “a conflict management specialist,” and (2) we asked them to rate their experience (on a
scale from 0 to 10) with situations similar to the target conflict.
1 We calculate average error reduction figures as {(100 – AC) – (100 – AX)} / (100 – AC) * 100, where AC is the
unweighted average percentage accuracy across conflicts of the comparison forecasts (or chance) and AX is the
corresponding figure for the forecasts of interest.
Structured-analogies forecasts from experts with five or more years of experience as conflict management specialists
were less accurate (21% error reduction versus chance, averaged across conflicts) than those from experts with less
experience (26% error reduction). Furthermore, where experts gave high ratings to their experience with similar
conflicts, their forecasts were less accurate (16% error reduction) than where they gave themselves lower ratings.
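The error-reduction measure defined in footnote 1 is simple to compute; the following sketch uses the paper's own figures as a check.

```python
# Average error reduction as defined in footnote 1:
# {(100 - AC) - (100 - AX)} / (100 - AC) * 100, where AC is the average
# accuracy (%) of the comparison forecasts (or chance) and AX is the
# corresponding figure for the forecasts of interest.

def error_reduction(comparison_accuracy, accuracy_of_interest):
    comparison_error = 100 - comparison_accuracy
    error_of_interest = 100 - accuracy_of_interest
    return (comparison_error - error_of_interest) / comparison_error * 100

# Structured analogies (46% accurate) versus unaided judgment (32%):
print(round(error_reduction(32, 46)))  # -> 21, as reported in the text
```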
Effect of number of analogies
We found that forecasts based on data from experts who could think of two or more (plural) analogies were more
accurate than those based on data from experts who recalled a single analogy for six of the eight conflicts. Accuracy
averaged 38% for forecasts derived from single-analogy data, but 56% for those derived from plural-analogy data (P
= 0.02, one-tailed permutation test for paired replicates).
All else being equal, conflicts with more outcome options are more difficult to forecast than those with fewer
options. To control for this, we examined the reduction in error versus chance. Forecasts based on recall of a single
analogy reduced error by an average of 15% compared to chance, while forecasts derived from plural analogies
reduced error by 39% (Table 3). The difference in error reduction between single-analogy and plural-analogy
forecasts was significant (P = 0.02, one-tailed permutation test for paired replicates). The error was reduced by 42%
versus chance by accepting data only from experts who described three or more analogies. Thus the usefulness of an
individual expert was related to the number of analogies he described.
Table 3
Accuracy of forecasts by number of analogies
Percent error reduction versus chance a (number of forecasts)

Conflict                 None b      One only    Two or more
Telco Takeover           -33 (8)     -33 (5)     -14 (7)
55% Pay Plan             -33 (2)      26 (9)      73 (5)
Distribution Channel     -19 (5)       0 (6)      50 (6)
Artists Protest           -3 (7)      -3 (7)      40 (4)
Personal Grievance        20 (5)       0 (8)      33 (6)
Water Dispute             25 (8)     100 (4)      81 (8)
Zenith Investment         25 (6)     -12 (4)      25 (4)
Nurses Dispute           100 (3)      40 (10)     25 (4)
Average error reduction   10 (44)     15 (53)     39 (44)
Average % correct         34          38          56
a Bold figures denote the most accurate forecasts for each conflict, and overall.
b Forecasts from experts we asked to use the structured analogies method, who were
unable to think of analogies. We classified these forecasts as unaided judgment
forecasts in all our other analyses.
Effect of experts’ familiarity with their analogies
We expected that the information experts provided would be more useful the more closely involved they had been in
the analogous situations they identified, because they would be likely to know more about the situations. For
example, someone who was an adult during the Vietnam War is likely to know more about that situation than
someone born since, and someone who fought in the war is likely to know more again. To examine this, we
identified forecasts that had been based on analogies from either the experts’ own experiences (45 forecasts) or those
of close others (5 forecasts based on the experiences of, for example, a wife or brother-in-law). In an unweighted
average across the eight conflicts, these direct-experience forecasts were more accurate (49%) than the 45 forecasts
based on analogies from third-party accounts (37%); P = 0.07, one-tailed permutation test for paired replicates.
Viewed another way, forecasts based on analogies from the experts’ direct experience reduced the average error across
conflicts by 31% (compared to chance), while forecasts based on indirect experience provided only 13%
error reduction.
Familiarity and plural analogies
The ideal situation when forecasting with structured analogies is to find experts who can think of many analogies
with which they have had direct experience. When our experts were able to think of two or more analogies and they
had direct experience with the analogy that was most similar to the target, structured analogies forecasts were 60%
accurate (23 forecasts). In other cases, 72 forecasts were 39% accurate (P = 0.04, one-tailed permutation test for
paired replicates).
Mechanical schemes to derive forecasts
We wondered whether experts who had used the structured analogies process would then provide forecasts that were
more accurate than those of unaided experts. They did. Their predictions were on average 42% accurate (94 forecasts)
compared to
32% for unaided-judgment forecasts (P = 0.06, one-tailed permutation test for paired replicates). As we anticipated,
however, a structured mechanical process was more effective for deriving forecasts from the experts’ analogies
information than experts’ own judgments. As we have seen, structured-analogies forecasts were 46% accurate. Why
the difference when experts derived their own forecasts? Analogies are only useful if they are used. In 22 cases,
experts made forecasts that were inconsistent with the outcomes of their own analogies; of these, 25% were
accurate. When the mechanical rule was used to derive forecasts from these experts’ analogies, 45% were accurate.
When experts thought of more than one analogy, our mechanical scheme did not use all of the analogical information
to make predictions. We tested four alternative approaches in order to determine whether we would improve
accuracy further if we derived combined forecasts from all of the 210 analogies with similarity ratings and implied
decisions. For example, if an expert provided information on three analogies, for the purpose of testing our four
combining alternatives we effectively derived three forecasts instead of the one we would have derived using the
structured analogies method.
For our first alternative, we used the outcome implied by the most analogies, and obtained an average accuracy of
40% across all conflicts, compared to 46% for the approach we had adopted. For the second, instead of assuming
that the analogies were all of equal value as we did for the first alternative, for each conflict we chose the option
with the highest aggregate similarity rating as the forecast (39% accurate). For the third alternative, we reallocated
each expert’s total number of analogies for a conflict to outcome options in proportion to each option’s share of the
expert’s similarity ratings (40%). For the fourth alternative, we calculated each expert’s average similarity rating for
each option; we then allocated all of each expert’s analogies to each option in proportion to the average similarity
rating for the decision as a fraction of the sum of the expert’s average similarity ratings (39%). In sum, all of these
alternatives provided forecasts that were less accurate than those derived by applying the mechanical scheme that we
had specified prior to testing the accuracy of structured analogies.
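To make the first two alternatives concrete, here is a sketch using hypothetical analogy data; the remaining two alternatives are proportional-allocation variants of the same idea.

```python
# Sketch of two of the alternative combining schemes (hypothetical data;
# not the authors' code).
# Alternative 1: forecast the outcome implied by the most analogies.
# Alternative 2: forecast the outcome with the highest aggregate
# similarity rating across an expert's analogies.
from collections import Counter, defaultdict

# Each record is (implied_outcome, similarity_rating) from one expert.
analogies = [("settle", 4), ("settle", 3), ("strike", 9)]

def modal_outcome(analogies):
    counts = Counter(outcome for outcome, _ in analogies)
    return counts.most_common(1)[0][0]

def highest_aggregate_similarity(analogies):
    totals = defaultdict(int)
    for outcome, rating in analogies:
        totals[outcome] += rating
    return max(totals, key=totals.get)

print(modal_outcome(analogies))                 # -> settle (2 analogies vs 1)
print(highest_aggregate_similarity(analogies))  # -> strike (9 vs 4 + 3 = 7)
```

Note how the two schemes can disagree on the same data, which is why it matters that the rule be specified before accuracy is tested.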
Effect of collaboration
While we had no directional hypothesis about collaboration, we analyzed the data to see whether collaboration
among experts was useful. When experts using structured analogies collaborated with others, their median working
time was 45 minutes compared to 30 minutes for those who worked alone. (We do not know how much time the
collaborators spent on the task, nor do we know the nature of their collaboration.) As it happened, those who
collaborated claimed to have had much more experience with conflict-management (median of 14 years versus 5
years) and experience with similar conflicts (a median self-rating of 4.0 out of 10, versus 2.8). Despite the greater
investment of resources by more knowledgeable experts, collaboration produced no gain in accuracy: Forecasts from
solo experts were on average 44% accurate across conflicts (75 forecasts), compared to 42% for forecasts by
collaborating experts (22 forecasts).
Given our findings, we saw no need to distinguish between solo and collaborative forecasts in our analysis. In view
of the time savings, we recommend that structured analogies be done by individuals.
The structured analogies method is useful only in cases in which experts can think of analogies. This limitation can
be overcome in many situations by identifying people with relevant expertise. While expertise may be difficult to
gauge in advance, one can assess it from people’s responses – that is, did they provide analogies, if so, how
many, and did they have direct experience?
Using structured analogies is more costly than using unaided judgment. However, relative to the costs of making bad
decisions in many conflict situations, such as selecting strategies to achieve peace in the Middle East or to deal with
threatening behavior by the North Korean government, the costs are negligible.
Further research
Research on additional situations would help to better assess the improvements that might be expected, and the
conditions under which structured analogies is most effective. Our conclusions are based on a sample of only eight
conflict situations.
This is the first published study on the use of structured analogies. More research needs to be done to develop the
operational procedures for the method. For example, what is the best way to frame the issues for the experts so that
they provide more and better analogies? Would a more structured approach to rating analogies’ similarity to a target
help administrators derive more accurate forecasts? To what extent might improvements in accuracy be obtained, in
the case of well-documented analogies, by checking the facts of the situation and correcting any errors in experts'
matching of analogy outcomes with potential target outcomes?
It seems plausible that the Delphi technique could be used to improve assessments of analogies’ similarity to a
target, potentially increasing accuracy further at a low cost. Rowe and Wright (2001) provide evidence on the value
of Delphi, and software for implementing Delphi is provided at forecastingprinciples.com. Experts' confidence
ratings may be useful for weighting structured-analogies forecasts in a combination (Arkes, 2001).
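One simple way to use confidence ratings in a combination, as suggested above, is a confidence-weighted vote across experts' forecasts. The function and data below are an illustrative sketch, not the paper's procedure.

```python
from collections import defaultdict

def combine_forecasts(forecasts):
    """forecasts: list of (predicted_option, confidence_0_to_10) pairs.
    Returns the option with the greatest confidence-weighted vote."""
    weights = defaultdict(float)
    for option, confidence in forecasts:
        weights[option] += confidence
    return max(weights, key=weights.get)

# Hypothetical structured-analogies forecasts from five experts.
forecasts = [("c", 8), ("c", 6), ("d", 9), ("b", 3), ("c", 5)]
print(combine_forecasts(forecasts))  # "c" (weight 19 vs. 9 for "d")
```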
We have examined conflict situations because of their importance and the difficulty of obtaining useful forecasts.
Structured analogies might also improve forecasting for situations other than conflicts. We expect that it would be
most useful where situations are complex and where there are plural analogies.
Research is needed on how to encourage adoption of structured analogies. Currently, people use unaided judgment,
a method that is little better than chance, to decide whether to go to war, get a divorce, make a hostile takeover bid,
go on strike, or mount a competitive pricing campaign. Better forecasts would aid decision making in such
situations.
Conclusions
It is difficult to forecast decisions made in conflict situations. On average, unaided experts were correct for only
32% of their predictions. This was little better than chance at 28%.
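A chance benchmark below an even split arises because the conflicts offered differing numbers of decision options: picking an option uniformly at random, expected accuracy is the mean of 1/k across situations. The option counts below are hypothetical, chosen only to illustrate how a benchmark near 28% can arise; they are not the paper's actual counts.

```python
# Expected accuracy of uniform random guessing across conflicts with
# differing numbers of decision options: the mean of 1/k over situations.
# Hypothetical option counts for eight conflicts.
option_counts = [3, 3, 3, 4, 4, 4, 4, 4]
chance = sum(1 / k for k in option_counts) / len(option_counts)
print(round(chance, 3))  # 0.281
```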
For our structured analogies method, the two key criteria for identifying an expert were the number of analogies
generated, and the presence of direct knowledge about those analogies. When experts produced two or more
analogies from experience, forecasts from structured analogies were correct for 60% of the predictions. Given the
importance of forecasts in conflict situations and in other arenas, such improvement could have considerable value.
Appendix A
E-mail message appeal and instructions: Structured analogies / collaboration treatment
Subject: Using analogies to predict the outcomes of conflicts
Dear Dr _____________
I am writing to you because you are an expert on _________. I am engaged on a research project on the accuracy of
different methods for predicting decisions made in conflicts. At this stage, I’m investigating the formal use of
“analogies” for forecasting. That is, forecasting on the basis of the outcomes of similar conflicts that are known to
the forecaster.
What I would like you to do is to read the attached descriptions of some real (but disguised) conflict situations and
to predict the outcome of each conflict. If you can’t read the attachments, please let me know and I’ll send the
material in your preferred format if I’m able.
Each attached file contains a conflict description and a short questionnaire. Please follow these steps for each
conflict:
1/ Read the description and
2/ try to think of several analogous situations and
3/ think about how similar your analogies are to the conflict.
4/ Fill in the questionnaire (electronically if you can)
a) describe your analogies
b) rate your analogies
c) make your prediction (either pick an outcome or assign probabilities)
d) record the total time you spent on all tasks
e) return the questionnaire.
One of the objectives of this research is to assess the effect of collaboration on forecast accuracy. You have been
allocated to the collaboration treatment, so please do discuss these forecasting problems with colleagues. Do not,
however, discuss them with other people who have received this material, as I want independent responses from each
recipient.
Although I intend to acknowledge the help of all of the people who assist with this research, my report will not
associate any prediction with any individual.
Your prompt response is very important to the successful completion of my project. Please help me to prove the
sceptics wrong about the level of cooperation I get!
Best regards,
Appendix B
Telco Takeover Bid
1) (A) In the table below, please briefly describe
(i) your analogies,
(ii) their source (e.g. your own experience, media reports, history, literature, etc.), and
(iii) the main similarities and differences between your analogies and this situation.
(B) Rate analogies out of 10 (0 = no similarity… 5 = similar… 10 = high similarity).
(C) Enter the responses from question 2 (below) closest to the outcomes of your analogies.
(i) Description           (ii) Source        (iii) Similarities & differences                  (B) Rating  (C) Outcome
a. Bank takeover          Personal           Issue same, industry different                    8           C
b. Govt agency merger     Personal           Takeover same, government, but ordered takeover   4           D
c. Facility merger        Personal/family    Combine similar operations                        3           B
2) How was the standoff between Localville and Expander resolved? (check one, or assign %)
a. Expander’s takeover bid failed completely [__]
b. Expander purchased Localville’s mobile operation only [__]
c. Expander’s takeover succeeded at, or close to, their August 14 offer price of $43-per-share [X_]
d. Expander’s takeover succeeded at a substantial premium over the August 14 offer price [__]
3) If you have not given a prediction, please state your reasons:
4) Roughly, how long did you spend on this task?
{include the time you spent reading the description and instructions} [_1__] hours
5) How likely is it that taking more time would change your forecast?
{ 0 = almost no chance (1/100) … 10 = practically certain (99/100) } [_0_] 0-10
6) Do you recognise the actual conflict described in this file? Yes [__] No [X__]
If so, please identify it: [_________________________________________________]
7) How many people did you discuss this forecasting problem with? [_2___] people
8) Roughly, how many years experience do you have as a conflict management specialist? [20+] years
9) Please rate your experience (out of 10) with conflicts similar to this one [6____] 0-10
When you have completed this questionnaire, please return
either this document as an email attachment to…
or this questionnaire (with your initials at right) by fax to… Your initials: [_XYZ_]
References
Arkes, H. R. (2001). Overconfidence in judgmental forecasting, in J. S. Armstrong (ed.) Principles of Forecasting.
Boston: Kluwer Academic Publishers.
Armstrong, J. S. (2001). Combining forecasts, in J. S. Armstrong (ed.) Principles of Forecasting. Boston: Kluwer
Academic Publishers.
Armstrong, J. S. (1985). Long-Range Forecasting. New York: John Wiley. Full text at
Breuning, M. (2003). The role of analogies and abstract reasoning in decision-making: Evidence from the debate
over Truman’s proposal for development assistance. International Studies Quarterly, 47, 229-245.
Buehler, R., Griffin, D., & Ross, M. (1994). Exploring the ‘planning fallacy’: Why people underestimate their task
completion times. Journal of Personality and Social Psychology, 67, 366-381.
Canada Centre for Remote Sensing (1996). ‘Sea Empress’ oil spill monitoring. RADARSAT Image, Milford Haven,
Wales, United Kingdom, February 22, 1996. Retrieved July 1, 2003, from
Green, K. C. (2002). Forecasting decisions in conflict situations: A comparison of game theory, role-playing, and
unaided judgement. International Journal of Forecasting, 18, 321-344. Full text at forecastingprinciples.com.
Kahneman, D., & Lovallo, D. (1993). Timid choices and bold forecasts: A cognitive perspective on risk taking.
Management Science, 39, 17-31.
Khong, Y. F. (1992). Analogies at War: Korea, Munich, Dien Bien Phu, and the Vietnam Decisions of 1965.
Princeton, NJ: Princeton University Press.
Kokinov, B. (2003). Analogy in decision-making, social interaction, and emergent rationality. Behavioral and Brain
Sciences, 26, 167-168. Full text at http://www.nbu.bg/cogs/personal/kokinov/bbskokinov.pdf
Langdon, C. (2000a). Nurses vote today on strike. The Dominion, Edition 2, 20 September, 3.
Langdon, C. (2000b). Nurses support call for strike. The Dominion, Edition 2, 21 September, 3.
Langdon, C. (2000c). Nurses’ pay boosted, strike off. The Dominion, Edition 2, 6 December, 3.
MacGregor, D. G. (2001). Decomposition for judgmental forecasting and estimation, in J. S. Armstrong (ed.)
Principles of Forecasting. Boston: Kluwer Academic Publishers.
McIntyre, S. H., Achabal, D. D., & Miller, C. M. (1993). Applying case-based reasoning to forecasting retail sales.
Journal of Retailing, 69, 372-398.
Neustadt, R. E., & May, E. R. (1986). Thinking in Time: The Uses of History for Decision Makers. New York: Free
Press.
Radio New Zealand Limited (2000a, September 20). Brenda Wilson (Chief Executive, New Zealand Nurses
Organisation) interviewed by Geoff Robinson. Morning Report, Transcript: Newztel News Agency Ltd.
Radio New Zealand Limited (2000b, September 20). Rae Lamb (Health Correspondent, Radio New Zealand)
interviewed by Mary Wilson with excerpted material from interviews with Annette King (Minister of Health),
Susan Rolls (Emergency Nurse at Wellington Hospital), and Russell Taylor (Wellington Nurses Union
Organiser). Checkpoint, Transcript: Newztel News Agency Ltd.
Radio New Zealand Limited (2000c, September 22). Margot Mains (Chief Executive Officer, Capital Coast Health)
interviewed by Geoff Robinson. Morning Report, Transcript: Newztel News Agency Ltd.
Rowe, G. & Wright, G. (2001). Expert opinions in forecasting: The role of the Delphi technique, in J. S. Armstrong
(ed.) Principles of Forecasting. Boston: Kluwer Academic Publishers.
Schrodt, P. A. (2002). Forecasts and contingencies: from methodology to policy. Paper presented at the American
Political Science Association meetings, Boston, 29 August – 1 September. Retrieved May 7, 2004, from
Siegel, S. & Castellan, N. J. Jr. (1988). Non-parametric Statistics for the Behavioral Sciences, 2nd ed. Singapore:
Speke, C., & Reuter, S. (2003). German military historians predict Anglo-American defeat in Iraq. Online Journal,
March 29. Available from http://morris.wharton.upenn.edu/forecast/Conflicts/PDF%20files/
Stewart, T. R., & Leschine, T. M. (1986). Judgment and analysis in oil spill risk assessment. Risk Analysis, 6, 305-
Tetlock, P. E. (1999). Theory driven reasoning about possible pasts and probable futures: Are we prisoners of our
perceptions? American Journal of Political Science, 43, 335-366.
Acknowledgments
We thank the experts who participated in the research reported here. They included Roderic Alley, Barry Anderson,
Don Baker, Corrine Bendersky, Constant Beugre, Doug Bond, Michelle Brackin, José Ramón Cancelo, Nihan
Cini, David Cohen, Ike Damayanti, Serghei Dascalu, Nikolay Dentchev, Ulas Doga Eralp, Miguel Dorado,
Erkan Erdil, Jason Flello, Paul Gaskin, Andrew Gawith, Kristian Skrede Gleditsch, Joshua Goldstein, David
Grimmond, George Haines, Claudia Hale, Ragnar Ingibergsson, Patrick James, Michael Kanner, John Keltner,
Daniel Kennedy, Susan Kennedy, Oliver Koll, Rita Koryan, Talha Köse, Tony Lewis, Zsuzsanna Lonti, Dina
Beach Lynch, David Matz, Bill McLauchlan, Kevin Mole, Ben Mollov, Robert Myrtle, W. Bruce Newman,
Randall Newnham, Konstantinos Nikolopoulos, Glenn Palmer, Dean G. Pruitt, Perry Sadorsky, Greg Saltzman,
Amardeep Sandhu, Marlies Scott-Wenzel, Deborah Shmueli, Mike Smith, Marta Somogyvári, Harris Sondak,
Dana Tait, Scott Takacs, Dimitrios Thomakos, William Thompson, Ailsa Turrell, Bryan Wadsworth, James
Wall, Daniel Williams, Christine Wright, Becky Zaino. We also thank Lisa Bolton, Nikolay Dentchev, Don
Esslemont, Stanley Feder, Paul Goodwin, Clare Harries, Oliver Koll, and Tom Yokum for providing pre-
submission peer review. Editorial assistance was provided by Mary Haight and Marian Lee.
... [45]). To overcome or mitigate the influence of availability bias in the selection of analogies for forecasting purposes, Green & Armstrong [46] devise a five-step approach. This structured use of analogies, which requires experts to rate the similarity of the chosen analogies with the described target situation, was shown to increase accuracy from 32% to 46% when predicting decisions in eight conflict situations [46]. ...
... To overcome or mitigate the influence of availability bias in the selection of analogies for forecasting purposes, Green & Armstrong [46] devise a five-step approach. This structured use of analogies, which requires experts to rate the similarity of the chosen analogies with the described target situation, was shown to increase accuracy from 32% to 46% when predicting decisions in eight conflict situations [46]. ...
... Group forecasting relies on qualitative or contextual data provided by multiple human forecasters. There exists a multitude of qualitative forecasting approaches, which include but are not limited to Delphi [63,64], market research [65], panel consensus, visionary forecast and historical analogy [46,66,67], group discussion [68], decision conferencing [69,70], nominal group technique [71] and focus group. In the following subsections, we discuss the most well-known techniques: focus group, nominal group technique and Delphi method [72]. ...
Full-text available
This paper's top-level goal is to provide an overview of research conducted in the many academic domains concerned with forecasting. By providing a summary encompassing these domains, this survey connects them, establishing a common ground for future discussions. To this end, we survey literature on human judgement and quantitative forecasting as well as hybrid methods that involve both humans and algorithmic approaches. The survey starts with key search terms that identified more than 280 publications in the fields of computer science, operations research, risk analysis, decision science, psychology and forecasting. Results show an almost 10-fold increase in the application-focused forecasting literature between the 1990s and the current decade, with a clear rise of quantitative, data-driven forecasting models. Comparative studies of quantitative methods and human judgement show that (1) neither method is universally superior, and (2) the better method varies as a function of factors such as availability, quality, extent and format of data, suggesting that (3) the two approaches can complement each other to yield more accurate and resilient models. We also identify four research thrusts in the human/machine-forecasting literature: (i) the choice of the appropriate quantitative model, (ii) the nature of the interaction between quantitative models and human judgement, (iii) the training and incentivization of human forecasters, and (iv) the combination of multiple forecasts (both algorithmic and human) into one. This review surveys current research in all four areas and argues that future research in the field of human/machine forecasting needs to consider all of them when investigating predictive performance. We also address some of the ethical dilemmas that might arise due to the combination of quantitative models with human judgement.
... Experience of similar forecasting cases can be also seen as contextual information. Using such information, that is, analogies, in forecasting has been studied e.g. by Hoch and Schkade (1996), Green& Armstrong (2007), and Lee et al. (2007). mention information about special events, such as new salespromotion campaigns, international conflicts or strikes as examples of contextual information. ...
... Using such information, that is, analogies, in forecasting has been studies e.g. by, Hoch and Schkade (1996), Green& Armstrong (2007) and Lee et al. (2007). mention information about " special events, such as new salespromotion campaigns, international conflicts or strikes" as examples about contextual information. ...
Demand forecasting is one of the fundamental managerial tasks. Most companies do not know their future demands, so they have to make plans based on demand forecasts. The literature offers many methods and approaches for producing forecasts. When selecting the forecasting approach, companies need to estimate the benefits provided by particular methods, as well as the resources that applying the methods call for. Former literature points out that even though many forecasting methods are available, selecting a suitable approach and implementing and managing it is a complex cross-functional matter. However, research that focuses on the managerial side of forecasting is relatively rare. This thesis explores the managerial problems that are involved when demand forecasting methods are applied in a context where a company produces products for other manufacturing companies. Industrial companies have some characteristics that differ from consumer companies, e.g. typically a lower number of customers and closer relationships with customers than in consumer companies. The research questions of this thesis are: 1. What kind of challenges are there in organizing an adequate forecasting process in the industrial context? 2. What kind of tools of analysis can be utilized to support the improvement of the forecasting process? The main methodological approach in this study is design science, where the main objective is to develop tentative solutions to real-life problems. The research data has been collected from two organizations. Managerial problems in organizing demand forecasting can be found in four interlinked areas: 1 …
... Manifold methodologies have been explored to improve judgemental forecasting accuracy to varying success (Lawrence et al. 2006). These methodologies include, but are not limited to, prediction intervals (Lawrence and Makridakis 1989), decomposition (Mac-Gregor and Armstrong 1994), structured analogies (Green and Armstrong 2007;Nikolopoulos et al. 2015) and unaided judgement (Litsiou et al. 2019). Various group forecasting techniques have also been explored (Linstone and Turoff 1975;Delbecq, Van den Ven, and Gustafson 1975;Landeta, Barrutia, and Lertxundi 2011), although the risks of groupthink (McNees 1987) and the importance of maintaining the independence of each group member's individual forecast are well established (Armstrong 2001). ...
Conference Paper
We introduce Forecasting Argumentation Frameworks (FAFs), a novel argumentation-based methodology for forecasting informed by recent judgmental forecasting research. FAFs comprise update frameworks which empower (human or artificial) agents to argue over time about the probability of outcomes, e.g. the winner of an election or a fluctuation in inflation rates, whilst flagging perceived irrationality in the agents' behaviour with a view to improving their forecasting accuracy. FAFs include five argument types, amounting to standard pro/con arguments, as in bipolar argumentation, as well as novel proposal arguments and increase/decrease amendment arguments. We adapt an existing gradual semantics for bipolar argumentation to determine the aggregated dialectical strength of proposal arguments and define irrational behaviour. We then give a simple aggregation function which produces a final group forecast from rational agents' individual forecasts. We identify and study properties of FAFs, and conduct an empirical evaluation which signals FAFs' potential to increase the forecasting accuracy of participants.
... Manifold methodologies have been explored to improve judgemental forecasting accuracy to varying success (Lawrence et al. 2006). These methodologies include, but are not limited to, prediction intervals (Lawrence and Makridakis 1989), decomposition (Mac-Gregor and Armstrong 1994), structured analogies (Green and Armstrong 2007;Nikolopoulos et al. 2015) and unaided judgement (Litsiou et al. 2019). Various group forecasting techniques have also been explored (Linstone and Turoff 1975;Delbecq, Van den Ven, and Gustafson 1975;Landeta, Barrutia, and Lertxundi 2011), although the risks of groupthink (McNees 1987) and the importance of maintaining the independence of each group member's individual forecast are well established (Armstrong 2001). ...
We introduce Forecasting Argumentation Frameworks (FAFs), a novel argumentation-based methodology for forecasting informed by recent judgmental forecasting research. FAFs comprise update frameworks which empower (human or artificial) agents to argue over time about the probability of outcomes, e.g. the winner of a political election or a fluctuation in inflation rates, whilst flagging perceived irrationality in the agents' behaviour with a view to improving their forecasting accuracy. FAFs include five argument types, amounting to standard pro/con arguments, as in bipolar argumentation, as well as novel proposal arguments and increase/decrease amendment arguments. We adapt an existing gradual semantics for bipolar argumentation to determine the aggregated dialectical strength of proposal arguments and define irrational behaviour. We then give a simple aggregation function which produces a final group forecast from rational agents' individual forecasts. We identify and study properties of FAFs and conduct an empirical evaluation which signals FAFs' potential to increase the forecasting accuracy of participants.
... It is important to define clinical endpoints and correlates of protection in preclinical models due to the increasing challenge of performing phase-III trials in humans in endemic areas as a result of herd immunity. The process of translating preclinical data to predict vaccine effectiveness is similar to the process known as "forecasting by analogy" (115). As outlined above, viral load, IL-6 and ferritin are the hallmarks of the VCF model, as determined from human studies. ...
Full-text available
Following the disruptive epidemics throughout the Indian Ocean, Southeast Asia and the Americas, efforts have been deployed to develop an effective vaccine against chikungunya virus (CHIKV). The continuous threat of CHIKV (re-)emergence and the huge public health and economic impact of the epidemics, makes the development of a safe and effective vaccine a priority. Several platforms have been used to develop candidate vaccines, but there is no consensus about how to translate results from preclinical models to predict efficacy in humans. This paper outlines a concept of what constitutes an effective vaccine against CHIKV, which may be applied to other viral vaccines as well. Defining endpoints for an effective vaccine is dependent on a proper understanding of the pathogenesis and immune response triggered during infection. The preclinical model adopted to evaluate experimental vaccines is imperative for the translation of preclinical efficacy data to humans. Several CHIKV animal models exist; however, not all provide suitable endpoints for measuring vaccine efficacy. This review summarizes the current knowledge related to CHIKV pathogenesis and the correlates of protection. We then define what would constitute an effective CHIKV vaccine in humans using four key endpoints, namely: (i) prevention of chronic disease, (ii) prevention of acute disease, (iii) prevention of transmission to mosquitoes, and (iv) complete prevention of infection. Lastly, we address some of the gaps that prevent translation of immunogenicity and efficacy findings from preclinical models to humans, and we propose to use the combination of virus-cytokine-ferritin levels as a read-out for measuring vaccine-induced protection.
... Literature research shows that there are several common types of papers related to the forecasting product demand. One of them uses marketing research methods like [3]. Marketing research needs a lot of human resources and it is highly unlikely they can be automized. ...
Full-text available
The problem of predicting demand for a new product based on its characteristics and description is critical for various industrial enterprises, wholesale and retail trade and, especially, for modern highly competitive sector of air transportation, since solving this problem will optimize production, management and logistics in order to maximize profits and minimize costs. Classic demand forecasting methods assume the availability of sales data for a certain historical period, which is obviously not the case when concerning a new product. Most research papers are limited either to a specific category of goods or use sophisticated marketing methods. This paper proposes the use of machine learning methods. We used data about new product demand from the Ozon online store. The input data of the algorithm are characteristics such as the price, name, category and text description of the product. To solve the regression problem, various implementations of the gradient boosting algorithm were used, such as XGBoost, Light GBM, Cat Boost. The forecast accuracy is now about 4.00. The proposed system can be used both independently and as part of another more complex system.
... Adoption researchers and modelers may also use this approach to better understand the patterns of adoption of potential adopters for different types of technologies, and such knowledge could be used to identify analogues that could inform the uptake of a new technology by that particular target population (see Green and Armstrong, 2007;Goodwin et al., 2014), or to complement existing methods to reflect the unique characteristics and context of the target population for scaling efforts (Sartas et al., 2020;Wigboldus et al., 2016). To this effect, adoption pathways analysis can be applied to a large number of adoption examples with the aim of identifying patterns for different types of farmers or different types of technologies. ...
CONTEXT Scholars have argued that empirical studies of adoption in agriculture should consider adoption as a dynamic process rather than a binary choice, but many empirical studies continue to be based on cross-sectional surveys in which adoption is treated binarily. In general, surveys put more emphasis on investigating adoption drivers (i.e. independent variables) at the expense of defining complete adoption measures (i.e. dependent variables). OBJECTIVE In this study, we present, demonstrate and illustrate a method - adoption pathways analysis – as an approach to better represent and analyse the dynamics and diversity of adoption. METHODS The approach consists of conducting a survey to define individual decisions at different stages of adoption and producing proportional flow diagrams representing the collective results of adopters moving through these various stages. The method is illustrated for four well-established practices in New Zealand pastoral farming using responses from 138 farmer surveys. RESULTS AND CONCLUSIONS Findings show how the current use status for each practice was the result of individual adoption journeys, converging in distinct pathways. For example, the current population of farmers can be broken down into those who have maintained or increased use of a practice over the medium or long term, those who have decreased their use of the practice since first adopting it, those who are still trialling the practice, those who adopted and then dis-adopted the practice, those who are aware of the practice but have never adopted it, and those who are not aware of it. The pathway to adoption may or may not have included trialling of the practice. Anticipating future pathways, we identified that farmers may intend to increase, maintain or decrease their adoption, and that current non-adopters may or may not be interested in future adoption. For different practices, different proportions of the farm population followed different adoption pathways. 
Observing these differences provides insights into adoption, and adoption barriers, for each practice. SIGNIFICANCE Our approach provides a method for adoption research with a highly informative way to unpack the diversity of dynamic adoption pathways for agricultural practices, addressing the current imbalance in survey design that puts more emphasis on potential drivers of adoption at the expense of adoption measures. We discuss the potential uses of adoption pathways analysis to agricultural researchers and extension agents, and its potential to contribute to better explaining past adoption or predicting future adoption.
Full-text available
The development of a Time series Forecasting System is a major concern for Artificial Intelligence researchers. Commonly, existing systems only assess temporal features and analyze the behavior of the data over time, thus, resulting in uncertain forecasting accuracy. Although many forecasting systems were proposed in the literature; they have not yet answered the attending question. Hence, to overcome this problematic, we propose an innovative method called Taylor-based Optimized Recursive Extended Exponential Smoothed Neural Networks Forecasting method, abbreviated as TOREESNN. Briefly explained, the proposed technique introduces three ideas to solve this issue: First, building an innovative framework for forecasting univariate time series based on Exponential Smoothed theory. Second, designing an Elman Classifier model for uncertainty prediction in order to correct the forecasted values. And finally hybrading the two recurrent systems in one framework to obtain the final results. Experimental results demonstrate that the proposed method has a high accuracy both in training and testing data in terms of Mean Squared Error (MSE) and outperforms the state-of-the-art Recurrent Neural Networks models on Mackey-Glass, Nonlinear Auto-Regressive Moving Average time series (NARMA), Lorenz, and Henon map datasets.
Full-text available
This PhD thesis, titled "A Quantitative Approach to Passenger Car Demand in Turkey," consists of three parts: the concept of demand; the theoretical approaches that explain the demand function (demand theories) and the identification of the personal automobile demand function; and demand forecasting, with a forecast of Turkish passenger car demand. In the first chapter, demand and related concepts are explained, and the theoretical approaches that explain the demand function (demand theories) and the identification of the personal automobile demand function are presented. The passenger car demand function formed by examining demand theories is:

D = f(P1, P2, M, S1, S2, i)

where P1 is the passenger car price, P2 the fuel price, M total GDP, S1 the country's savings volume, S2 the consumer loans volume, and i the consumer loans interest rate. This part, which constitutes the theoretical structure of the thesis, is also expected to contribute to the Turkish literature, especially with regard to modern demand theories. The second chapter presents demand forecasting strategies and methodology, the relationship between demand theories and demand forecasting techniques, and the forecasting techniques themselves; because it forms the theoretical base of the application, quantitative demand forecasting techniques receive the most weight. The third and final chapter, which provides the originality of the thesis, argues that passenger car demand in Turkey can only be forecast through econometric models that adhere to the demand forecasting methodology presented in the second chapter. The econometric model was estimated with both a traditional method, multiple regression, and its modern counterpart, artificial neural networks.
Because of nonlinear patterns in demand and high correlations between the explanatory variables, multiple regression's pattern-recognition and generalization abilities were not sufficient for this econometric model. The artificial neural network technique eliminated some of these drawbacks, enhancing forecast performance and accuracy. This is why modern methods such as artificial neural networks, which are assessed to be capable of recognizing and generalizing nonlinear patterns, should be preferred over traditional methods when forecasting passenger car demand in Turkey. Given the thesis's assumptions, it is reasonable to conclude that personal automobile demand will rise, though with rapidly decreasing acceleration, over the next five years. Taking current developments in the economic conjuncture into account, this outcome is assessed as a not-so-remote possibility.
Forecasting has always been at the forefront of decision making and planning. The uncertainty that surrounds the future is both exciting and challenging, with individuals and organisations seeking to minimise risks and maximise utilities. The large number of forecasting applications calls for a diverse set of forecasting methods to tackle real-life challenges. This article provides a non-systematic review of the theory and the practice of forecasting. We provide an overview of a wide range of theoretical, state-of-the-art models, methods, principles, and approaches to prepare, produce, organise, and evaluate forecasts. We then demonstrate how such theoretical concepts are applied in a variety of real-life contexts. We do not claim that this review is an exhaustive list of methods and applications. However, we hope that our encyclopedic presentation will offer a point of reference for the rich work that has been undertaken over the last decades, with some key insights for the future of forecasting theory and practice. Given its encyclopedic nature, the intended mode of reading is non-linear. We offer cross-references to allow readers to navigate through the various topics. We complement the theoretical concepts and applications covered with large lists of free or open-source software implementations and publicly available databases.
Expert opinion is often necessary in forecasting tasks because of a lack of appropriate or available information for using statistical procedures. But how does one get the best forecast from experts? One solution is to use a structured group technique, such as Delphi, for eliciting and combining expert judgments. In using the Delphi technique, one controls the exchange of information between anonymous panelists over a number of rounds (iterations), taking the average of the estimates on the final round as the group judgment. A number of principles are developed here to indicate how to conduct structured groups to obtain good expert judgments. These principles, applied to the conduct of Delphi groups, indicate how many and what type of experts to use (five to 20 experts with disparate domain knowledge); how many rounds to use (generally two or three); what type of feedback to employ (average estimates plus justifications from each expert); how to summarize the final forecast (weight all experts’ estimates equally); how to word questions (in a balanced way with succinct definitions free of emotive terms and irrelevant information); and what response modes to use (frequencies rather than probabilities or odds, with coherence checks when feasible). Delphi groups are substantially more accurate than individual experts and traditional groups and somewhat more accurate than statistical groups (which are made up of noninteracting individuals whose judgments are aggregated). Studies support the advantage of Delphi groups over traditional groups by five to one with one tie, and their advantage over statistical groups by 12 to two with two ties. We anticipate that by following these principles, forecasters may be able to use structured groups to harness effectively expert opinion.
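The final aggregation step these principles prescribe, taking an equal-weight average of the estimates on the final round, can be sketched as follows. The panel size and the estimates themselves are hypothetical; the intermediate rounds exist only so that feedback (average estimates plus justifications) can be circulated between them:

```python
from statistics import mean

def delphi_forecast(rounds):
    """Equal-weight average of the final-round estimates.

    `rounds` is a list of rounds, each a list of one numeric
    estimate per anonymous panelist. Earlier rounds only drive
    the feedback loop; the group forecast is the unweighted mean
    of the last round, so every expert counts equally.
    """
    if not rounds or not rounds[-1]:
        raise ValueError("need at least one round with estimates")
    return mean(rounds[-1])

# Hypothetical two-round panel of five experts:
round1 = [120, 95, 150, 110, 200]   # initial, disparate estimates
round2 = [125, 110, 140, 115, 160]  # revised after seeing the feedback
print(delphi_forecast([round1, round2]))
```

Note how the second round's estimates converge toward one another without being forced to agree; the equal-weight mean then serves as the statistical group judgment.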
Tested three hypotheses concerning people's predictions of task completion times: (1) people underestimate their own but not others' completion times, (2) people focus on plan-based scenarios rather than on relevant past experiences while generating their predictions, and (3) people's attributions diminish the relevance of past experiences. Five studies were conducted with a total of 465 undergraduates. The results support each hypothesis. Subjects' predictions of their completion times were too optimistic for a variety of academic and nonacademic tasks. Think-aloud procedures revealed that subjects focused primarily on future scenarios when predicting their completion times. The optimistic bias was eliminated for subjects instructed to connect relevant past experiences with their predictions. Subjects attributed their past prediction failures to external, transient, and specific factors. Observer subjects overestimated others' completion times and made greater use of relevant past experiences.
Colman's reformulation of rational theory is challenged in two ways. Analogy-making is suggested as a possible candidate for an underlying and unifying cognitive mechanism of decision-making, one which can explain some of the paradoxes of rationality. A broader framework is proposed in which rationality is considered as an emerging property of analogy-based behavior.
Analogical, or case-based reasoning has received quite a bit of attention in the literature on foreign policy decision-making. There has been little attention paid to whether analogical reasoning does indeed predominate or to what degree abstract reasoning plays a role in the decision-making process. If decision-makers do not primarily reason by analogy (an empirical question), then the focus on such reasoning runs the risk of ignoring important aspects of problem formulation and the scope of possible solutions considered. Hence, this article investigates the degree to which decision-makers employ analogical and abstract reasoning. The empirical data are from the Senate hearing regarding the first American program for development aid. This case permits an empirical assessment of the consensus in the foreign aid literature that the Marshall Plan was the central analogy for this aid. In addition, it has been argued that in public discourse, decision-makers should be expected to use analogies as justifications for their preferences. The study finds a preference for explanation-based reasoning and discusses some of the implications of these findings.
Cognitive theories predict that even experts cope with the complexities and ambiguities of world politics by resorting to theory-driven heuristics that allow them: (a) to make confident counterfactual inferences about what would have happened had history gone down a different path (plausible pasts); (b) to generate predictions about what might yet happen (probable futures); (c) to defend both counterfactual beliefs and conditional forecasts from potentially disconfirming data. An interrelated series of studies test these predictions by assessing correlations between ideological world view and beliefs about counterfactual histories (Studies 1 and 2), experimentally manipulating the results of hypothetical archival discoveries bearing on those counterfactual beliefs (Studies 3-5), and by exploring experts' reactions to the confirmation or disconfirmation of conditional forecasts (Studies 6-12). The results revealed that experts neutralize dissonant data and preserve confidence in their prior assessments by resorting to a complex battery of belief-system defenses that, epistemologically defensible or not, make learning from history a slow process and defections from theoretical camps a rarity.
This article describes the development and testing of a forecasting system for retailers who plan periodic promotions. When the number of variables describing the promotion is large relative to the historical database of past promotions, traditional forecasting approaches cannot be applied. In such cases, retailers must rely on the expertise of their buyers to subjectively estimate promotional unit sales. This research develops a Case-Based Reasoning system that allows all buyers to forecast promotional sales as accurately as the organization's expert buyer and, by making the subjective process explicit, also provides an avenue to improve forecast performance over time. The system (1) selects the historical analogs that are most similar to the planned promotion, (2) adjusts the sales of each analog to account for any differences between the analog and the planned promotion, and (3) combines the forecasts derived from the multiple analogs to arrive at a single sales projection. The performance of the system was tested and found to compare favorably to the performance of an expert buyer in a large national retail organization.
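The system's three steps, retrieving the most similar historical analogs, adjusting each analog's sales for differences from the planned promotion, and combining the adjusted forecasts, can be sketched as follows. The similarity measure, feature names, and per-feature adjustment effects below are hypothetical stand-ins for illustration, not the system's actual rules:

```python
import math

def cbr_promotion_forecast(target, cases, k=3):
    """Case-Based Reasoning forecast in three steps:
    (1) retrieve the k past promotions most similar to the plan,
    (2) adjust each analog's sales for feature differences using
        assumed linear per-unit effects,
    (3) average the adjusted forecasts into one projection."""

    def similarity(a, b):
        # inverse Euclidean distance over the plan's numeric features
        d = math.sqrt(sum((a[f] - b[f]) ** 2 for f in a))
        return 1.0 / (1.0 + d)

    # hypothetical linear effect on unit sales per unit of difference
    effects = {"discount_pct": 40.0, "ad_spend": 0.5}

    analogs = sorted(cases, key=lambda c: similarity(target, c), reverse=True)[:k]
    adjusted = [
        c["sales"] + sum(effects[f] * (target[f] - c[f]) for f in effects)
        for c in analogs
    ]
    return sum(adjusted) / len(adjusted)

history = [
    {"discount_pct": 10, "ad_spend": 100, "sales": 900},
    {"discount_pct": 20, "ad_spend": 200, "sales": 1500},
    {"discount_pct": 15, "ad_spend": 150, "sales": 1200},
]
plan = {"discount_pct": 18, "ad_spend": 180}
print(cbr_promotion_forecast(plan, history, k=2))
```

Making the adjustment effects explicit, as in the `effects` table, is what lets a buyer inspect and improve the subjective process over time, which is the avenue for improvement the article describes.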