The following methods are used by practitioners or are recommended by experts:
- Expert judgment
  - unaided
  - structured
  - game theory
- Structured analogies
- Simulated interaction
- Field experiments
- Prediction markets
Interestingly, evidence on the relative accuracy of the methods is contrary to experts’ expectations.
Combining forecasts from substantially different methods has been shown to improve forecasting accuracy. It is also advisable to combine only forecasts from methods that have been shown to be accurate, and to weight the individual forecasts according to evidence on their relative accuracy.
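As a minimal sketch of how such a weighted combination might be implemented, the following Python fragment averages probability forecasts from three methods using accuracy-based weights. The method names, forecast values, and weights are hypothetical illustrations, not figures from any study.

```python
# Hypothetical probability forecasts (e.g., of "agreement reached") from
# three substantially different methods.
forecasts = {
    "structured_analogies": 0.60,
    "simulated_interaction": 0.70,
    "delphi_panel": 0.55,
}

# Illustrative weights, intended to reflect evidence on relative accuracy;
# equal weights are a sensible default when no such evidence is available.
weights = {
    "structured_analogies": 0.4,
    "simulated_interaction": 0.4,
    "delphi_panel": 0.2,
}

combined = sum(forecasts[m] * weights[m] for m in forecasts) / sum(weights.values())
print(f"Combined forecast: {combined:.2f}")  # Combined forecast: 0.63
```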
Unaided Judgment
The most commonly used method for forecasting decisions in conflicts is experts using their unaided judgment. This is not surprising, as unaided-judgment forecasts can often be derived quickly and cheaply.
Structured Judgment
One method for forecasting using structured expert judgment is the Delphi technique. Delphi panels challenge forecasts and forecasters’ reasoning anonymously. The method has been shown to improve accuracy relative to simple aggregates of individual experts’ forecasts and to the forecasts of unstructured groups. There is, however, no direct evidence on the relative accuracy of Delphi for forecasting decisions in conflicts.
How to do it
A panel of experts provides forecasts and supporting reasons anonymously. The panel is then given a summary of the forecasts and reasons, and the process is repeated until responses are reasonably stable – typically after one or two iterations. An unweighted average of the individual forecasts is adopted as the panel’s forecast. When panellists choose options from a list or assign probabilities to options, a set of aggregate probabilities for the options can be calculated from their responses.
Between five and 20 heterogeneous experts with relevant domain knowledge should be used.
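As a minimal sketch of the aggregation described above, the following Python fragment averages hypothetical final-round probabilities from three panellists across a list of decision options. The panellist labels, options, and probabilities are invented for illustration; a real panel should have at least five members.

```python
# Hypothetical final-round Delphi responses: each panellist assigns
# probabilities to the decision options on the list.
panel_responses = {
    "panellist_1": {"escalate": 0.2, "negotiate": 0.5, "withdraw": 0.3},
    "panellist_2": {"escalate": 0.1, "negotiate": 0.7, "withdraw": 0.2},
    "panellist_3": {"escalate": 0.3, "negotiate": 0.4, "withdraw": 0.3},
}

options = ["escalate", "negotiate", "withdraw"]

# Unweighted averages across panellists give the panel's aggregate
# probability for each option.
aggregate = {
    option: sum(resp[option] for resp in panel_responses.values()) / len(panel_responses)
    for option in options
}
for option, probability in aggregate.items():
    print(f"{option}: {probability:.2f}")
# escalate: 0.20, negotiate: 0.53, withdraw: 0.27
```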
Game Theory
Game theory is a method that has been advocated for developing strategy and for forecasting decisions in conflicts; see, for example, Scott Armstrong’s review of Co-opetition. The method seems, however, to be infrequently used for forecasting decisions in real conflicts. Researchers have advocated various approaches and hyphenated variants of game theory. For forecasting research, a useful definition of game theory is: that which game theorists do when they are asked to forecast decisions in conflicts.
Structured Analogies
Referring to past conflicts that are similar to a current conflict in order to predict the outcome of the current conflict appears to be common practice. The practice has been much documented and discussed in relation to foreign-policy forecasting. Structured analogies is a method that uses information from analogies to derive forecasts in a formal way. Structured-analogies forecasts are likely to be accurate if the experts consulted are each able to think of two or more analogous conflicts from their own experience.
How to do it
Forecasting with structured analogies involves four steps: (1) describe the target situation, (2) identify and describe analogies, (3) rate similarity, and (4) derive forecasts. We describe the steps here.
(1) Describe the target situation
Prepare a description that is accurate and comprehensive, yet brief. To do so, the administrator might seek advice from unbiased experts or from experts with opposing biases. Where feasible, follow the description with a list of possible outcomes for the target situation. Doing so makes coding easier and helps ensure that the forecasts are relevant to the user.
(2) Identify and describe analogies
Recruit experts who are likely to know about situations that are similar to the target situation. The number of experts needed depends upon how much they know about analogous situations, the variability of responses across experts, and the importance of having an accurate forecast. Drawing upon research on the number of forecasts needed when combining, we suggest using at least five experts.
(3) Rate similarity
Once the situations have been described, ask the experts to list similarities and differences between their analogies and the target situation. Then they should rate the similarity of their analogies to the target on the basis of this analysis.
(4) Derive forecasts
To promote logical consistency and replicability, use pre-determined rules to derive a forecast from the information the experts provide about their analogies.
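One possible pre-determined rule, sketched below in Python under assumed data, is to take the outcome of each expert’s most similar analogy as that expert’s implied forecast and then adopt the most frequent implied forecast. The experts, analogies, similarity ratings, and outcomes shown are hypothetical.

```python
from collections import Counter

# Hypothetical structured-analogies data: for each expert, a list of
# (analogy description, similarity rating out of 10, outcome of the analogy).
expert_analogies = {
    "expert_1": [("border dispute, 1978", 7, "negotiated settlement"),
                 ("trade embargo, 1985", 4, "prolonged standoff")],
    "expert_2": [("fishing-rights conflict, 1994", 8, "negotiated settlement")],
    "expert_3": [("territorial claim, 1967", 6, "armed clash"),
                 ("sanctions episode, 2001", 5, "negotiated settlement")],
}

# Rule: each expert's implied forecast is the outcome of his or her
# most similar analogy.
implied_forecasts = []
for analogies in expert_analogies.values():
    _, _, outcome = max(analogies, key=lambda analogy: analogy[1])
    implied_forecasts.append(outcome)

# The forecast is the most frequent implied forecast across experts.
forecast, votes = Counter(implied_forecasts).most_common(1)[0]
print(f"Forecast: {forecast} ({votes} of {len(implied_forecasts)} experts)")
# Forecast: negotiated settlement (2 of 3 experts)
```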
Evidence on accuracy
Kesten Green presented preliminary evidence on the relative accuracy of structured-analogies forecasts at ISF 2002 in Dublin and at ISF 2003 in Merida. Kesten Green and Scott Armstrong are preparing a paper, "Structured Analogies for Forecasting" (full text available).
Sources of analogies
- International Crisis Behavior (ICB) and the ICB dataset: version 7 (May 2007) and the earlier version 6.0 (January 30, 2006)
- Keesing’s Record of World Events (subscription only)
- Centre for the Study of Civil War: Uppsala/PRIO Armed Conflicts Dataset (Version 4-2006)
- The Nobel Foundation’s maps show the locations and brief descriptions of conflicts that have occurred since the beginning of the 20th Century.
- The Norwegian University of Science and Technology Department of Geography’s ViewConflicts pages provide maps and details of conflicts that have occurred since 1946.
- The World Bank's research program, The Economics of Civil War, Crime and Violence
Simulated Interaction
Using role players to simulate the interaction between parties to a conflict is referred to as simulated interaction. The simulation outcomes are used as forecasts of the actual outcome. Alternative strategies can be compared by running independent simulations with different role players.
How to do it
Forecasting with simulated interaction involves four steps: (1) describe the roles of the people in the conflict, (2) describe the target situation, (3) simulate the situation, and (4) derive forecasts. We describe the steps here.
(1) Describe the roles of the people in the conflict
Describe the roles of all the people involved in the target conflict, or describe the roles of two people to represent each party to the conflict. If the cost is not great, allocate roles to people who are somewhat similar to the actual protagonists, then ask them to read and adopt their allocated roles for the duration of the simulation.
(2) Describe the target situation
Descriptions of about one page in length are sufficiently long to include information about the parties to the conflict and their goals, relevant history, current positions and expectations, the nature of the parties’ interaction, and the issue to be forecast. Longer descriptions may overburden role players. A comprehensive and accurate description will help role players achieve realistic simulations, so a good understanding of the situation and considerable care and effort are advisable at this stage. To this end, obtain information from independent experts, have the description reviewed, and test the material. Conducting simulated interactions using independently prepared descriptions may also be worthwhile.
Where knowledge of possible decisions is good, provide a list for role players to choose from. This provides clarity for role players who are unfamiliar with the type of situation they are simulating, ensures forecasts are useful, and makes coding easier.
(3) Simulate the situation
Ask the role players to improvise and interact with others in ways that are consistent with the target situation. In practice, this appears readily achievable with careful preparation. Typically, interactions might involve face-to-face meetings and preparation for these. Exchanging information using notes or networked computers, or conducting telephone conversations might be more realistic for some situations. Provide realistic settings if this is affordable.
(4) Derive forecasts
For each simulation, the forecast is the final decision made by the role players. Conduct as many as ten simulations and combine the predictions. For example, if seven of ten simulations led to war, one would predict a 70% chance of war.
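The arithmetic in the example above can be made explicit with a short Python sketch; the outcome labels and the ten simulated outcomes are hypothetical.

```python
# Hypothetical final decisions from ten independent simulations.
simulation_outcomes = ["war", "war", "settlement", "war", "war",
                       "war", "settlement", "war", "settlement", "war"]

# The combined forecast is the proportion of simulations with each outcome.
p_war = simulation_outcomes.count("war") / len(simulation_outcomes)
print(f"Forecast probability of war: {p_war:.0%}")  # Forecast probability of war: 70%
```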
Descriptions of Methods
Miller, K. & Brown, D. (2004). "Risk assessment war-game (RAW)." University of Virginia Department of Systems and Information Engineering Working Paper SIE-040009 (full text available).
Field Experiments - Small-scale or short-term trials
Field trials are likely to be more realistic, and hence provide more accurate forecasts, than simulated interactions. Although they are recommended by forecasters, they are not often used. There are good reasons for this: experiments in the field will often be expensive; rivals may be alerted; comparing alternative strategies or policies may not be possible; and experimentation may be impossible if a strategy depends on an unusual coincidence of circumstances. The task of forecasting the German response to the Allied landings in Normandy illustrates these points.
Prediction Markets
Prediction markets may be useful tools for conflict forecasting when valid knowledge is dispersed among many people who are willing to reveal their knowledge through wagers. For background, see the paper on prediction markets by Wolfers and Zitzewitz. Internet-based markets offer predictions on possible events such as a US attack on Iran, a terrorist attack in the eastern US (Foresight Exchange), the exile of Yasser Arafat, and the creation of a Palestinian state (Tradesports.com).
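As a minimal sketch, assuming binary contracts that pay one unit if the event occurs, the last traded price is commonly read as the market’s probability forecast. The events and prices below are hypothetical.

```python
# Hypothetical last traded prices for binary contracts that pay 1 if the
# event occurs and 0 otherwise.
contract_prices = {
    "attack_occurs_by_year_end": 0.35,
    "agreement_signed_by_year_end": 0.08,
}

# For such contracts, the price is commonly interpreted as the market's
# probability forecast for the event.
for event, price in contract_prices.items():
    print(f"{event}: forecast probability of about {price:.0%}")
```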