This paper provides a short, well-written assessment of the accuracy of political election forecasts. It draws upon evidence from 155 elections from nine countries, during the years 1949 to 1985. Two important issues are examined: (1) To what extent have the improvements in survey techniques produced improvements in the accuracy of the polls, and (2) Do the statistical estimates of uncertainty provide a good assessment of the degree of confidence that one can place in a political poll?
(1) Impact of improved survey techniques. Numerous technical advances have been made in survey techniques over the past 35 years (e.g., random-digit dialing, better techniques for obtaining responses, methods to adjust for non-response bias, and the ability to shorten the time required to obtain results so that surveys can be conducted just prior to the elections). Nevertheless, according to Buchanan, the accuracy of the polls has not improved. This is shown in the table, where the error is the average absolute deviation from the actual percentage of the vote received by the winning party.
Buchanan’s data are surprising and disappointing. Before accepting these results, however, consider the threats to validity. Buchanan’s sample was made up of different countries in different years, and the polling situation differs among countries. For example, the proportion of eligible citizens who vote varies substantially among the countries in his sample; voting is mandatory in Australia, but only about half of those eligible to vote in the U.S. do so. The polling procedures also vary by country, particularly with respect to the sponsorship of the surveys. Independent polling agencies typically produce more accurate results than those controlled by politicians (Shamir 1986). The extent of vote fraud also varies by country, and this could affect Buchanan’s result.
Buchanan’s data conflict with those in Perry (1979), who found that the predictive accuracy of political polls improved over time. Perry’s conclusion was based on an analysis of a much smaller sample. However, Perry’s sample was a more homogeneous set of forecasts, and thus was free of many of the threats to validity in Buchanan’s study. Perry examined U.S. elections only, with a sample size of five elections in each decade. His results showed that the greatest improvement in accuracy occurred from the 1940s to the 1950s, a period during which sampling procedures were being improved. A more recent analysis of Gallup Poll data by Perry (personal communication, July 22, 1986) indicates that the improvement in accuracy has continued, though at a modest pace (see the last column of the table). Presumably, this gain was brought about by reductions in the non-sampling errors (see the next-to-last column of the table). According to Perry, the expected mean deviation was computed assuming a simple random sample in each case (with a cluster factor of 1.25). The ‘pq’ in the formula for the standard error was based on the major party vote, and the sample size ‘n’ was the likely number of voters in the sample. The typical mean error in Perry’s sample was 1.5, which is significantly less than the 2.7 in Buchanan’s sample. That is, the polling accuracy seems to be much better in the U.S. than in other countries.
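Perry's expected mean deviation can be sketched as follows. This is an illustrative reconstruction, not Perry's exact calculation: it assumes the sampling error is approximately normal, so the expected mean absolute deviation is sqrt(2/pi) times the standard error, with the cluster factor of 1.25 applied as a multiplier on the standard error.

```python
import math

def expected_mean_deviation(p, n, cluster_factor=1.25):
    """Expected mean absolute deviation (percentage points) of a poll
    estimate, assuming simple random sampling inflated by a cluster
    factor, and approximate normality of the sampling error (so the
    mean absolute deviation is sqrt(2/pi) times the standard error)."""
    se = cluster_factor * math.sqrt(p * (1 - p) / n)  # standard error of a proportion
    return 100 * se * math.sqrt(2 / math.pi)

# Illustration: a two-party split near 50/50 with 1,500 likely voters
print(round(expected_mean_deviation(0.5, 1500), 2))  # about 1.29 points
```

Under these assumptions, the expected mean deviation for a typical national sample comes out near 1.3 percentage points, which is consistent with the magnitudes in the table.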
(2) Statistical estimates of uncertainty. Polling agencies are careful about reporting statistical confidence intervals. These are calculated using agreed-upon formulas for estimating the sampling error. Of course, there are sources of error other than sampling error. Nevertheless, the statistical confidence intervals are presented as a good basis for confidence. Buchanan examined the errors from the 155 elections and found a standard deviation of 2.6%. This implies a 95% confidence interval of ±5.1%. For the typical sample size, which he estimated to be about 1,500, the expected confidence interval would be ±2.5%. In other words, the actual errors were about twice what the estimated confidence intervals would suggest. Viewed another way, sampling error represented only about half of the total error. Perry’s analysis suggests that non-sampling errors are less important in the U.S. Note the last column of Perry’s data: the actual error is about 1.5 times the expected error. Perry believes that we are unlikely to improve upon the ratio of 1.3 that was experienced for the 1974-1984 period.
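Buchanan's comparison of expected and observed intervals is simple arithmetic, and a short sketch makes the gap concrete (the 50/50 split and the z-value of 1.96 are standard worst-case assumptions, not details taken from Buchanan):

```python
import math

z = 1.96   # 95% critical value of the normal distribution
n = 1500   # Buchanan's estimate of the typical sample size
p = 0.5    # worst-case two-party split (maximizes p*q)

# Half-width of the 95% interval implied by sampling theory alone
expected = 100 * z * math.sqrt(p * (1 - p) / n)

# Half-width implied by the observed standard deviation of 2.6 points
observed = z * 2.6

print(round(expected, 1), round(observed, 1))  # about 2.5 vs 5.1 points
```

The observed interval is roughly double the theoretical one, which is the basis for the claim that sampling error accounts for only about half of the total error.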
Until we devise better ways to estimate confidence factors, perhaps we should adopt ‘safety factors.’ For example, we could say ‘take the estimated confidence interval and multiply by 1.5 for U.S. elections.’ Alternatively, we might obtain subjective estimates of the uncertainty due to response and non-response errors, and then combine these with the estimate of sampling error.
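The second suggestion, combining a subjective estimate of non-sampling error with the sampling error, could be carried out by adding the two sources in quadrature. The function below is a hypothetical sketch of that idea (the independence assumption and the example 2-point non-sampling figure are illustrative, not from the text):

```python
import math

def total_interval(n, nonsampling_sd, p=0.5, z=1.96):
    """Hypothetical combined 95% half-width (percentage points):
    sampling error and a judgmental non-sampling error combined in
    quadrature, assuming the two sources are independent."""
    sampling_sd = 100 * math.sqrt(p * (1 - p) / n)
    total_sd = math.sqrt(sampling_sd**2 + nonsampling_sd**2)
    return z * total_sd

# n = 1,500 with a judgmental non-sampling SD of 2 points
print(round(total_interval(1500, 2.0), 1))  # about 4.7 points
```

Note that with a 2-point non-sampling component, the combined interval approaches the roughly ±5% error that Buchanan actually observed, which suggests why the purely statistical intervals understate the uncertainty.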
Acknowledgment: Paul Perry, retired from Gallup, provided data and useful comments for this review.