Lau examined eight factors that might affect the accuracy of a poll in predicting the vote for the U.S. President. Six were methodological factors: (1) sample size, (2) estimates of likely voters, (3) number of days that the interviewers were in the field, (4) whether the poll was a tracking poll (where a small sample is drawn each day, but reported results are summed across several days), (5) whether the poll was done on weekdays only, and (6) the strictness of the definition of a supporter of a candidate. In addition, two contextual factors were examined: percent undecided and days to election. Lau examined these variables for a sample of 56 national surveys conducted between August 31 and November 2, 1992. The data for each of the methodological factors showed wide variation. For example, sample sizes varied from 575 to 2086.
Which factors do you think were most closely related to accuracy? One might expect that sample size is closely related; it is common for the sample size to be reported along with the survey findings. While the findings were all in the expected direction, only factors related to nonresponse bias achieved statistical significance. The two variables that achieved a high level of significance (p < 0.01) were that tracking polls were more accurate and that surveys run on weekdays only were less accurate. The number of days that the interviewers were in the field was marginally significant (p < 0.1). Interestingly, the relationship between sample size and accuracy was very weak. (While this may seem surprising, it reinforces a finding by Crespi, 1988, who analyzed polls at national, state, and local levels for several different election years.) This is partly explained by the finding that sampling error often contributes less than half of the total error (Buchanan 1986). Also interesting is that the number of days to election was not related to accuracy. Given these findings, it would seem that polling organizations could save money by using smaller samples. They could improve accuracy by focusing more on the reduction of nonresponse error, including having interviewers spend more days in the field and avoiding surveys that are done on weekdays only. In addition, Lau recommends that polling organizations report nonresponse rates and abandon the common practice of reporting error margins based solely on sample size, because that gives a false sense of security.
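A small worked example helps show why a sample-size-only error margin can mislead. The sketch below computes the conventional 95% margin of error, z·√(p(1−p)/n), for the smallest and largest sample sizes in Lau's data; the choice of z = 1.96 and the worst-case p = 0.5 are standard conventions, not values from the source.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Conventional 95% margin of error based only on sample size n.

    Assumes simple random sampling and the worst-case proportion p = 0.5.
    Ignores nonresponse and other nonsampling errors entirely.
    """
    return z * math.sqrt(p * (1 - p) / n)

# Sample sizes at the extremes of Lau's 1992 sample of polls
for n in (575, 2086):
    print(f"n = {n}: +/- {100 * margin_of_error(n):.1f} points")
```

Nearly quadrupling the sample size shrinks the reported margin from about ±4.1 to about ±2.1 points, yet if sampling error accounts for less than half of the total error, the poll's actual accuracy improves far less than that headline figure suggests.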