What factors affect the accuracy of new product forecasting? In addressing this issue, Gartner and Thomas provide a bland abstract and a vague set of conclusions. Sandwiched in between are some interesting and useful findings. They use prior research to develop seven reasonable hypotheses that relate to new product forecasting by firms. For example, "The greater the use of more than one forecasting approach (data sources and methods), the more accurate the new-product forecasts."
They test their hypotheses by asking practitioners how they made their forecasts and relating this to the level of accuracy the practitioners reported. Such field studies of which methods aid accuracy are useful; unfortunately, they are also rare. Dalrymple (1987) did such a study of business firms, and Bretschneider et al. (1989) studied government forecasting (this latter source was not included in the Gartner and Thomas paper).
To obtain their data, Gartner and Thomas conducted a mail survey of new US software firms. The survey was demanding in that it required some effort to provide reliable answers. Despite employing reasonable procedures such as follow-ups, they received responses from only about 10% of the firms. (Unfortunately, few details were provided about the survey methodology, making it difficult to assess the degree of resistance by those receiving the questionnaire.) The low response rate probably does not pose a serious problem because the generalizations were based on comparisons between two groups of respondents. That is, each subsample was apparently subject to the same bias.
Each of 103 respondent firms reported the percentage error of its forecasts for each of "the first two years of your new product." The average absolute error for the first year was 47%, which is substantial.
For analysis, the firms were divided into 46 with relatively accurate forecasts (a typical absolute error of less than 25%) and 57 with larger errors. The findings should be of substantial interest to practitioners. Most were reassuring. Those firms that spent more on the forecasting process tended to have more accurate forecasts. Those that gathered more information about their intended customers had more accurate forecasts. Those that used more forecasting methods and data sources were more accurate. Test markets aided accuracy.
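The error measure and the two-group split can be illustrated with a short sketch. The firm data below are invented for illustration only; the paper reports just the aggregate figures (a 47% average absolute error, and a split at a 25% typical error):

```python
def absolute_percentage_error(forecast, actual):
    """|forecast - actual| / actual, expressed as a percentage."""
    return abs(forecast - actual) / actual * 100

# (forecast, actual) first-year sales for some hypothetical firms
firms = [(120, 100), (80, 100), (300, 150), (55, 50), (40, 100)]

errors = [absolute_percentage_error(f, a) for f, a in firms]

# Split firms the way the study does: under 25% absolute error
# counts as "relatively accurate," the rest as "larger errors."
accurate = [e for e in errors if e < 25]
less_accurate = [e for e in errors if e >= 25]

print(f"mean absolute error: {sum(errors) / len(errors):.0f}%")
print(f"relatively accurate firms (<25% error): {len(accurate)}")
print(f"firms with larger errors: {len(less_accurate)}")
```

Note that this divides by actual sales, so errors are unbounded above when sales fall far short of the forecast, which helps explain how an average error as large as 47% can arise.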
Some findings might surprise you, however. For example, the use of "technology/market diffusion curves" was associated (not significantly) with less accurate forecasts. This finding is consistent with the empirical results reported in Collopy et al. (1994). Also, pretest market models, new product concept tests, quantitative simulations, cross-impact analysis, quantitative analysis of sales history of similar products, competitive analysis, and beta-test sites did little to improve accuracy.
These findings should also be useful to researchers. Further research could extend this work by comparing (1) what experts believe to be associated with accuracy (e.g., Collopy and Armstrong 1992), (2) what is useful based on comparative studies of methods (e.g., the M-competition), and (3) studies of which methods are associated with accuracy in firms, such as this Gartner and Thomas study. Such research would be useful for practice (comparisons of #1 and #2 versus #3), for the implementation of research (comparisons of #2 versus #1 and #3), and for further research (#1 versus #2 and #3).