This paper seems likely to replace the Chambers, Mullick, and Smith (1971) paper as one of the best sellers among HBR reprints. As with the earlier paper, Georgoff and Murdick (G & M) provide managers with a checklist for selecting the best forecasting method for a given situation. This checklist contains 16 dimensions (e.g., time frequency) that are also stated as questions for the manager. For example, for time frequency, the question is, "Are frequent forecast updates needed?" For each of these dimensions (questions), the authors describe the advantages and disadvantages of each of 20 methods, such as exponential smoothing. This was a major undertaking, because their framework involved 320 cells. I examine four issues here: (1) Do the authors raise the right questions? (2) Do they include all relevant methods? (3) Do they provide the right answers? and (4) Is the checklist usable?
The questions. The questions are organized into four major categories: time, resource requirements, inputs, and outputs. Overall, it appears to be a comprehensive and well-formulated list. However, I believe that some important items were omitted. (1) In terms of inputs, it is also important to ask how much knowledge exists about the relationships between the dependent and independent variables (G & M ask only about the existence of data). (2) How reliable and valid are the data on the dependent variable? (Expert systems are particularly relevant where objective data on the dependent variable are lacking.) (3) In terms of outputs, the list should include questions about whether forecasts are needed to examine alternative futures (i.e., unfavorable vs. favorable environments, large changes in capabilities, or large changes in strategy). G & M focus on expected changes in these areas, not on "what if" or conditional forecasts. (4) No consideration was given to the question of how to gain acceptance of the forecast within the organization. For example, is it important that the users understand the forecasting methods?
The methods. The list of methods is fairly comprehensive, yet here again there are important omissions. For example, no mention was made of expert systems (sometimes called judgmental bootstrapping). Also, in my admittedly biased opinion, another important omission was role-playing (Armstrong 1987).
The answers. Chambers et al. did not bother to examine the evidence when they created their guidelines; they based them on their practical experience. To their credit, G & M did examine the research literature, but consider how much research is relevant to those 320 cells (or, had they adopted my additions, the 440 cells they would have had). Recognizing the immense task faced by the authors, it was inevitable that some research was overlooked. Consequently, the descriptions in many of the cells are inconsistent with the evidence. For example, scenarios have often been recommended as a method for long-range forecasting; I am one of those who have recommended this. Recent research has challenged this proposal. As discussed in my review (Armstrong 1985, pp. 40-45), scenarios are likely to distort judgmental forecasts. As a result, I do not believe that they should be used to make long-range forecasts. They are, however, relevant for the goal of gaining acceptance of forecasts.
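The cell counts above follow from simple arithmetic, given G & M's 16 dimensions and 20 methods and my suggested additions of four questions and two methods:

```latex
\underbrace{16}_{\text{dimensions}} \times \underbrace{20}_{\text{methods}} = 320 \text{ cells}
\qquad
\underbrace{(16+4)}_{\text{dimensions}} \times \underbrace{(20+2)}_{\text{methods}} = 20 \times 22 = 440 \text{ cells}
```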
I was perplexed by the recommendations with respect to short-, medium-, and long-range forecasting. For example, I do not believe that consumer market surveys are relatively good for medium-range forecasting; rather, they do best in short-range forecasting. Georgoff (personal communication) explained that this was due to the authors' definitions of the time span: short being one to three months, medium being three to 24 months, and long-range being more than 24 months. These times are considerably shorter than I had expected. Unfortunately, owing to space limitations, the definitions of short, medium, and long term had been excluded from the Georgoff and Murdick paper. Nevertheless, G & M state that moving averages and Box-Jenkins are appropriate for long-range forecasting. This may be helpful in some situations, but in general it seems a bit risky.
Econometric methods are described in the G & M table as giving "spotty performance in dynamic environments" when it comes to accuracy. As shown in Armstrong (1985, Chapter 15) and Fildes (1985), econometric methods are, relative to other methods, especially good in these situations. In short, I believe that G & M’s answers contain some misinformation along with the sound information.