Review of: 
Stephen K. McNees (1992), 'The uses and abuses of
"consensus" forecasts', Journal of Forecasting, 11, 703-710.
This study examined the value of combined forecasts by using the Blue Chip
macroeconomic forecasts issued each October from 1977 through 1988, a total of 11 years.
Seven variables were forecast, yielding a total of 77 annual forecasts from each of the 22
forecasters in the consensus. This represents all of the forecasters who had made
forecasts for all variables for each of the 11 years. The conclusions were as follows:
• The equal-weights combined forecast was better than the individual forecasts about two-thirds
of the time, and on average, the error of the combined forecast was about 7% smaller than that
of the average individual forecaster.
• Forecasters displayed little 'skill' in the sense that those who were accurate
in forecasting one variable were not more accurate in forecasting other variables.
• The accuracy of the mean and the median forecasts was similar. The median did slightly
better than the mean when the Mean Absolute Error was used as the criterion, while it was
the reverse when the criterion was the Root Mean Square Error.
• Forecasters often use the range of forecasts provided by a group of experts as a rough
gauge of confidence, although it is difficult to specify how the range should relate to
confidence. In his analysis of the 22 forecasters, McNees found that the percentage of
actual values that fell outside the forecasters' range varied from 27% to 54%, averaging
about 43% of the forecasts. This suggests that the truth often lies outside the current
range of opinions.
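The two combination rules and the range check described above can be sketched briefly. The numbers below are made up for illustration; they are not from the Blue Chip data:

```python
# Hypothetical panel of forecasts for one variable (not McNees's data).
forecasts = [1.8, 2.1, 2.4, 2.6, 3.0]
actual = 2.0

# Equal-weights combination: the simple mean of the panel's forecasts.
mean_combo = sum(forecasts) / len(forecasts)

# Median combination: the middle forecast of the sorted panel.
median_combo = sorted(forecasts)[len(forecasts) // 2]

# Range check: did the actual value fall inside the panel's range?
inside = min(forecasts) <= actual <= max(forecasts)

print(mean_combo, median_combo, inside)
```

McNees's finding is that the actual value fell outside the panel's range in roughly 43% of cases, so the `inside` check above would fail surprisingly often in practice.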
Comment on Armstrong’s Summary by Stephen K. McNees
I believe Armstrong’s summary misses the main point of my article. For any
set of numbers (e.g. forecast errors), the absolute value of their mean is less than the
mean of their absolute values (or equal if all numbers are of the same sign). For any
set of (non-identical) numbers, the squared value of their mean is less than the mean of
their individual squared values. These well-known tautologies imply for every
variable and every observation that the ‘combination’ forecast cannot be
worse than average. In contrast, an individual forecast can be (and often is) worse than
average. Several forecasters in this study outperformed the mean or median forecast for
all variables except the one for which they were well below average. These facts help to
explain why the 'consensus' forecast performs so well over large sets of
variables, whereas, for a single variable, a sizable minority (e.g. one third) of
individuals are typically more accurate than the ‘consensus’ or combination
forecast.
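The tautologies McNees invokes can be checked numerically. A minimal sketch, using randomly generated errors rather than any real forecast data:

```python
import random

# Simulated errors for 22 hypothetical forecasters of one variable.
random.seed(1)
actual = 2.5
forecasts = [actual + random.gauss(0, 1) for _ in range(22)]
errors = [f - actual for f in forecasts]

# The combination forecast's error equals the mean of the individual errors,
# so its absolute error is |mean(e)|, while the average individual incurs mean(|e|).
combo_abs_error = abs(sum(errors) / len(errors))
avg_abs_error = sum(abs(e) for e in errors) / len(errors)

# Triangle inequality: |mean(e)| <= mean(|e|), with equality only when
# all errors share the same sign.
assert combo_abs_error <= avg_abs_error

# Squared-error version: (mean e)^2 <= mean(e^2), with equality only when
# all errors are identical.
combo_sq_error = (sum(errors) / len(errors)) ** 2
mean_sq_error = sum(e * e for e in errors) / len(errors)
assert combo_sq_error <= mean_sq_error
```

Note that these inequalities hold variable by variable, which is McNees's point: the combination cannot do worse than the average individual, yet any given individual can still beat the combination on a particular variable.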
