The 2009 INFORMS Data Mining Contest is the second installment of a data mining contest begun last year in conjunction with the INFORMS conference. The contest again involves predictive problems in health care quality. This year's two tasks are: 1) modeling a guideline for transferring patients with a severe medical condition from a community hospital to a tertiary hospital provider, and 2) assessing the severity/risk of death of a patient's condition.

Predictions must be submitted by September 25. You may also want to consider giving a talk in the Contest session at the INFORMS conference; acceptance will be based on successful submissions to the Contest. Contestants are encouraged to prepare a paper describing their methods and insights for consideration for publication in a Special Issue of a data mining journal (to be determined). See the Contest site for more information.

A new working paper by Scott Armstrong and Andreas Graefe describes an index-method model based on 49 biographical variables that was used to predict the outcomes of 28 U.S. presidential elections; it correctly predicted 25. Its out-of-sample forecasts were more accurate than forecasts from 12 benchmark models. The authors welcome peer review.

Wright and MacRae used a large-scale meta-analysis to identify bias and variability in forecasts from purchase-intention scales. They found that converting purchase intentions to linear probability scales or proportions resulted in unbiased forecasts. The same was true of 11-point probability scales, which also had lower dispersion of forecast errors. This result gives strong support to Principle 11.4, as scale-point adjustments were not required to obtain unbiased forecasts. It also supports Principle 8.4, as the use of the longer 11-point scale reduced forecast error, and Principle 8.7, as there was much greater variability in forecast errors for studies with small samples. The meta-analysis was restricted to existing products and services and did not investigate accuracy for new products.

Does the common advice to "stand in the other person's shoes" lead to better forecasts? In a new paper, Kesten Green and Scott Armstrong describe evidence that it does not. The abstract and a link to the full text are available on the Papers page, in the section titled "Working papers by authors seeking reviews and advice". We encourage authors to take advantage of this opportunity to get early feedback on papers relevant to forecasting principles, and the wider forecasting community to keep up to date with the latest research.

A special section, Time Series Monitoring, with five papers and an introduction by editors Wilpen Gorr and Keith Ord, appears in the July-September 2009 issue of the International Journal of Forecasting. The papers take a fresh look at this field, with new societal applications and new methods and frameworks.

Following are excerpts from the editors' introduction:

“Screening products, populations, or territories for exceptional changes in demand for products or services is an important management activity, whether for the prevention of losses or to take advantage of opportunities. In either event, managers must make decisions that interrupt normal operations and reallocate resources. To trigger such activity, time series monitoring has the purpose of automatically detecting outliers and structural changes, such as step increases or decreases, in time series data as soon as possible after they occur and with sufficiently few false alarms.”
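The detection task described above, flagging a step change soon after it occurs while limiting false alarms, can be illustrated with a basic CUSUM monitor. This is a generic sketch, not a method from the special section; the allowance `k`, threshold `h`, and example data are invented for illustration.

```python
# Minimal one-sided CUSUM monitor for detecting a step increase in a series.
# Generic illustration only; the allowance (k) and threshold (h) are arbitrary.

def cusum_alarms(series, target, k=0.5, h=5.0):
    """Return indices where the upper CUSUM statistic exceeds threshold h."""
    s = 0.0
    alarms = []
    for i, x in enumerate(series):
        # Accumulate deviations above target, discounted by the allowance k.
        s = max(0.0, s + (x - target) - k)
        if s > h:
            alarms.append(i)
            s = 0.0  # reset the statistic after raising an alarm
    return alarms

# Example: a series whose mean steps up from 10 to 14 at index 10.
data = [10, 10, 11, 9, 10, 10, 11, 10, 9, 10,
        14, 14, 15, 13, 14, 14, 15, 14, 13, 14]
print(cusum_alarms(data, target=10.0))
# → [11, 13, 15, 17, 19]: the first alarm comes at index 11,
# one period after the step, with no false alarms before it.
```

Raising `h` reduces false alarms at the cost of slower detection, which is exactly the trade-off the excerpt describes.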

“What is new now? One development is the application of time series monitoring to social issues, such as communicable disease detection and crime prevention. There is even less ability to control associated processes than with product demand and there is the added richness of spatial patterns becoming prominent features of exceptional behavior. In addition, the costs and benefits of monitoring need to be addressed as a matter of public policy. In response, this special section provides new theoretical results on forecasting and monitoring, new monitoring methods that include spatial components and take advantage of advances in computer science (spatial scan statistic), the application of an evaluative framework that is non-parametric and includes explicit and practical methods for achieving an optimum cost/benefit balance (receiver operating characteristic curves), and new estimation methods that better accommodate non-stationarities (state space framework).”
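The cost/benefit calibration idea behind the ROC framework can be sketched simply: sweep a detection threshold over alarm scores and pick the one that maximizes expected benefit from true alarms minus the cost of false alarms. This is a generic illustration with invented scores, labels, and cost values, not the calibration method from the papers.

```python
# Sketch of ROC-style threshold calibration: choose the alarm threshold
# that maximizes (benefit * true positives - cost * false positives).
# Scores, labels, and the benefit/cost values are invented for illustration.

def best_threshold(scores, labels, benefit=10.0, cost=1.0):
    """scores: alarm statistic per period; labels: 1 if a real event occurred."""
    best_t, best_value = None, float("-inf")
    for t in sorted(set(scores)):  # each observed score is a candidate threshold
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        value = benefit * tp - cost * fp
        if value > best_value:
            best_t, best_value = t, value
    return best_t, best_value

scores = [0.1, 0.4, 0.35, 0.8, 0.7, 0.2, 0.9, 0.6]
labels = [0,   0,   1,    1,   1,   0,   1,   0]
print(best_threshold(scores, labels))
# → (0.35, 38.0): alarming at scores >= 0.35 catches all four
# events while incurring only two false alarms.
```

Changing the benefit/cost ratio moves the chosen operating point along the ROC curve, which is how such a framework makes the policy trade-off explicit.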

 

1. Introduction to time series monitoring (pages 463-466). Wilpen L. Gorr, J. Keith Ord.

2. How does improved forecasting benefit detection? An application to biosurveillance (pages 467-483). Thomas H. Lotze, Galit Shmueli.

3. Empirical calibration of time series monitoring methods using receiver operating characteristic curves (pages 484-497). Jacqueline Cohen, Samuel Garman, Wilpen Gorr.

4. Expectation-based scan statistics for monitoring spatial time series data (pages 498-517). Daniel B. Neill.

5. Monitoring processes with changing variances (pages 518-525). J. Keith Ord, Anne B. Koehler, Ralph D. Snyder, Rob J. Hyndman.

6. Incorporating a tracking signal into a state space model (pages 526-530). Ralph D. Snyder, Anne B. Koehler.