‘Horses for Courses’ in demand forecasting
Research output: Contribution to journal › Article › peer-review
Standard
In: European Journal of Operational Research, Vol. 237, No. 1, 22.02.2014, p. 152-163.
RIS
TY - JOUR
T1 - ‘Horses for Courses’ in demand forecasting
AU - Petropoulos, F.
AU - Makridakis, S.
AU - Assimakopoulos, V.
AU - Nikolopoulos, K.
PY - 2014/2/22
Y1 - 2014/2/22
N2 - Forecasting as a scientific discipline has progressed a lot in the last 40 years, with Nobel prizes being awarded for seminal work in the field, most notably to Engle, Granger and Kahneman. Despite these advances, even today we are unable to answer a very simple question, the one that is always the first tabled during discussions with practitioners: “what is the best method for my data?”. In essence, as there are horses for courses, there must also be forecasting methods that are more tailored to some types of data, and, therefore, enable practitioners to make informed method selection when facing new data. The current study attempts to shed light on this direction via identifying the main determinants of forecasting accuracy, through simulations and empirical investigations involving 14 popular forecasting methods (and combinations of them), seven time series features (seasonality, trend, cycle, randomness, number of observations, inter-demand interval and coefficient of variation) and one strategic decision (the forecasting horizon). Our main findings dictate that forecasting accuracy is influenced as follows: (a) for fast-moving data, cycle and randomness have the biggest (negative) effect and the longer the forecasting horizon, the more accuracy decreases; (b) for intermittent data, inter-demand interval has bigger (negative) impact than the coefficient of variation; and (c) for all types of data, increasing the length of a series has a small positive effect.
AB - Forecasting as a scientific discipline has progressed a lot in the last 40 years, with Nobel prizes being awarded for seminal work in the field, most notably to Engle, Granger and Kahneman. Despite these advances, even today we are unable to answer a very simple question, the one that is always the first tabled during discussions with practitioners: “what is the best method for my data?”. In essence, as there are horses for courses, there must also be forecasting methods that are more tailored to some types of data, and, therefore, enable practitioners to make informed method selection when facing new data. The current study attempts to shed light on this direction via identifying the main determinants of forecasting accuracy, through simulations and empirical investigations involving 14 popular forecasting methods (and combinations of them), seven time series features (seasonality, trend, cycle, randomness, number of observations, inter-demand interval and coefficient of variation) and one strategic decision (the forecasting horizon). Our main findings dictate that forecasting accuracy is influenced as follows: (a) for fast-moving data, cycle and randomness have the biggest (negative) effect and the longer the forecasting horizon, the more accuracy decreases; (b) for intermittent data, inter-demand interval has bigger (negative) impact than the coefficient of variation; and (c) for all types of data, increasing the length of a series has a small positive effect.
U2 - 10.1016/j.ejor.2014.02.036
DO - 10.1016/j.ejor.2014.02.036
M3 - Article
VL - 237
SP - 152
EP - 163
JO - European Journal of Operational Research
JF - European Journal of Operational Research
SN - 0377-2217
IS - 1
ER -
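
Two of the series features named in the abstract, the inter-demand interval and the coefficient of variation, are standard descriptors of intermittent demand. The sketch below shows how they might be computed, assuming the usual conventions from the intermittent-demand literature (average number of periods between non-zero demands, and the coefficient of variation of the non-zero demand sizes); the function names are illustrative and are not taken from the paper.

# Illustrative sketch only: computes two of the time series features mentioned
# in the abstract, using common intermittent-demand definitions (assumed, not
# taken from the paper itself).
import numpy as np

def inter_demand_interval(demand):
    """Average number of periods between consecutive non-zero demands."""
    nonzero_idx = np.flatnonzero(np.asarray(demand, dtype=float))
    if len(nonzero_idx) < 2:
        return float("nan")  # undefined with fewer than two demand occurrences
    return float(np.mean(np.diff(nonzero_idx)))

def coefficient_of_variation(demand):
    """Coefficient of variation of the non-zero demand sizes."""
    sizes = np.asarray(demand, dtype=float)
    sizes = sizes[sizes != 0]
    if len(sizes) == 0 or np.mean(sizes) == 0:
        return float("nan")
    return float(np.std(sizes) / np.mean(sizes))

# Example: a sparse (intermittent) demand series
series = [0, 3, 0, 0, 5, 0, 4, 0, 0, 0, 2, 0]
print(inter_demand_interval(series))     # average gap between non-zero demands
print(coefficient_of_variation(series))  # relative dispersion of demand sizes

Larger values of either feature indicate a harder-to-forecast series, which is consistent with the abstract's finding that the inter-demand interval has a bigger negative impact on accuracy than the coefficient of variation for intermittent data.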