Finite sample weighting of recursive forecast errors

Chris Brooks, Simon P. Burke, S. Stanescu

Research output: Contribution to journal › Article (Academic Journal) › Peer-reviewed

Abstract

This paper proposes and tests a new framework for weighting recursive out-of-sample prediction errors according to their corresponding levels of in-sample estimation uncertainty. In essence, we show how to use the maximum possible amount of information from the sample in the evaluation of the prediction accuracy, by commencing the forecasts at the earliest opportunity and weighting the prediction errors. Via a Monte Carlo study, we demonstrate that the proposed framework selects the correct model from a set of candidate models considerably more often than the existing standard approach when only a small sample is available. We also show that the proposed weighting approaches result in tests of equal predictive accuracy that have much better sizes than the standard approach. An application to an exchange rate dataset highlights relevant differences in the results of tests of predictive accuracy based on the standard approach versus the framework proposed in this paper.
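The abstract's core idea, weighting recursive out-of-sample errors by their in-sample estimation uncertainty so that forecasts can start at the earliest opportunity, can be illustrated with a small sketch. The weighting scheme below (inverse of an estimated forecast-error variance that includes parameter uncertainty, `sigma^2 * (1 + x'(X'X)^{-1}x)`) is an assumption chosen for illustration, not necessarily the exact weights derived in the paper; the AR(1) model, sample size, and minimal estimation window `t0` are likewise hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a small AR(1) sample: y_t = 0.5 * y_{t-1} + eps_t.
T = 80
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.5 * y[t - 1] + rng.standard_normal()

# Recursive one-step forecasts, starting at the earliest opportunity
# (here a minimal estimation window of t0 = 10 observations; early
# forecasts rest on very imprecise parameter estimates).
t0 = 10
errors, weights = [], []
for t in range(t0, T):
    # OLS fit of y_s on (1, y_{s-1}) using observations up to index t-1.
    X = np.column_stack([np.ones(t - 1), y[: t - 1]])
    z = y[1:t]
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    resid = z - X @ beta
    sigma2 = resid @ resid / max(len(z) - 2, 1)

    # One-step-ahead forecast of y_t and its error.
    x_next = np.array([1.0, y[t - 1]])
    e = y[t] - x_next @ beta

    # Hypothetical uncertainty proxy: forecast-error variance including
    # parameter estimation uncertainty, sigma^2 * (1 + x'(X'X)^{-1} x).
    var_fc = sigma2 * (1.0 + x_next @ np.linalg.solve(X.T @ X, x_next))

    errors.append(e)
    weights.append(1.0 / var_fc)  # down-weight high-uncertainty early errors

errors = np.asarray(errors)
weights = np.asarray(weights)

msfe_plain = float(np.mean(errors ** 2))
msfe_weighted = float(np.sum(weights * errors ** 2) / np.sum(weights))
print(msfe_plain, msfe_weighted)
```

Early recursive errors, produced from short estimation samples, receive small weights, so the weighted mean squared forecast error uses the full sample without letting the noisiest early forecasts dominate, which is the intuition behind the framework's improved model selection in small samples.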
Original language: English
Pages (from-to): 458-474
Number of pages: 17
Journal: International Journal of Forecasting
Volume: 32
Issue number: 2
Publication status: Published, 1 Mar 2016
