Many articles discuss how to measure forecastability for deterministic forecasting. However, a lot fall into the trap of suggesting a plain coefficient of variation (CV) measure, even though CV counts forecastable patterns such as seasonality and trend as variation and therefore mistakenly flags such series as hard to forecast.
The article linked here, by Stefan de Kok, does a good job of explaining the trap of pure CV and proposes an alternative.
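To make the CV trap concrete, here is a minimal sketch (my own synthetic example, assuming NumPy and a made-up monthly series) of a zero-noise seasonal series that CV flags as highly variable even though a seasonal-naive forecast reproduces it exactly:

```python
import numpy as np

# Synthetic example: a purely seasonal monthly series with no noise at all.
t = np.arange(120)  # 10 years of monthly data
y = 100 + 50 * np.sin(2 * np.pi * t / 12)

# The coefficient of variation counts the seasonality itself as "variation" ...
cv = y.std() / y.mean()  # roughly 0.35, i.e. "hard to forecast" by the CV rule

# ... yet a seasonal-naive forecast (repeat last year's value) is exact.
error = np.abs(y[12:] - y[:-12]).mean()  # essentially zero
```

The same CV threshold that writes this series off as volatile would pass a flat but genuinely noisy series, which is the inversion the article warns about.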
I'm a bit torn, though, between using this type of measure and computing the FAA of a simple benchmark (such as a moving average). The FAA gives you a minimum acceptable accuracy level, whereas the proposed method gives a measure that (typically) can be reported on a scale from 0 (unforecastable) to 1 (no noise).
Do any of you have experience implementing this that you can share? Especially on the stakeholder/change-management side.
The only way to assess the "forecastability" of a time series is to use a forecasting model as a baseline. This is exactly what the article does, but unfortunately it means that if the baseline model is poor, the "forecastability" assessment will be poor as well. There is no work-around for that.
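To illustrate that baseline dependence, here is a hedged sketch (not de Kok's exact measure; the `forecastability_score` helper and its 1 − MAE-ratio form are my own illustrative choices) that scores the same clean seasonal series once with a well-matched baseline and once with a poor one:

```python
import numpy as np

def forecastability_score(y, baseline_forecast):
    """Illustrative score in [0, 1]: 1 - MAE(baseline) / MAE(naive).
    1 means the baseline is perfect; 0 means it is no better than
    simply repeating the previous observation."""
    mae_naive = np.mean(np.abs(np.diff(y)))                # naive: repeat last value
    mae_base = np.mean(np.abs(y[1:] - baseline_forecast[1:]))
    return float(np.clip(1.0 - mae_base / mae_naive, 0.0, 1.0))

# The same noise-free seasonal series as before.
t = np.arange(120)
y = 100 + 50 * np.sin(2 * np.pi * t / 12)

# Good baseline: seasonal-naive (repeat the value from one year ago).
seasonal = np.concatenate([y[:12], y[:-12]])

# Poor baseline: trailing 3-month moving average, which lags the seasonality.
ma3 = np.convolve(y, np.ones(3) / 3, "valid")   # mean of y[t-2 : t+1]
ma_forecast = np.concatenate([y[:3], ma3[:-1]]) # forecast y[t] from prior 3 months

score_seasonal = forecastability_score(y, seasonal)   # near 1: looks forecastable
score_ma = forecastability_score(y, ma_forecast)      # near 0: looks unforecastable
```

The lagging moving average actually does worse than the naive forecast on this series, so the score bottoms out at zero: the series did not change, only the baseline did, which is exactly the caveat above.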
Stepping back, one of the things I learned more than a decade ago at Lokad is that all forecasting metrics are moot unless they are connected to the euros or dollars attached to tangible supply chain decisions. This is true for deterministic and probabilistic forecasts alike, although the problem becomes more apparent when probabilistic forecasts are used.