
A large forecasting competition called M4 [1] recently published their results. If you're interested in forecasting I suggest checking out their summary paper. [2]

Highlights include:

* Pure ML methods are still not competitive with statistical models;

* Ensembles perform better than any single model, an important difference from the last competition;

* Very simple benchmarks can perform very well on this type of competition (see the short sketch after this list).

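To make that last point concrete, here is a minimal sketch of a seasonal-naïve benchmark (repeat the value from one seasonal period ago). The function name, the monthly period of 12, and the toy data are my own assumptions for illustration, not anything from the M4 paper.

    import numpy as np

    def seasonal_naive(y, horizon, season=12):
        """Forecast by repeating the last observed seasonal cycle.

        y       : 1-D array of historical observations
        horizon : number of steps to forecast
        season  : seasonal period (12 assumed here, i.e. monthly data)
        """
        y = np.asarray(y, dtype=float)
        last_cycle = y[-season:]
        # Tile the final observed cycle and cut it to the requested horizon.
        reps = int(np.ceil(horizon / season))
        return np.tile(last_cycle, reps)[:horizon]

    # Example: 3 years of noisy monthly data, forecast the next 6 months.
    history = np.arange(36) % 12 + np.random.normal(0, 0.1, 36)
    print(seasonal_naive(history, horizon=6))
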
The top 3 included multiple statistical models feeding into an RNN (by an Uber engineer), another ensemble that used XGBoost for the final layer, and a combination of purely statistical methods with a clever weighting scheme.

If you're interested in making production-level predictions, it's probably a good idea to ensemble Prophet with other methods.
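
As a rough illustration of that kind of ensembling, here's a minimal sketch that averages a Prophet forecast with a Holt-Winters exponential-smoothing forecast from statsmodels. The equal 50/50 weighting, the monthly frequency, and the dataframe layout are my own assumptions; in practice you'd tune the weights on a holdout set.

    import pandas as pd
    from prophet import Prophet  # older installs expose this as fbprophet
    from statsmodels.tsa.holtwinters import ExponentialSmoothing

    def ensemble_forecast(df, horizon=12, seasonal_periods=12):
        """Average a Prophet forecast with a Holt-Winters forecast.

        df must use Prophet's expected columns: 'ds' (dates) and 'y' (values).
        The 50/50 weighting is an assumption, not a recommendation from M4.
        """
        # Prophet forecast
        m = Prophet()
        m.fit(df)
        future = m.make_future_dataframe(periods=horizon, freq="MS")
        prophet_fcst = m.predict(future)["yhat"].tail(horizon).to_numpy()

        # Holt-Winters (additive trend + seasonality) forecast
        hw = ExponentialSmoothing(
            df["y"], trend="add", seasonal="add",
            seasonal_periods=seasonal_periods,
        ).fit()
        hw_fcst = hw.forecast(horizon).to_numpy()

        # Simple unweighted average of the two forecasts
        return (prophet_fcst + hw_fcst) / 2.0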

[1] https://en.wikipedia.org/wiki/Makridakis_Competitions

[2] https://www.scribd.com/document/382185710/IJF-Published-M4-P...



Those highlights match my field experience.

I've found that ensembles aren't necessarily more accurate for any individual forecast, but in terms of aggregate error over the long run they end up being less wrong (the bias-variance tradeoff).
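
A toy simulation of that effect (the numbers are made up purely to illustrate the variance-reduction side of the tradeoff): averaging several unbiased but independently noisy forecasters pushes the ensemble's mean squared error well below the typical individual error.

    import numpy as np

    rng = np.random.default_rng(0)
    truth = 100.0
    n_models, n_trials = 5, 10_000

    # Each model is an unbiased forecaster with independent noise (assumption).
    forecasts = truth + rng.normal(0, 10, size=(n_trials, n_models))

    individual_mse = ((forecasts - truth) ** 2).mean()
    ensemble_mse = ((forecasts.mean(axis=1) - truth) ** 2).mean()

    print(f"average individual MSE: {individual_mse:.1f}")  # roughly 100
    print(f"ensemble (mean) MSE:    {ensemble_mse:.1f}")    # roughly 100 / 5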



