
That was true in the first Makridakis competition ("M1") in 1982, and possibly until M4 in 2018, but both M5 and M6 were won by what would generally be considered relatively sophisticated methods (e.g. LightGBM).

The Wikipedia article doesn't have much detail on M5 or M6, but the M5 papers are in the International Journal of Forecasting [1], and the M6 results should be published later this year (there's already a preprint on arXiv [2]).

I recently spent some time looking into the history and results of the M competitions and had a chance to speak to Professor Makridakis about them, as well as the winners of each of the M6 competition tracks [3]. While the methods have become more sophisticated, some conclusions from M1 still seem to hold: in particular, that there is no overall "best" method, and that the winning method tends to be different for different types of data, time horizons, and evaluation metrics.

[1]: https://www.sciencedirect.com/science/article/pii/S016920702...

[2]: https://arxiv.org/abs/2310.13357

[3]: https://mlcontests.com/state-of-competitive-machine-learning...


Our basic low-dimensional parametric model ranked No. 1 at the SKU level in M5; see my lecture: https://www.lokad.com/tv/2022/1/5/no1-at-the-sku-level-in-th... (more references at the bottom)


Interesting, thanks for sharing!


A recent thread on Amazon's new Chronos forecasting model showed that an ensemble of simple models outperformed it (a heavily parametrized transformer model) on the M competition datasets.

https://github.com/Nixtla/nixtla/tree/main/experiments/amazo...
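For context, the "simple models" in that comparison are classical statistical baselines. Here's a minimal sketch of what such an ensemble can look like (my own illustration, not the code from that repo):

    import numpy as np

    def naive(y, h):
        # Repeat the last observed value.
        return np.full(h, y[-1])

    def seasonal_naive(y, h, m=12):
        # Repeat the last full season (season length m assumed).
        return np.array([y[-m + (i % m)] for i in range(h)])

    def drift(y, h):
        # Extrapolate the average historical step.
        slope = (y[-1] - y[0]) / (len(y) - 1)
        return y[-1] + slope * np.arange(1, h + 1)

    def simple_ensemble(y, h):
        # Median across baselines; combining a few cheap forecasters
        # is often surprisingly hard to beat.
        return np.median([naive(y, h), seasonal_naive(y, h), drift(y, h)], axis=0)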


XGBoost, LightGBM, and CatBoost are all used quite frequently in competitions. LightGBM is actually marginally more popular than the other two now, but it's pretty close. In the M5 forecasting competition a few years back, many of the top solutions used primarily LightGBM.
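For anyone wondering how a gradient-boosting library ends up doing forecasting at all: the usual trick is to reframe the series as tabular supervised learning with lag features. A toy sketch (illustrative only, not an actual M5 solution):

    import lightgbm as lgb
    import numpy as np
    import pandas as pd

    # Toy series standing in for real sales data.
    rng = np.random.default_rng(0)
    y = pd.Series(rng.normal(size=500).cumsum())

    # Lag features turn forecasting into plain tabular regression.
    X = pd.DataFrame({f"lag_{k}": y.shift(k) for k in (1, 7, 28)}).dropna()
    target = y.loc[X.index]

    model = lgb.LGBMRegressor(n_estimators=200, learning_rate=0.05)
    model.fit(X.iloc[:-28], target.iloc[:-28])  # hold out the last 28 steps
    preds = model.predict(X.iloc[-28:])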


Yeah, this book was incredible and the tech in it has aged extremely well. Have you tried any of Ted Chiang's books? They're also great hard sci-fi. Another one that plays with similar ideas to Permutation City is the Bobiverse series by Dennis E. Taylor.


I read the Bobiverse series, which was pretty good; I liked all but the last book.

Thanks!


Also relevant: https://aimoprize.com/

($10m prize fund for AI models that can perform well on IMO problems)


Agreed, and not only do they not compare their model to Phi-2 directly, the benchmarks they report don't overlap with the ones in the Phi-2 post[1], making it hard for a third party to compare without running benchmarks themselves.

(In turn, in the Phi-2 post they compare Phi-2 to Llama-2 instead of CodeLlama, making it even harder)

[1]: https://www.microsoft.com/en-us/research/blog/phi-2-the-surp...


There have been quite a few interesting Kaggle competitions in recent years, as well as other interesting ML/data science competitions on other platforms.

Platforms like Kaggle, DrivenData, Zindi, AIcrowd, CodaLab, and others run dozens to hundreds of competitions a year in total, including ones linked to top academic conferences. One interesting recent example is the LLM efficiency challenge, which tests how far people can get fine-tuning an LLM with just one GPU in 24 hours: https://llm-efficiency-challenge.github.io/
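To give a sense of what fits in that budget: parameter-efficient methods like LoRA, which train small adapter matrices instead of the full model, are a popular way to squeeze fine-tuning onto a single GPU. A sketch using Hugging Face's peft library (gpt2 here is just a stand-in base model, not necessarily what entrants used):

    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    model = AutoModelForCausalLM.from_pretrained("gpt2")  # stand-in base model
    lora = LoraConfig(
        r=8,                        # rank of the adapter matrices
        lora_alpha=16,
        target_modules=["c_attn"],  # gpt2's fused attention projection
        lora_dropout=0.05,
    )
    model = get_peft_model(model, lora)
    model.print_trainable_parameters()  # only a tiny fraction is trainable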

Or the Makridakis series of challenges, running since the 80s, which are a great testbed for time-series models (the 6th one finished just last year): https://mofc.unic.ac.cy/the-m6-competition/


An interesting approach I came across at NeurIPS a few weeks ago is called "ML with Requirements"[1]: https://arxiv.org/abs/2304.03674

My basic understanding is that it combines "standard" supervised learning techniques (neural nets + SGD) with a set of logical requirements (e.g. in the case of annotating autonomous driving data, things like "a traffic light cannot be red and green at the same time"). The logical requirements not only make the solution more practically useful, but can also help it learn the "right" solution with less labelled data.
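For intuition, here's a minimal sketch of the general idea (my own illustration, not the authors' code): turn a requirement into a differentiable penalty on top of a standard multi-label loss.

    import torch
    import torch.nn.functional as F

    def requirement_penalty(probs):
        # Mutual-exclusion requirement: a traffic light cannot be red
        # and green at once. With probs[:, 0] = P(red) and
        # probs[:, 1] = P(green), the product is only large when the
        # model predicts both, so minimising it enforces consistency.
        return (probs[:, 0] * probs[:, 1]).mean()

    def loss_fn(logits, labels, lam=1.0):
        probs = torch.sigmoid(logits)
        bce = F.binary_cross_entropy(probs, labels)    # standard supervised loss
        return bce + lam * requirement_penalty(probs)  # plus the requirement term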

[1] I don't know if they had a NeurIPS paper about this; I was talking to the authors about the NeurIPS competition they were running related to this approach: https://sites.google.com/view/road-r/home


I had a chance to chat to them again today, and wrote some more details here: https://mlcontests.com/neurips-2023/tutorials/#exhibit-hall

Also, as the other comment mentioned, https://positron.ai seems to be live now.


Thanks! It looks like the ASIC inference space (if we can call it that) is getting more popular. There's also https://www.etched.ai/, which I came across recently.

I didn't follow ASIC mining during the Bitcoin bubble, but I have the impression it was the way to go for mining. I don't see why that wouldn't be true for inference, as long as one is OK with being limited in flexibility and wedded to a particular architecture.


I built a slightly less detailed version of this, which also lists free credits: https://cloud-gpus.com/

Open to any feedback/suggestions! Will be adding 4090/H100 shortly.


Having your own GPU is nice, but it's power-hungry, heats the room, and only really makes sense if you use it a lot.

For a list of cloud GPU providers with rough price comparisons, I created this page: https://cloud-gpus.com


Having your own GPU makes sense if it works out cheaper than the cloud equivalent (not just the GPU itself but everything else: bandwidth, storage fees, etc.).

Given the prices of cloud infrastructure, it doesn't take much usage before walking to your local computer store and buying a GPU (or more!) becomes more cost-effective.
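As a rough worked example (every number below is an illustrative assumption, not a quote from any provider):

    # Back-of-the-envelope break-even for buying vs renting a GPU.
    gpu_price = 1600.0   # consumer 4090-class card (assumed)
    rig_extra = 900.0    # PSU/case/CPU/RAM amortised onto this GPU (assumed)
    power_kw = 0.45      # draw under load, in kW (assumed)
    kwh_price = 0.30     # electricity price, $/kWh (assumed)
    cloud_rate = 0.80    # $/hour for a comparable cloud GPU (assumed)

    saving_per_hour = cloud_rate - power_kw * kwh_price
    hours = (gpu_price + rig_extra) / saving_per_hour
    print(f"break-even after ~{hours:,.0f} GPU-hours")  # ~3,759 hours

At heavy utilisation, that break-even arrives well within the hardware's useful life; at occasional use, it may never arrive.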

