
Many things that initially look exponential turn out to be sigmoidal.
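A quick numerical sketch of why the two are hard to tell apart early on: in the early phase of a standard logistic (sigmoid) curve, the ratio between successive values is nearly constant, exactly as it would be for a true exponential, and only collapses toward 1 as the curve approaches its ceiling. (The specific parameters below are illustrative, not fitted to anything.)

```python
import math

def logistic(t, L=1.0, k=1.0, t0=10.0):
    """Standard logistic (sigmoid) curve with carrying capacity L,
    growth rate k, and midpoint t0."""
    return L / (1.0 + math.exp(-k * (t - t0)))

# Early on (t << t0), successive ratios are nearly constant (~e**k),
# indistinguishable from exponential growth; near and past the
# midpoint t0 they collapse toward 1 as the curve flattens out.
early = [logistic(t) for t in range(0, 4)]
late = [logistic(t) for t in range(14, 18)]

early_ratios = [b / a for a, b in zip(early, early[1:])]
late_ratios = [b / a for a, b in zip(late, late[1:])]
```

With only early-phase datapoints, nothing distinguishes this curve from an exponential; the difference only shows up once you are past the inflection point.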

I consider the start of this wave of AI to be approximately the 2017 Google transformer paper, and yet transformers didn't really have enough datapoints to look exponential until GPT-3 in 2020.

The following is purely speculation for fun and sparking light-hearted conversation:

My gut feeling is that this generation of models transitioned out of the part of the sigmoid that looks roughly exponential after the introduction of reasoning models.

My prediction is that transformer-based models will start to enter the phase that asymptotes to a flat line in 1-2 years.

I leave open the possibility that a different form of model will emerge whose progress is truly exponential, but I don't believe transformers are on that trajectory right now.
