
What's interesting is that you can already see the "AI race" dynamics in play -- OpenAI must be under immense market pressure to push o3 out to the public to reclaim "king of the hill" status.


I suppose they're under some pressure to release o3-mini, since R1 is roughly a peer for it, but R1 itself is still quite rough. The o1 series has seen significantly more QA time to smooth out the rough edges and idiosyncrasies, the kind of polish a "production" model should be optimized for, versus just being a top scorer on benchmarks.

We'll likely only see o3 once there is a truly polished peer for it. It's a race, and companies are keeping their best models close to the chest, since those are used internally to train smaller models.

e.g., Claude 3.5 Opus has been around for quite a while, but it's unreleased. Instead, it was just used to refine Claude Sonnet 3.5 into Claude Sonnet 3.6 ("3.6" for lack of a better name, since officially it's still called 3.5).

We also might see a new GPT-4o refresh trained up using o3 as the teacher, via DeepSeek's distillation technique and other tricks.
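For anyone unfamiliar, "distillation" here just means training a small model to imitate a big one. Worth noting that DeepSeek's R1 report distills by fine-tuning smaller models on R1-generated outputs, not by logit matching; the sketch below shows the classic soft-label variant (Hinton-style) purely to illustrate the general idea, and every name, shape, and hyperparameter in it is made up:

    # Minimal sketch of classic soft-label distillation (Hinton et al., 2015).
    # Not DeepSeek's exact recipe; all tensors here are toy placeholders.
    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels,
                          temperature=2.0, alpha=0.5):
        # Soft loss: push the student's softened distribution toward
        # the (frozen) teacher's softened distribution.
        soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
        soft_preds = F.log_softmax(student_logits / temperature, dim=-1)
        # KL divergence, scaled by T^2 to keep gradients comparable
        # across temperatures.
        soft_loss = F.kl_div(soft_preds, soft_targets,
                             reduction="batchmean") * temperature ** 2
        # Hard loss: ordinary cross-entropy against ground-truth labels.
        hard_loss = F.cross_entropy(student_logits, labels)
        return alpha * soft_loss + (1 - alpha) * hard_loss

    # Toy usage: batch of 4 examples over a 10-way output space.
    student_logits = torch.randn(4, 10, requires_grad=True)
    teacher_logits = torch.randn(4, 10)   # precomputed, no grad needed
    labels = torch.randint(0, 10, (4,))
    loss = distillation_loss(student_logits, teacher_logits, labels)
    loss.backward()

The appeal is obvious: the expensive frontier model only has to run inference (or, in the R1-style variant, generate training data once), while the small student absorbs most of its capability at a fraction of the serving cost.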

There are a lot of new directions for OpenAI to go in now, but unfortunately, we likely won't see them until their API dominance comes under threat.


That would also make sense if the SOTA models are too slow and expensive to be popular with a general audience.


Yeah, but they can use DeepSeek's new algorithm too.



