
> Gemini 3.0 is one of the most anticipated releases in AI at the moment because of the expected advances in coding performance.

Based on what I'm hearing from friends who work at Google and are using it for coding, we're all going to be very disappointed.

Edit: It sounds like they don't actually have Gemini 3 access, which would explain why they aren't happy with it.



Gemini 3.0 isn't broadly available inside Google. There are "Gemini for Google" fine-tuned versions of 2.5 Pro and 2.5 Flash, but there's been no broad availability of any 3.0 models yet.

Source: I work at Google (on payments, not any AI teams). Opinions mine not Google's.


Hate to spoil the excitement, but we at Google do not have Gemini 3 available to us for vibe coding.


Which should surprise no one. LLMs are reaching diminishing returns, unless we find a way to build GPUs more cheaply.


For coding this is absolutely positively incorrect.

Going from GPT-4 to GPT-5 Codex has been transformational. It has gone from smarter autocomplete to writing entire applications for me.


And why would cheaper GPUs counteract the diminishing returns?



