
As someone who regularly abuses Gemini with a 90% full context, the model's performance does degrade for sure, but I wouldn't call it massive.

I can't offer any evidence since I haven't run such tests, but it's like coding normally vs. coding after a beer or two.

For a massive effect, fill it to 95% and we're talking vodka shots. At 99%? A zombie who can code. But perhaps that's not fair when you have a 1M-token context size.




