As someone who regularly abuses Gemini with a 90% full context, the model's performance does degrade for sure, but I wouldn't call it massive.
I can't show any evidence as I don't have such tests, but it's like coding normally vs coding after a beer or two.
For the massive effect, fill it to 95% and we're talking vodka shots. 99%? A zombie who can code. But perhaps that's not fair when you have a 1M token context size.