brap's comments | Hacker News

Koko, that gorilla’s alright.

Koko's communication skills turned out to be a scam.


I like this, but I hate how everything has to be tied to AI now to get attention. “I wanted to vibe code...” Who cares? It’s a neat tool; do we have to force AI into it?

It’s the tool’s use case, which provides valuable background information about its technical choices.

Normally we get a few "but why would you make this?" comments. Maybe let's not discourage people who actually give us the answer upfront.

>AGI seems as far away as it’s always been

This blurb is the axiom on which the author builds their entire theory. In my opinion it is inaccurate, to say the least. And I say this as someone who is still underwhelmed by current AI for coding.


Another commenter said something that resonated with me: it feels too real and loses the magic.

Watch cartoons if you don't want 'real'. Those made by Disney are said to be 'magic'.

Sorry for being snarky. It’s just that I have great difficulty enjoying 24 fps pan shots and action scenes; it’s like watching a slide show to me. I’m rather annoyed that the tech hasn’t made any progress in this regard, because viewers and makers want to cling to the magic/dream-like experiences they had in their formative years.


Gemini is my favorite, but it does seem to be prone to “breaking” the flow of the conversation.

Sharing “system stuff” in its responses, responding to “system stuff”, sharing thoughts as responses and responses as thoughts, ignoring or forgetting things that were just said (like they’re suddenly invisible), bizarre formatting, switching languages for no reason, saying it will do something (like calling a tool) instead of doing it, getting into odd loops, etc.

I’m guessing it all has something to do with the textual representation of chat state, and that the model isn’t properly tuned to follow it. So it kind of breaks the mould, but not in a good way, and there’s nothing downstream trying to correct it. I find myself having to regenerate responses pretty often just because Gemini didn’t want to play assistant anymore.
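
For illustration, here’s roughly what I mean by “textual representation of chat state”; the delimiter tokens below are invented for the sketch, Gemini’s actual serialization isn’t public:

    # Illustrative sketch only: the delimiter tokens are made up, not Gemini's real format.
    from typing import Dict, List

    def render_chat(messages: List[Dict[str, str]]) -> str:
        """Flatten a chat history into the single text stream the model sees."""
        parts = [f"<|{m['role']}|>\n{m['content']}\n<|end|>" for m in messages]
        parts.append("<|assistant|>\n")  # cue the model to continue as the assistant
        return "\n".join(parts)

    history = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the last message."},
    ]
    print(render_chat(history))

Everything the model “knows” about who said what lives in those delimiters, so if it isn’t tuned hard enough to reproduce them faithfully, you get exactly the leaks above: role tags in the output, thoughts in the response channel, and so on.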

It seems like the flash models don’t suffer from this as much, but the pro models definitely do. The smarter the model, the more it happens.

I call it “thinking itself to death”.

It’s gotten to the point where I often prefer fast and dumb models that give me something very quickly, and I’ll just run them a few times to filter out bad answers, instead of using the slow and smart models that will often spend 10 minutes only to get stuck beyond the fourth wall.
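
“Run it a few times and filter” is just best-of-n sampling. A minimal sketch, where ask_fast_model is a hypothetical stand-in for whatever cheap model you’re calling:

    # Sketch of best-of-n with a fast model; ask_fast_model is hypothetical.
    from collections import Counter

    def ask_fast_model(prompt: str) -> str:
        raise NotImplementedError("call your provider's cheap, fast model here")

    def best_of_n(prompt: str, n: int = 5) -> str:
        answers = [ask_fast_model(prompt).strip() for _ in range(n)]
        # Majority vote works for short, factual answers; free-form text
        # would need normalization or a scoring pass instead.
        return Counter(answers).most_common(1)[0][0]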


> ignoring or forgetting things that were just said (like it’s suddenly invisible)

This sounds like an artifact of the Gemini consumer app, and some of the other issues may be too (the model providers are doing themselves a disservice by giving the app and the model the same name).


It basically boils down to: capitalism works.


I think that’s exactly why they’re not including timestamps. If timestamps are shown in the UI, users might expect some form of “time awareness” that the model doesn’t quite have. Yes, you can add timestamps to the context, but I imagine that might degrade other metrics.
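
(By “add it to the context” I mean something like stamping each turn yourself; a toy sketch, nothing vendor-specific:)

    # Toy sketch: prepend a wall-clock timestamp to each user turn.
    from datetime import datetime, timezone

    def stamp(message: str) -> str:
        now = datetime.now(timezone.utc).isoformat(timespec="seconds")
        return f"[sent {now}] {message}"

    print(stamp("What did I ask you yesterday?"))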

Another possible reason is that they want to discourage users from using the product in a certain way (one big conversation), because that’s bad for context management.


I generally agree that they are garbage at producing code beyond things that are trivial. And the fact that non-techies use them as “fact checkers” is also disturbing, because they are constantly wrong.

But I have found them to be very helpful for certain things. For example, I can dump a huge log file and a chunk of the codebase and ask it to trace the root cause; 80% of the time it manages to find it. That would otherwise have taken me many hours.
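
The workflow is nothing fancy, just one big prompt. A rough sketch with the OpenAI Python client; the model name and file paths are placeholders, and any long-context model should do:

    # Rough sketch of the log-triage workflow; model name and paths are placeholders.
    from pathlib import Path
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    log = Path("service.log").read_text()
    code = Path("src/handler.py").read_text()

    resp = client.chat.completions.create(
        model="gpt-4o",  # any model with a context window big enough for both files
        messages=[{
            "role": "user",
            "content": "Here is a production log and the code that produced it.\n"
                       f"--- LOG ---\n{log}\n--- CODE ---\n{code}\n"
                       "Trace the most likely root cause of the errors.",
        }],
    )
    print(resp.choices[0].message.content)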


What was the piece of furniture?

