Hacker News | new | past | comments | ask | show | jobs | submit | tin7in's comments

I use it all the time with coding agents, especially if I'm running multiple terminals. It's way faster to talk than type. The only problem is that it looks awkward if there are others around.

Interesting. I can think and type faster than I can talk. I'm not much of a talker.

Same. Whenever I try to dictate something I always umm and ahh and go back a bunch of times, so it's faster to just type. I guess it's a matter of practice; I'm fine when I'm talking to other people, it's only dictation I have trouble with.

It has something called "Custom Words", which might be what you are describing. I haven't properly tested that feature yet.

It's an alternative to Wisprflow, Superwhisper, and so on. It works really well compared to the commercial competitors, but with a local model.

I'm really surprised how much pushback and denial there is still from a lot of engineers.

This is truly impressive and not only hype.

Things have been impressive at least since April 2025.


Is this satire? This comment could not be a better example of what the linked article is talking about.

Not satire. The author is in denial of what's happening.

What is happening?

Not much. They can still parrot their training data. AGI is still 5-20 years away.

I bought the Refactoring UI book years ago and it taught me so much about simplicity and good design!

Peter's (the author's) latest project reuses a lot of these small libraries as tools in a much larger project. Long-term memory is part of that too.

It's an assistant building itself live on Discord. It's really fun to watch.

https://github.com/clawdbot/clawdbot/


Peter (author) talks more about LLMs as slot machines here: https://steipete.me/posts/just-one-more-prompt

Yeah sounds unhealthy, at least self-aware?

Speaking from personal experience and from talking to other users: the vendors' own agents/harnesses are just better, and they are customized for their own models.


What kinds of tasks do you find this to be true for? For a while I was using Claude Code inside the Cursor terminal, but I found it to be basically the same as just using the same Claude model in Cursor directly.

Presumably the harness can't be doing THAT much differently, right? Or rather, which responsibilities of the harness could differentiate one harness from another?


This becomes clearer for me with harder problems or long-running tasks and sessions, especially with larger context.

Examples that come to mind are how the context is filled up and how compaction works. Both Codex and Claude Code ship improvements here that are specific to their own models, and I'm not sure how that is reflected in tools like Cursor.
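To make "compaction" concrete, here is a minimal hypothetical sketch of what a harness might do: once the transcript exceeds a token budget, summarize the oldest turns and keep the most recent ones verbatim. This is not how Codex or Claude Code actually implement it; all names and the token counter are invented for illustration.

```python
# Hypothetical sketch of context compaction in an agent harness.
# Not the real Codex/Claude Code logic; names and heuristics are invented.

def count_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer.
    return len(text.split())

def compact(messages: list[str], budget: int, keep_recent: int = 4) -> list[str]:
    """If the transcript exceeds `budget` tokens, replace the oldest
    messages with a single summary, keeping `keep_recent` verbatim."""
    total = sum(count_tokens(m) for m in messages)
    if total <= budget or len(messages) <= keep_recent:
        return messages
    old, recent = messages[:-keep_recent], messages[-keep_recent:]
    # A real harness would make an LLM call here; we just truncate.
    summary = "Summary of earlier turns: " + " | ".join(m[:40] for m in old)
    return [summary] + recent

history = [f"turn {i}: " + "word " * 50 for i in range(10)]
compacted = compact(history, budget=200)
print(len(compacted))  # 5: one summary plus the 4 most recent turns
```

The interesting design choices (what to summarize, when to trigger, how to preserve tool results) are exactly where a vendor can tune the harness to its own model's strengths.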


DigitalOcean, Render, and a few other vendors are also down for us.

At this point picking vendors that don't use Cloudflare in any way becomes the right thing to do.


Claude was also down (which brought me here)


I tried it and ran out of credits during the first prompt, with no visible way to upgrade or purchase more.

