It's already here - it's called GEO, and there are Silicon Valley startups already pumping out crap to feed next-gen models, to ensure your product gets baked into the weights.
The next generation of models is going to need very strict sanitising of input articles, because the sheer volume of GPT-generated SEO spam is going to be, or already is, quite staggering. Full model collapse might not be what happens, but a dilution of training-data quality certainly will.
This looks like a workflow problem more than a model problem.
When inputs aren’t controlled, scale amplifies noise faster than understanding.
Tools improve, but the decision boundaries stay the bottleneck.
You say "local-first", but the Voyage API is the default for embeddings (I had to go to the website and dig to find that you can in fact use local embedding models). Please fix this.
It would be convenient if it could load local SLMs itself; otherwise I have to manually start the LLM server before I can use it, and that's not something I leave running all the time.
AI and Claude Code are incredible tools. But use cases like "organize my desktop" are horrible misapplications that are insecure, inefficient, and a privacy nightmare. It's the smart refrigerator of this generation of tech.
I worry that the average consumer is none the wiser, but I hope a company that calls itself Anthropic lives up to the name. Being transparent about what the tool is doing and what permissions it has, and educating users on the dangers, are the least it can do.
Take the example of cleaning up your Mac desktop: a) macOS already auto-folds things into Smart Stacks; b) having the AI write a simple script that emulates an app like Hazel is a far better approach than letting it rummage through your files directly.
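The Hazel-style script approach above can be sketched in a few lines. This is a minimal illustration, not Hazel's actual rule engine: the extension-to-folder rules and folder names here are made up for the example.

```python
"""Minimal Hazel-style desktop organizer (illustrative sketch)."""
from pathlib import Path
import shutil

# Hypothetical rules: map file extensions to destination subfolders.
RULES = {
    ".png": "Images", ".jpg": "Images",
    ".pdf": "Documents", ".txt": "Documents",
    ".zip": "Archives",
}

def organize(folder: Path) -> dict:
    """Move files in `folder` into subfolders per RULES.

    Files with extensions not listed in RULES are left untouched.
    Returns a {filename: destination subfolder} map of what moved.
    """
    moved = {}
    for item in folder.iterdir():
        dest_name = RULES.get(item.suffix.lower())
        if item.is_file() and dest_name:
            dest = folder / dest_name
            dest.mkdir(exist_ok=True)  # create target subfolder on demand
            shutil.move(str(item), str(dest / item.name))
            moved[item.name] = dest_name
    return moved
```

You would run something like `organize(Path.home() / "Desktop")`; the point is that the script is auditable and needs no ongoing AI access to your files.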