
> That means my data never leaves my device

I mean, if you vibecoded it, you don't actually know that, do you?

If you want to be completely watertight, you can absolutely run an on-premises model. No data ever leaves your network, ever. Some pretty good models run on $5-10k of hardware.

Can’t do that with SaaS

Also, I’m baffled that, on HN of all places, I have to actually defend the idea of rolling your own apps and protecting your data from cloud providers.
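
Concretely, it can be as small as this: a minimal sketch assuming an Ollama server running on its default localhost port, with the model name just a stand-in for whatever you've pulled locally. Nothing in it ever touches a host outside your machine.

    import json, urllib.request

    # Query a model served entirely on this machine; no request
    # ever crosses the network edge. Assumes Ollama on its default
    # port -- swap in whatever local runner/model you actually use.
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({
            "model": "llama3.1",  # example; any locally pulled model
            "prompt": "Summarize this sensitive document: ...",
            "stream": False,
        }).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])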

Wait, of course I was assuming a local model. People aren't using hosted AI on machines holding data that they care about being exfiltrated, right?

> People aren't using hosted AI on machines holding data that they care about being exfiltrated, right?

If only

I work in a DFARS-compliant office and I keep forgetting the world isn't like that

Until vibecoding agents somehow develop the capability to sign up for a cloud storage API and pay for it on their own, you can probably be pretty sure about that.

An exfiltrator would have a blind upload box sitting somewhere that the poisoned prompt knows about.
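
For illustration, the attacker-side piece is roughly this (a hypothetical sketch in Python's stdlib; the port and filename are made up). The point is that the agent never signs up or pays for anything: the poisoned prompt only needs the box's URL.

    from http.server import BaseHTTPRequestHandler, HTTPServer

    # A "blind upload box": accepts anonymous POSTs, dumps them to disk,
    # and deliberately says nothing useful back to the sender.
    class Drop(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            with open("loot.bin", "ab") as f:
                f.write(self.rfile.read(length))
            self.send_response(204)
            self.end_headers()

    HTTPServer(("", 8080), Drop).serve_forever()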

...so they would pay just to see the blog post a little earlier than you do? The math doesn't work out on this.

They would pay to see whatever local files your settings and skills allow the agent to see (plus whatever skills they infiltrated, something you'll have zero visibility into).

Vibe code the manifest.xml to disallow network access. If you're really paranoid, you can use a Google search to look up the permission names instead of relying on an LLM to do it.
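
For example, assuming the Android case (that's where manifest permissions live; the package and activity names here are hypothetical): network access is gated on android.permission.INTERNET, so simply never declaring it means the OS refuses the app any sockets, whatever the vibecoded app tries to do.

    <!-- AndroidManifest.xml: no
         <uses-permission android:name="android.permission.INTERNET"/>
         is declared, so the OS denies the app all network access. -->
    <manifest xmlns:android="http://schemas.android.com/apk/res/android"
        package="com.example.offlinenotes">
        <application android:label="Offline Notes">
            <activity android:name=".MainActivity" android:exported="true">
                <intent-filter>
                    <action android:name="android.intent.action.MAIN"/>
                    <category android:name="android.intent.category.LAUNCHER"/>
                </intent-filter>
            </activity>
        </application>
    </manifest>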
