Hacker Newsnew | past | comments | ask | show | jobs | submit | hgaddipa001's commentslogin

Why wouldn't we be indexing at scale?


the demo video is literally just single thread tool calling to external sources. Indexing data is also a really complex problem more than just adding some elastic search to gmail which also you will find does not scale easily, if that's even what you're doing.


We do a lot of processing on our backend to prevent against prompt injection, but there definitely still is some risk. We can do better on as is always the case.

Need to read up on how CaMel does it. Do you have any good links?


That’s a pretty scary answer, to be honest.

Regardless, here’s the CaMeL paper. Defeating Prompt Injections by Design (2025): https://arxiv.org/abs/2503.18813

Here’s a paper offering a survey of different mitigation techniques, including CaMeL. Design Patterns for Securing LLM Agents against Prompt Injections (2025): https://arxiv.org/abs/2506.08837

And here’s a high-level overview of the state of prompt injection from 'simonw (who coined the term), which includes links to summaries of both papers above: https://simonwillison.net/2025/Jun/16/the-lethal-trifecta/


Thanks!

Don't worry have worked with a few friends experienced in prompt injection to help with the platform.

But will read these too :)


Re: CaMeL, Jesus, why not build a UI with explicit access controls at that point?


because you can't enjoy your pina coladas on the beach if your phone keeps buzzing every 10 seconds.


Not sure we haven't ever done volume that size for one person, but in theory should be able too!

We use indexing similar to glean (but a bit less elegant without the ACLs)

Can talk more about your use case if you'd like to.

Send me a text at 262-271-5339


Why do you think privacy?

Security I understand, but if you consent to giving it access would it not be fine for privacy.


You give it access, it grabs your ssh keys and exfiltrate to some third party server. That is not the access the user gave to your platform but it is what it would be capable of doing.


Ohh we don't give it computer use access or anything like that. We inject tokens post tool call, so to protect users from the agent doing anything malicious.


I'm thinking about what this post explains more clearly than I can:

https://simonwillison.net/2025/Jun/16/the-lethal-trifecta

Seems to me that these kind of systems, by design, tick all three boxes. I've had many discussions with people that let agent systems read and act on their incoming email for instance, and I think it's utter insanity from a security perspective.


Have you tried out Slashy?

What makes you say that


Not really and this is totally not related to Slashy, it just look like the same as the other 20 Slashys launched last month. Launch HNs used to be exciting.

Maybe HN/ycombinator is just not interesting anymore. I saw some of you commenting that this might be similar to the famous Dropbox situation. That could not be more delusional and representative of what HN became, a meme of itself.


The strategy is throw a little bit of money at everything, hope one of them will become a unicorn, everyone gets richer.

Rinse and repeat.

You're right though ... these YC batches are not what they used to be. AI is hot right now, so it seems YC is throwing money at anything that seems like it can at least actually do something (not that it is necessarily good). If that product doesn't get hot, who cares? Plenty more money to go around on the next batch, because one of them probably will!


Hmm that's fair, we're definitely not the most exciting launch out there compared to others in our batch.

I'd like to think the fact we do what we promise is exciting, but without trying the product hard to convey that well :)


Oh I used Screen Studio :)

Thanks for the compliment.

Not worried about browser agents, as we actually have pretty deep integrations (we include semantic search as well as user action graphs).

Naturally apis will always be better than browsers as apis are computer languages and browsers are human language.

The sale of Browser Company today too I think shows there's not that much of a ceiling for agentic browsers.


We did a lot of internal testing but no official benchmark.

We find that the less the agent knows, the more it hallucinates


Smart


Or run your legal questions through a frontier model and then have a lawyer verify the answers. You can save a lot of money and time.

Yes, all LLM caveats apply. Due your diligence. But they are quite good at this now.


Have you actually tried this approach? I’m curious as to the result, especially when you took it to your lawyer. Not a contract review but a business practice risk evaluation.


Some context from coverage of GPT 5:

https://legaltechnology.com/2025/08/08/openai-launches-gpt-5...

https://www.artificiallawyer.com/2025/08/08/gpt-5-tops-harve...

Remember when "asking for a friend" was a thing?

Today's expression is "I asked a friend". You can try that when talking to your lawyer about your latest ChatGPT — they might still believe you.


Hmm this is a good idea too


We use Claude/OpenAI right now with Groq for tool routing!

I'd say maybe to get comfortable try out the non email features first, but we don't have access to any of your data.


How do you not have access to the data if I give you access to my email?


The agent does!

We don't, and agent pulls in data only when executing queries


Does the agent run on hardware you control?


Runs on AWS for now!


So you do have access to all the data. It's not really a great look if you're lying about what you have access to, and this is a technical audience, it's not like we don't know how agents work.


Sad state of current launch HNs where OP don't even know they are talking to hackers, not people that get easily impressed.


So you have access to the users Gmail, not "the agent".


Hmm ig yeah I can be more granular.

Yeah we store our user credentials on our side and manage them. Along with refreshing tokens and so forth


This is horrifying. Everyone should be horrified.


I think they mean OAuth credentials (all these APIs use OAuth unless you're doing something terribly wrong).


Yep we're using Oauth, so it's easy for a user to disconnect.


Or an alt/throwaway email...


ooh good idea!


Here's a fun launch video we made as well :)

https://x.com/raidingAI/status/1955890345927172359


Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: