> Monitored their network traffic for 60-second sessions
How can he monitor what's going on between a startup's backend and OpenAI's server?
> The truth is just an F12 away
That's just not how this works. You can see the network traffic between your browser and some service. In 12 cases that was OpenAI or similar. Fine. But that's not 73%. What about the rest? He literally has a diagram claiming that the startups contact an LLM service behind the scenes. That's what's not described: how does he measure that?
Are you not bothered that the only sign the author even exists is this one article and the previous one? Together with the claim to be a startup founder? Anybody can claim that. It doesn't automatically provide credibility.
I believe he's saying that a large number of the startups he tested did not have their own backend to mediate; they were literally making direct front-end calls to OpenAI. And if this sounds insane, remember that OpenAI actually supports this: https://platform.openai.com/docs/api-reference/realtime-sess...
Presumably OpenAI didn't add that for fun, either, so there must be non-zero demand for it.
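For the curious, a minimal sketch of that ephemeral-key flow, going by the linked docs; the endpoint path, model string, and response field names are from memory and may have drifted, so treat it as illustrative rather than authoritative:

```ts
// Server side: exchange the real API key for a short-lived client secret.
async function mintEphemeralKey(): Promise<string> {
  const resp = await fetch("https://api.openai.com/v1/realtime/sessions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ model: "gpt-4o-realtime-preview" }),
  });
  const session = await resp.json();
  return session.client_secret.value; // short-lived "ek_..." token handed to the browser
}
```

The browser then uses that ephemeral key directly against api.openai.com, which is exactly the kind of traffic that shows up in the Network tab with no startup backend in the middle.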
It's a fair point that OpenAI officially supports ephemeral keys.
But I still believe the vast majority of startups do wrapping in their own backend. Yes, I read what he's doing, and he's still only able to analyze client-side traffic, which means his overall claims of "73%" are complete and total bullshit. It is simply impossible to conclude what he's concluding without having access to backend network traces.
EDIT: This especially doesn't make sense because the specific sequence diagram in this article shows the wrapping happening in "Startup Backend", and again, it would be impossible for him to monitor that network traffic. This entire article is made-up LLM slop.
> How can he monitor what's going on between a startup's backend and OpenAI's server?
He is not claiming to be doing that. He says what and how he's capturing multiple times: he's capturing what happens in browser sessions. Reflect on what else you may need to re-evaluate or discard if you misunderstood this.
> That's just not how this works. You can see the network traffic between your browser and some service.
Yes, the author is well aware of that, as are presumably most readers. However, if, for example, your client makes POST requests to the startup's backend like startup.com/api/make-request-to-chatgpt and the payload is {systemPrompt: "...", userPrompt: "..."}, not much guessing as to what is going on is necessary.
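To make that concrete, here's roughly what such a request looks like from the DevTools side; the endpoint and field names are the hypothetical ones above, not a real product:

```ts
// Hypothetical wrapper frontend calling its own backend. The OpenAI call
// itself happens server-side, but the payload shape alone gives the game
// away in the Network tab.
await fetch("https://startup.com/api/make-request-to-chatgpt", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    systemPrompt: "...", // the product's prompt, sent from the client
    userPrompt: "...",   // whatever the user typed
  }),
});
```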
> Are you not bothered that the only sign the author even exists is this one article and the previous one?
Moving goalposts. He may or may not be full of shit. Guess we'll see if/when we see the receipts he promised to put on GitHub.
What actually bothers me is the lack of general reading comprehension being displayed in this thread.
> Together with the claim to be a startup founder? Anybody can claim that.
What? Anybody can be a startup founder today. Crazy claim. Also... what?
> It doesn't automatically provide credibility.
Almost nobody in this space has credibility. That could turn out to be Sam Altman's alias and I'd probably trust it even less.
In any case evaluating whether or not a text is credible should preferably happen after one has understood what was written.