First, as an organization, do all this cybersecurity theatre, and then create an MCP/LLM wormhole that bypasses it all.
All because non-technical folks wave their hands about AI without understanding the most fundamental reality: LLM software is so fundamentally different from all the software that came before it that it becomes an unavoidable black hole.
I'm also a little pleased I used two space analogies, something I can't expect LLMs to do because they have to go large with their language or go home.
My first reaction to the announcement of MCP was that I must be missing something. Surely giving an LLM unlimited access to protected data is going to introduce security holes?
Assuming a security program that clears the 101-level quality bar, there are a number of reasons why this can still happen at companies.
Summarized: security is about risk acceptance, not removal. There's massive business pressure to risk-accept AI. Risk acceptance usually means some sort of supplemental control that isn't ideal but manages the exposure. With AI tools, however, there are very few of these: the vendors are small; the integrations aren't really service accounts, though IMO treating them as service accounts is probably the best way to monitor them; integrations are trivially easy to set up; and while eng companies hate taking any kind of admin away from devs, if devs keep it, random AI on endpoints becomes very likely.
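To make the "monitor them like service accounts" idea concrete, here is a minimal sketch. Everything in it is hypothetical and illustrative (the scope names, `audited_call`, and the log format are not from any vendor's SDK); the point is just that an LLM integration can be forced through the same scoped-credential-plus-audit-trail path as any other service account.

```python
import datetime
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("llm-service-account")

# Deliberately no write scopes granted to the integration.
ALLOWED_SCOPES = {"crm:read", "wiki:read"}

def audited_call(scope: str, resource: str, fetch):
    """Gate every LLM-initiated data access behind a scope check and an
    audit log line, the same way any other service account is handled."""
    if scope not in ALLOWED_SCOPES:
        log.warning("DENY scope=%s resource=%s", scope, resource)
        raise PermissionError(f"scope {scope!r} not granted to this integration")
    log.info("ALLOW scope=%s resource=%s at=%s", scope, resource,
             datetime.datetime.now(datetime.timezone.utc).isoformat())
    return fetch(resource)

# Example: an allowed read goes through and is logged; anything else fails closed.
print(audited_call("crm:read", "accounts/42", lambda r: {"id": r}))
```

Even this much gives a sec team the two things they usually lack with AI tools: a hard boundary on what the integration can touch, and an audit trail when it touches it.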
I'm ignoring a lot of nuance, but solid sec programs blown open by LLM vendors are going to be common, let alone bad sec programs. Many sec teams, I think, are just waiting for the other shoe to drop for some evidentiary support, while managing heavy pressure to go full-bore on AI integration until then.
And then folks can gasp and faint like goats and pretend they didn’t know.
It reminds me of the time I met an IT manager who didn't have an IT background. Outsourced hilarity ensued, courtesy of salespeople who were also non-technical.
> I'm also a little pleased I used two space analogies, something I can't expect LLMs to do because they have to go large with their language or go home.
Speaking of LLMs, did you notice the comment you were responding to was written by an account posting repetitive LLM-generated comments? :)
Nitpick, but wormholes and black holes aren't limited to space! (unless you go with the Rick & Morty definition where "there's literally everything in space")
Maybe this is the key takeaway of GenAI: that some access to data, even partially hallucinated data, is better than the hoops that security theatre puts in place to stop the average Joe from doing their job.
This might just be a golden age for getting access to the data you need to get the job done.
Next, security will catch up and there'll be a good balance between access and control.
Then, as always, security will go too far and nobody will be able to get anything done.
"GenAI" is nothing new. "AI" is just software. It's not intelligent, or alive, or sentient, or aware. People can scifi sentimentalize it if they want.
It might simulate parts of things, hopefully more reliably.
It's however a different category of software which requires management that doesn't exist yet how it should.
Cybersecurity theatre, for me, is using a web browser to secure and administer what was previously done natively, creating new security holes through the web interface.
Then, bypassing all that to allow unmanaged MCP access to internal data moats creates its own universe of security vulnerabilities, full stop. In a secured and contained environment, using an MCP server to access data and unlock insight is one thing.
That doesn't mean don't use MCPs. It means the AI won't figure out what the user doesn't know about securing MCPs, and that is a far more massive vulnerability, because users of AI have delegated their thinking to a statistics formula ("GenAI"). It is so impressive on the surface that no one is checking the work to make sure it stays that way. Managing quality, however, is improving.
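For illustration of what "securing MCPs" means at the most basic level, here is a hedged sketch of the fail-closed allowlist pattern. The tool names, `dispatch_tool`, and the handler shape are hypothetical, not the MCP SDK's actual API; the pattern is simply to expose only explicitly approved, read-only tools to the model.

```python
# Hypothetical sketch: expose only an explicit, read-only set of tools to the
# model instead of everything the server can do. Names are illustrative.
from typing import Any, Callable, Dict

READ_ONLY_TOOLS = {"search_docs", "get_ticket"}  # no write/delete tools exposed

def dispatch_tool(name: str, args: Dict[str, Any],
                  handlers: Dict[str, Callable[..., Any]]) -> Any:
    """Refuse any tool call the operator has not explicitly allowlisted,
    so a prompt-injected request for an unlisted tool fails closed."""
    if name not in READ_ONLY_TOOLS:
        raise PermissionError(f"tool {name!r} is not exposed to the model")
    return handlers[name](**args)

# Example: an allowlisted read succeeds; anything unlisted raises.
handlers = {"search_docs": lambda query: [f"doc matching {query}"]}
print(dispatch_tool("search_docs", {"query": "vacation policy"}, handlers))
```

This is exactly the kind of check a user who has delegated their thinking to the model will never ask for, which is the point of the comment above.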
My comment is calling out effectively letting external paths have unadulterated access to your private and corporate data.
Data is the new moat. Not UI/UX/Software.
A wormhole that exposes your data all too often makes it available for someone else to fold into their own data moat, and for it to be misinterpreted along the way.