General Analysis has released an open source MCP guard to secure your MCP clients against prompt injection attacks like these. https://generalanalysis.com/blog/mcpguard
General Analysis has released an open source MCP guard to secure your MCP clients against prompt injection attacks like these. https://generalanalysis.com/blog/mcpguard
I broadly agree that "MCP-level" patches alone won't eliminate prompt-injection risk. Latest research also shows we can make real progress by enforcing security above the MCP layer, exactly as you suggest [1]. DeepMind's CaMeL architecture is a good reference model: it surrounds the LLM with a capability-based "sandbox" that (1) tracks the provenance of every value, and (2) blocks any tool call whose arguments originate from untrusted data, unless an explicit policy grants permission.