Exactly. I have said several times that the largest and most lucrative market for AI and agents in general is liability-laundering.
It's just that you can't advertise that, or you ruin the service.
And it already does work. See the sweet, sweet deal Anthropic got recently (and if you think $1.5B isn't a good deal, look at the range of compensation they could have been subject to had they gone to court and lost).
Remember the story about Replit's LLM deleting a production database? All the stories were "AI goes rogue," "AI deletes database," etc.
If Amazon RDS had just wiped a production DB out of nowhere, with no reason, the story wouldn't be "Rogue hosted database service deletes DB," it would be "AWS randomly deletes production DB" (and AWS would take a serious reputational hit because of it).