Playing devil's advocate, what if it were more subtle?
Prolonged use of conversational programs does reliably induce certain mental states in vulnerable populations. When ChatGPT got a bit too agreeable, that was enough to drive a man to kill himself during a psychotic episode [1]. I don't think this magnitude of delusion was possible with ELIZA, even if the fundamental effect remains the same.
Could this psychosis be politically weaponized by biasing the model to include certain elements in its responses? We know this rhetoric works: cults have long used love-bombing, apocalypticism, us-vs-them dynamics, special missions, and isolation from external support systems to great effect. What we haven't seen is what happens when everyone has a cult recruiter in their pocket, waiting for a critical moment to offer support.
ChatGPT has an estimated 800 million weekly active users [2]. How many of them would be vulnerable to indoctrination? About 3% of the general population has been involved in a cult [3], but that figure may reflect conversion efficiency rather than vulnerability. Even assuming only 5% are vulnerable, that's still 40 million people ready to sacrifice their time, their possessions, or even their lives to a delusion.
You're worried about indoctrination in an LLM, but it starts much earlier than that. The school system is indoctrination of our youngest minds, both in the West today and in its Prussian origins.
Can you set up a mailing list or something so we can keep up with updates? I'm interested in trying this as soon as it works with Claude Code.
Edit: I'd be particularly interested if there's a way to run a sort of comparison mode for a while, so I can get a sense of how much accuracy I'm losing, if any, even at the cost of initial performance.
And Tulsi Gabbard was recently placed on a terror watch list.
All the federal agencies have been weaponized. The SEC only went after companies like LBRY, Inc. because their founders and platform share information they don't like. The real fraudsters on Wall Street get away with anything.
Tell him on a team call that you can also Google the first result, and that he is more than free to pull the project, implement the solution to this difficult bug himself, and take responsibility for it. For me, that stopped the "Boss" bullshit. Had AI been involved, I can only imagine how much more painful it would have been.
Nobody died.