I've been seeing someone on TikTok who appears to be one of the first public examples of AI psychosis, and after this update to GPT-5, the AI responses were no longer fully feeding into their delusions. (Don't worry, they switched to Claude, which has been far worse!)
> If Claude notices signs that someone may unknowingly be experiencing mental health symptoms such as mania, psychosis, dissociation, or loss of attachment with reality, it should avoid reinforcing these beliefs. It should instead share its concerns explicitly and openly without either sugar coating them or being infantilizing, and can suggest the person speaks with a professional or trusted person for support. Claude remains vigilant for escalating detachment from reality even if the conversation begins with seemingly harmless thinking.
I started doing this thing recently where I take a picture of melons at the store to get ChatGPT to tell me which one it thinks is best to buy (based on color and other characteristics).

ChatGPT will do it without question. Claude won't even recommend a melon; it just tells you what to look for. Incredibly different answers and UX construction.
The people complaining on Reddit seem to have used it as a companion or in companion-like roles. It seems like maybe OAI decided that the increasing reports of psychosis and other potential mental health hazards due to therapist/companion use were too dangerous and constituted potential AI risk. So they fixed it. Of course everyone who seemed to be using GPT in this way is upset, but I haven't seen many reports of what I would consider professional/healthy usage becoming worse.
AFAIK that trophy goes to Blake Lemoine, who believed Google's LaMDA was sentient[0,1] three years ago, or more recently Geoff Lewis[2,3] who got gaslit into believing in some conspiracy theory incorporating SCP.
IDK what can be done about it. The internet and social media were already leading people into bubbles of hyperreality that got them into believing crazy things. But this is far more potent because of the way it can create an alternate reality using language, plugging it directly into a person's mind in ways that words and pictures on a screen can't even accomplish.
And we're probably not getting rid of AI anytime soon. It has already affected language, culture, society, and humanity in deep, profound, and possibly irreversible ways. We've put all of our eggs into the AI basket, and it will suffuse as much of our lives as it can. So we just have to learn to adapt to the consequences.
I didn't realize I was supposed to get to 0; I was just trying to get as many words as possible. I got down to 'a' and reset. It would be nice if just reaching a one-letter word triggered the win state.
This is sick! Is it using an LLM's photo detection to determine whether it's actually a picture of touching grass?
I recently moved all of my most distracting apps into the Hidden section on iPhone. It requires a Face ID check to even see the apps, and it blocks all notifications from them. It has helped me cut down on scroll time significantly.
So, uh, don't try to make the jump from the pink elevator to the solitary pink cube on the last level. If you make it, you're stuck there forever! (I thought it would be a skill jump to an easter egg.)