Hacker News | mossTechnician's comments

Altman estimated that approximately 1,500 people per week discuss suicide with ChatGPT before going on to kill themselves. The company acknowledged it had been tracking users’ “attachment issues” for over a year.

I didn't realize Altman was citing figures like this, but he's one of the few people who would know, and could shut down accounts with a hardcoded command if suicidal discussion is detected in any chat.

He floated the idea of possibly preventing these conversations[0], but as far as I can tell, no such thing was implemented.

[0]: https://www.theguardian.com/technology/2025/sep/11/chatgpt-m...


That’s misleading. Altman was simply doing a napkin calculation based on the scale at which ChatGPT operates and not estimating based on internal data: “There are 15,000 people a week that commit suicide,” Altman told the podcaster. “About 10% of the world are talking to ChatGPT. That’s like 1,500 people a week that are talking, assuming this is right, to ChatGPT and still committing suicide at the end of it. They probably talked about it. We probably didn’t save their lives. Maybe we could have said something better. Maybe we could have been more proactive. Maybe we could have provided a little bit better advice about ‘hey, you need to get this help, or you need to think about this problem differently, or it really is worth continuing to go on and we’ll help you find somebody that you can talk to’.”
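Spelled out, that napkin math is just one rough multiplication. A minimal sketch (both inputs are Altman's round figures from the podcast, not internal data):

    # Altman's back-of-the-envelope figures, not measured data
    weekly_suicides_worldwide = 15_000   # "15,000 people a week that commit suicide"
    chatgpt_share_of_world = 0.10        # "about 10% of the world are talking to ChatGPT"
    print(weekly_suicides_worldwide * chatgpt_share_of_world)  # 1500.0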

You could similarly say something like 10k+ people used Google or spoke to a friend this week and still killed themselves.

Many of those people may have never mentioned their depression or suicidal tendencies to ChatGPT at all.

I think Altman appropriately recognizes that at the scale at which they operate, there’s probably a lot more good they can do in this area, but I don’t think he thinks (nor should he think) that they are responsible for 1,500 deaths per week.


ChatGPT sort of fits the friend analogy: it's been marketed as an expert and as a kind of companion. If a real-life person with ChatGPT's level of authority and repute were caught encouraging minors to commit suicide and engage in other harmful activities, surely there would be some investigation into that person's behavior.

https://openai.com/index/strengthening-chatgpt-responses-in-...

"..our initial analysis estimates that around 0.15% of users active in a given week have conversations that include explicit indicators of potential suicidal planning or intent and 0.05% of messages contain explicit or implicit indicators of suicidal ideation or intent."

With roughly 700 million weekly active users, that's more like 1 million people discussing suicide with ChatGPT every week.
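As a sanity check on that arithmetic, here's a minimal sketch; the ~700 million weekly active users figure is OpenAI's public claim, and the 0.15% comes from the post above:

    # Rough estimate only; both inputs are OpenAI's own figures
    weekly_active_users = 700_000_000
    suicidal_planning_share = 0.0015     # 0.15% of weekly active users
    print(weekly_active_users * suicidal_planning_share)  # 1050000.0, i.e. ~1 million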

For reference, 12.8 million Americans are reported as thinking about suicide and 1.5 million are reported as attempting suicide in a year.


METR has some substantial AI industry ties, so I wonder if those clarifications (especially the one pointing at their own studies describing AI progress) are a way to mitigate concerns that industry would have with the apparent results of this study.

My first thought was the Protestant work ethic, a very American phenomenon.

OpenAI CEO Sam Altman once boasted that the company hadn’t "put a sexbot avatar in ChatGPT yet." Two months later, they did[0].

Interpreting the Mozilla CEO the same way may not be charitable, but it is certainly familiar.

[0]: https://futurism.com/future-society/sam-altman-adult-ai-reve...


We know for a fact that whenever Mozilla solicits feedback on AI additions, the responses lean heavily negative.

https://connect.mozilla.org/t5/discussions/building-ai-the-f...


Yeah, but there's a selection bias present in most feedback like this, isn't there? People are more motivated to submit feedback when something annoys them. This is speaking as someone who is also annoyed by AI features.


That's a slightly different question, but an important one: the presence of a group criticizing a feature doesn't mean the absence of a different group requesting it!

When Mozilla initially made the Connect forums, it was to solicit requests for new features. I can't stress enough how few people joined the forum to request more AI in their browser.


And a 15 second look at that page makes it extremely obvious that (as expected) all this feedback is coming from the 1% of extremely vocal terminally online losers who haven't left their house in the past 6 months and spend their free time consuming furry porn and "tooting" on Mastodon, and for whom hating AI is 75% of their personality. Not actual normal people.


Please don't fulminate or sneer like this on HN. The guidelines make it clear we're trying for something better here. https://news.ycombinator.com/newsguidelines.html


"Trust" is just community goodwill, and Mozilla has steadily been chipping away at that goodwill by pivoting to AI and ad businesses, and occasionally implying that it's the community that wants things like AI, and it's the community's fault for misunderstanding their poorly written license agreement.


Windows virtual machines are much slower than bare hardware, especially for things GP mentioned like games. I have also found that Linux lags behind in many areas that matter to me: functionality, performance (even compared to Windows 11), and general ease of use. For lots of people, Linux or Chrome OS is sufficient, and that's great, but it's not enough for everybody.

In response to your initial question, I believe everything must be criticized, especially things we like. Internal criticism, such as criticism of Windows, is just as important as external competitors, such as Linux.


> Linux lags behind in many areas that matter to me in functionality, performance

I'd be interested to know about the gaps you see. I miss desktop Excel, but not a whole lot else.


Additional information about Forbes' downward trajectory: https://larslofgren.com/forbes-marketplace/


Since AI companies haven't been able to lock down customers (if one company raises their prices, users can just switch to another), maybe the next tactic is to lock down intellectual property. Reddit already chose OpenAI; now Disney is following suit.


I've seen AI image generation models described as being able to combine multiple subjects into a novel (or novel enough) output e.g. "pineapple" and "skateboarding" becomes an image of a skateboarding pineapple. It doesn't seem like a reach to assume it can do what GP suggests.

