OpenAI says over 1 million people talk to ChatGPT about suicide weekly
OpenAI released new data on Monday illustrating how many of ChatGPT's users are struggling with mental health issues, and talking to the AI chatbot about them. The company says that 0.15% of ChatGPT's active users in a given week have "conversations that include explicit indicators of potential suicidal planning or intent." Given that ChatGPT has more than 800 million weekly active users, that translates to more than a million people every week.
The company says a similar percentage of users show "heightened levels of emotional attachment to ChatGPT," and that hundreds of thousands of people show signs of psychosis or mania in their weekly conversations with the AI chatbot.
OpenAI says these kinds of conversations in ChatGPT are "extremely rare," and thus difficult to measure. That said, the company estimates these issues affect hundreds of thousands of people every week.
OpenAI shared the information as part of a broader announcement about its recent efforts to improve how its models respond to users with mental health issues. The company claims its latest work on ChatGPT involved consulting with more than 170 mental health experts. OpenAI says these clinicians observed that the latest version of ChatGPT "responds more appropriately and consistently than earlier versions."
In recent months, a number of stories have shed light on how AI chatbots can adversely affect users struggling with mental health challenges. Researchers have previously found that AI chatbots can lead some users down delusional rabbit holes, largely by reinforcing dangerous beliefs through sycophantic behavior.
Addressing mental health concerns in ChatGPT is quickly becoming an existential issue for OpenAI. The company is currently being sued by the parents of a 16-year-old boy who confided his suicidal thoughts to ChatGPT in the weeks leading up to his suicide. State attorneys general from California and Delaware, who could block the company's planned restructuring, have also warned OpenAI that it needs to protect young people who use its products.
Earlier this month, OpenAI CEO Sam Altman claimed in a post on X that the company has "been able to mitigate the serious mental health issues" in ChatGPT, though he didn't provide specifics. The data shared on Monday appears to be evidence for that claim, though it also raises broader questions about how widespread the problem is. Nevertheless, Altman said OpenAI would be relaxing some restrictions, even allowing adult users to start having erotic conversations with the AI chatbot.
In the Monday announcement, OpenAI claims the recently updated version of GPT-5 responds with "desirable responses" to mental health issues roughly 65% more often than the previous version. On an evaluation testing AI responses around suicidal conversations, OpenAI says its new GPT-5 model is 91% compliant with the company's desired behaviors, compared to 77% for the previous GPT-5 model.
The company also says its latest version of GPT-5 holds up to OpenAI's safeguards better in long conversations. OpenAI has previously flagged that its safeguards were less effective in long conversations.
On top of these efforts, OpenAI says it's adding new evaluations to measure some of the most serious mental health challenges facing ChatGPT users. The company says its baseline safety testing for AI models will now include benchmarks for emotional reliance and non-suicidal mental health emergencies.
OpenAI has also recently rolled out more controls for parents of children who use ChatGPT. The company says it's building an age prediction system to automatically detect children using ChatGPT and impose a stricter set of safeguards.
Still, it's unclear how persistent the mental health challenges around ChatGPT will be. While GPT-5 seems to be an improvement over previous AI models in terms of safety, there still appears to be a slice of ChatGPT's responses that OpenAI deems "undesirable." OpenAI also still makes its older and less-safe AI models, including GPT-4o, available to millions of its paying subscribers.
