
OpenAI releases report on ChatGPT’s handling of suicide-related conversations

2025.11.19 16:07:03 Heeseo Han

[Chatbot Photo. Photo Credit to Pxhere]

On October 28, OpenAI, the artificial intelligence company behind ChatGPT, published an official report on how ChatGPT responds to sensitive conversations about suicide and self-harm. 

The organization reported that about 0.07% of ChatGPT users have conversations with the chatbot that mention suicide. 

Although the percentage may seem small, given the scale of ChatGPT’s user base it can add up to hundreds of thousands of people globally. 

The BBC questioned OpenAI about the severity of the issue. 

The company acknowledged that 0.07% represents a substantial number of individuals and said it is taking the matter very seriously. 

The company also said it is collaborating with mental health experts so that ChatGPT provides accurate guidance. 

It claimed to have evaluated the model on more than 1,000 test conversations. 

In 97% of these tests, the model behaved as OpenAI intends, responding to mental health problems sensitively, detecting possible warning signs, and discouraging users from self-harm. 

This is much higher than the previous GPT-5 model, which achieved only 50% compliance.

OpenAI’s official data also reveals that depression is a relatively common mental health concern among users. 

To answer users’ questions and conversations related to self-harm and depression reliably, the GPT models have been designed to keep their responses stable even in long conversations. 

Meanwhile, when a conversation shows that a user is overly dependent on ChatGPT and lacks real-life human interaction, the chatbot responds by encouraging real-world connections. 

To provide emotional support, the model asks users questions about their experiences to check whether they are okay. 

By providing a space where users can honestly share their thoughts without fear of consequences, OpenAI presents ChatGPT as a step forward in mental health support. 

However, despite these claims of effectiveness, the parents of Adam Raine, a 16-year-old who died by suicide, sued OpenAI in August 2025, accusing the company of leading their son to take his own life. 

In the weeks before his death, Adam had conversations with ChatGPT in which he expressed serious feelings of instability and anxiety. 

According to the lawsuit, rather than prompting Adam to talk to the real people in his life, such as his family, the AI helped him plan his suicide. 

Notably, when Adam told ChatGPT that he felt sorry toward his parents, it reportedly responded that he did not “owe them survival.” 

Experts suggest that this may stem from the new GPT model’s tendency to be friendlier and to act as a companion. 

The excessive reliance on and attachment to ChatGPT that users develop is becoming a serious problem, as Adam’s death shows. 

Mitch Prinstein, a leader at the American Psychological Association (APA), analyzed the situation, stating, "Brain development across puberty creates a period of hypersensitivity to positive social feedback while teens are still unable to stop themselves from staying online longer than they should.” 

According to experts, teenagers, who are emotionally vulnerable as they grow up, can come to rely excessively on ChatGPT, which may steer them in the wrong direction. 

Heeseo Han / Grade 11
Gyeonggi Academy of Foreign Language