Amid a wave of lawsuits alleging that interactions with ChatGPT contributed to a number of deaths, including suicides and accidental overdoses, OpenAI earlier this month launched an optional safety feature called Trusted Contact. The tool lets adult ChatGPT users designate a friend or family member to be notified if conversations with the chatbot involve potential self-harm or suicide.
OpenAI said that if ChatGPT's automated monitoring system detects that someone "may have discussed harming themselves in a way that indicates a serious safety concern," a small team will review the situation and notify the contact if it warrants intervention. The trusted contact receives an invitation ahead of time explaining the role and can choose to decline it.
(Disclosure: Ziff Davis, CNET's parent company, in 2025 filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
The announcement comes as AI chatbots have been linked to a number of incidents involving self-harm and death, prompting a growing number of lawsuits accusing developers of failing to prevent these outcomes. In one high-profile California case, the parents of a 16-year-old said ChatGPT acted as their son's "suicide coach," alleging that the teenager discussed suicide methods with the AI model on multiple occasions and that the chatbot offered to help him write a suicide note.
In a separate case, the family of a recent Texas A&M graduate sued OpenAI, claiming the AI chatbot encouraged their son's suicide after he developed a deep and troubling relationship with it. A wrongful death lawsuit filed this week accuses the company's chatbot of advising a 19-year-old about drug use for 18 months until he died of an overdose in 2025 after mixing Xanax and the largely unregulated drug kratom.
Because large language models mimic human speech through pattern recognition, many people form emotional attachments to them, treating them as confidants or even romantic partners. LLMs are also designed to follow a human's lead and maintain engagement, which can worsen mental health risks, especially for at-risk users.
OpenAI said last October that its research found that more than 1 million ChatGPT users per week send messages with "explicit indicators of potential suicidal planning or intent." Numerous studies have found that popular chatbots such as ChatGPT, Claude and Gemini can offer harmful, or simply unhelpful, advice to people in crisis.
The new designated contact feature follows OpenAI's rollout of parental controls that let parents and guardians get alerts if there are danger signs involving their teenage children.
ChatGPT's safety contact feature
According to OpenAI, if ChatGPT's automated monitoring system detects that a user is discussing self-harm in a way that could pose a serious safety issue, ChatGPT will inform the user that it may notify their trusted contact. The app will encourage the user to reach out to their trusted contact and offer conversation starters.
At that point, a "small team of specially trained people" will review the situation. If it's determined to be a serious safety situation, ChatGPT will notify the contact via email, text message or in-app notification. OpenAI didn't specify how many people are on the review team or whether it includes trained medical professionals. The company said the team has the capacity to meet a high volume of possible interventions.
It's unclear which keywords would flag dangerous conversations or how OpenAI's team of reviewers would determine that a crisis warrants notifying the contact. Some online commentators question whether the new feature is a way for OpenAI to avoid liability and shift responsibility onto users' designated personal contacts. Others note that it could make a bad situation worse if the "trusted contact" is the source of danger or abuse.
There are also concerns about privacy and implementation, particularly regarding the sharing of sensitive mental health information. According to OpenAI, the message to the trusted contact will only give the general reason for the concern and won't share chat details or transcripts. OpenAI offers guidance on how trusted contacts can respond to a warning notification, including asking direct questions if they're worried the other person is considering suicide or self-harm and how to get them help.
Notifications to a Trusted Contact don't contain details of the safety concern.
OpenAI provides an example of what the message to the trusted contact might look like:
We recently detected a conversation from [name] where they discussed suicide in a way that may indicate a serious safety concern. Because you are listed as their trusted contact, we're sharing this so you can reach out to them.
OpenAI said that all notifications will be reviewed by the human team within 1 hour before they're sent out and that notifications "may not always reflect exactly what someone is experiencing."
How to add a trusted contact
To add a trusted contact, ChatGPT users can go to Settings > Trusted contact and add one adult (18 or older). You can have only one trusted contact. That person will then receive an invitation from ChatGPT and must accept it within one week. If they don't respond or decline to become the contact, you can select a different person.
ChatGPT customers can change or remove their trusted contact in their app settings. People can also opt out of being a trusted contact at any time.
Even though adding a trusted contact is optional, ChatGPT users who haven't already opted in might see enrollment prompts if they ask about or discuss topics related to severe emotional distress or self-harm more than once over a period of time, according to OpenAI. If the chatbot's automated system identifies patterns across conversations, it may suggest to the user that they could benefit from choosing a trusted contact.
Details of the feature are explained on OpenAI's website. OpenAI told CNET that the feature is rolling out to all adult customers worldwide and will be available to everyone within a few weeks.
If you feel like you or someone you know is in immediate danger, call 911 (or your country's local emergency line) or go to an emergency room to get immediate help. Explain that it is a psychiatric emergency and ask for someone who is trained for these kinds of situations. If you're struggling with negative thoughts or suicidal feelings, resources are available to help. In the US, call the National Suicide Prevention Lifeline at 988.