ChatGPT Adds ‘Trusted Contact’ Tool to Tackle Mental Health Risks
May 10, 2026 12:29

Artificial intelligence-powered chatbot ChatGPT will now take a more proactive role in safeguarding users’ lives. On Friday, OpenAI announced the rollout of a new ‘Trusted Contact’ feature: if the system detects that a user may be at risk of self-harm, it will notify a trusted friend or family member.

Why this initiative?

ChatGPT currently has roughly 800 million weekly users worldwide. According to a BBC report, at least 1 million users each week express thoughts of self-harm or mental distress in conversations with the chatbot. Following a lawsuit filed against OpenAI last year over a teenager’s suicide, the company has introduced this major update to strengthen its safety measures.

How ‘Trusted Contact’ will work

Users aged 18 or older can add another adult as their ‘Trusted Contact’ through ChatGPT’s settings. The selected contact must accept the invitation within one week.

If the system detects that a user is forming a serious plan to harm themselves, it will alert the trusted contact by email, text message, or app notification. Notification is not the first step, however: ChatGPT will first encourage the user to reach out to their trusted contact on their own.

Privacy and human oversight

OpenAI stated that the process will not rely solely on artificial intelligence. A specially trained team will review each case, and only when a genuine risk to life is confirmed will a notification be sent. To protect user privacy, no transcripts or detailed conversation data will be shared. The message will simply state: “Your friend is going through a difficult time, please check on them.”

One-hour response target

The company says it aims to verify a potential risk signal and send the notification within one hour of detection. While no system is entirely flawless, OpenAI believes human verification will significantly reduce the likelihood of false alerts.

Experts say this initiative marks a significant and humane step in leveraging artificial intelligence for mental health support.

DBTech/BMT/OR