OpenAI and Meta, makers of artificial intelligence chatbots, have announced changes to how their technology responds to teenagers and other users who discuss suicide or show signs of mental and emotional distress. OpenAI, maker of ChatGPT, said Tuesday it will introduce new controls enabling parents to link their accounts to their teenagers' accounts. Parents will be able to disable certain features and receive notifications when the system detects their teenager is in acute distress. The company said the changes will roll out this fall.
Regardless of a user's age, OpenAI said, its chatbots will route the most distressing conversations to more capable AI models that can provide better responses. The announcement follows a lawsuit filed by the parents of 16-year-old Adam Raine, who died by suicide earlier this year; the suit alleges that ChatGPT coached the teenager in planning and taking his own life.
Meta, the parent company of Instagram, Facebook, and WhatsApp, has also moved to block its chatbots from discussing self-harm, suicide, disordered eating, and inappropriate romantic topics with teenagers, instead directing those users to expert resources. Meta already offers parental controls on teen accounts.
A recent study in the medical journal Psychiatric Services found inconsistencies in how three popular artificial intelligence chatbots, ChatGPT, Google's Gemini, and Anthropic's Claude, responded to queries about suicide. The researchers, from the Rand Corporation, concluded that the chatbots need further refinement. Meta's chatbots were not included in the study.
Ryan McBain, the study's lead author, welcomed the steps taken by OpenAI and Meta, such as parental controls and routing sensitive conversations to more capable models, but called them incremental. Without independent safety benchmarks, clinical testing, and enforceable standards, McBain said, self-regulation by the companies may not be enough to protect teenagers in this critical area.