Growing concern over young people's interactions with AI chatbots has prompted Meta to introduce new tools that let parents monitor their children's chatbot activity, while some regions are considering banning AI chatbot use by young people altogether. Parents who use Meta's Teen Accounts supervision feature on Facebook, Instagram and Messenger can see the topics their children have discussed with the AI chatbot over the previous seven days, including subjects such as health and well-being. Meta is also developing alerts to notify parents if teens attempt to discuss sensitive topics such as suicide or self-harm with the chatbot.
Meanwhile, provincial governments are moving to restrict the use of AI chatbots. Manitoba recently announced plans to ban youth from using AI chatbots and social media platforms. In a similar move, B.C.'s attorney general said the provincial government would consider such restrictions if the federal government fails to provide adequate protections for youth on AI chatbots and social media.
Legal Actions Holding AI Creators Responsible
Concerns are mounting about the potential mental health risks of heavy AI chatbot use, particularly among younger users, putting increased pressure on the tech companies behind these systems. Families of victims of the Tumbler Ridge, B.C., shooting have filed a lawsuit against OpenAI, alleging the company did not notify authorities despite being aware of troubling content the shooter shared with ChatGPT. OpenAI says it has strengthened its safety measures, including improving how the chatbot responds to signs of distress. A separate lawsuit alleges that ChatGPT contributed to a teenager's suicide.
AI Chatbots and Mental Health Concerns
Research is beginning to highlight the risks of certain uses of AI chatbots, particularly for mental health support. Psychiatrist Darja Djordjevic recently conducted a risk assessment of chatbots used for mental health support and concluded that current systems are not safe for addressing a range of mental health conditions in young people. While chatbots may respond adequately to direct mental health prompts in short exchanges, they often falter in longer conversations and fail to detect warning signs of mental health issues. Djordjevic emphasized that these chatbots are designed for engagement rather than for support and safety.

AI companies have focused primarily on suicide and self-harm prevention, but Djordjevic stresses that young people need support for a much wider range of issues. Many teenagers turn to AI for companionship and emotional support.

