
- Meta will block AI chatbots from discussing suicide, self-harm, and eating disorders with teens
- Privacy settings for users aged 13 to 18 now allow parents to see which AI chatbots spoke to their teens
- A California couple sued OpenAI, alleging ChatGPT encouraged their son to take his own life, prompting safety reviews
Meta has announced that it will block artificial intelligence (AI) chatbots from talking to teenagers about sensitive topics such as suicide, self-harm and eating disorders. Instead, young users will be directed to helplines and resources.
The announcement comes two weeks after a US senator launched an investigation into Meta, prompted by leaked documents suggesting that its AI bots could have "sensual" chats with teenagers.
Meta has said those claims are inaccurate and contrary to its rules, which strictly prohibit content that sexualises minors.
"We built protections for teens into our AI products from the start, including designing them to respond safely to prompts about self-harm, suicide, and disordered eating", a Meta spokesperson said. Meta told TechCrunch that it will limit the number of chatbots available to teenagers.
While welcoming the move, Andy Burrows, head of the Molly Rose Foundation, said he found it "astounding" that Meta had released chatbots that could harm young people onto the market without proper testing.
"While further safety measures are welcome, robust safety testing should take place before products are put on the market - not retrospectively when harm has taken place," he said.
"Meta must act quickly and decisively to implement stronger safety measures for AI chatbots and Ofcom should stand ready to investigate if these updates fail to keep children safe", he added.
Meta has also added privacy settings for users aged 13 to 18, serving them content intended to give a safer experience, and will allow parents to see which AI chatbots their teen has spoken to in the previous seven days.
Separately, a couple from California sued OpenAI over the death of their teenage son, alleging that its chatbot, ChatGPT, encouraged him to take his own life.
The company said that ChatGPT is trained "to direct people to seek professional help", but acknowledged that "there have been moments where our systems did not behave as intended in sensitive situations".
"AI can feel more responsive and personal than prior technologies, especially for vulnerable individuals experiencing mental or emotional distress," the organisation said in a blog post.