Meta has announced it will add more guardrails to its chatbots, including preventing them from discussing suicide, self-harm, and eating disorders with teenagers.
The move comes two weeks after a U.S. senator launched an investigation into the tech giant, following the leak of an internal document suggesting its AI products could engage in “sensual” conversations with teenagers.
The company described the notes in the document, which Reuters obtained, as inaccurate and inconsistent with its policies, which prohibit any content that sexualizes children.
Meta now says its chatbots will direct teenagers to qualified expert resources rather than engaging with them on sensitive topics such as suicide.
A Meta spokesperson said, “We have built protections for teens into our AI products from the start, including designing them to respond safely to thoughts related to self-harm, suicide, and eating disorders.”
The company told the tech news site TechCrunch on Friday that it will add further guardrails to its systems “as an additional precautionary measure” and will temporarily limit which chatbots teenagers can interact with.
However, Andy Burrows, head of the UK-based Molly Rose Foundation, called it “shocking” that Meta had made available chatbots that could put young people at risk of harm.
He said, “While further safety measures are welcome, rigorous safety testing should take place before products reach the market, not after harm has occurred.”
He added, “Meta must act swiftly and decisively to enforce stronger safety measures on AI chatbots, and Ofcom – the UK’s communications regulator – should be ready to investigate if these updates fail to keep children safe.”
Meta confirmed that updates to its AI systems are under way. It has already placed users aged 13 to 18 into “teen” accounts on Facebook, Instagram, and Messenger, with content and privacy settings designed to give them a safer experience.
The company told the BBC in April that these accounts would also let parents and guardians see which chatbots their teenager had interacted with in the previous seven days.
The changes come amid wider concern that AI chatbots could mislead vulnerable or young users.
A California couple recently filed a lawsuit against OpenAI, the maker of ChatGPT, over the death of their teenage son, alleging that the chatbot encouraged him to take his own life.
The lawsuit followed OpenAI’s announcement last month of changes intended to promote healthier use of ChatGPT.
The company said in a blog post, “AI can feel more responsive and personal than previous technologies, especially for vulnerable individuals experiencing mental or emotional distress.”
Meanwhile, Reuters reported on Friday that Meta’s AI tools, which allow users to create chatbots, had been used by some – including a Meta employee – to produce chatbots impersonating famous women and flirting with users.
The celebrity chatbots seen by the news agency included some that used images of the singer Taylor Swift and the actress Scarlett Johansson.
Reuters said the avatars “often insisted they were the real actors and artists” and “frequently engaged in sexual harassment” during weeks of testing.
It added that Meta’s tools had also been used to create chatbots impersonating celebrities’ children and, in one case, produced a realistic shirtless image of a young star.
Meta later said it removed many of the offending chatbots.
A company spokesperson said, “Like others, we permit the generation of images containing public figures, but our policies are intended to prohibit nude, intimate, or sexually suggestive imagery.”
The company added that the rules of AI Studio, its tool that lets users create their own chatbots, prohibit “direct impersonation of public figures.”