Meta is revising its artificial intelligence systems to add new controls aimed at protecting teenagers from unsafe conversations with chatbots.
The company stated it will impose “additional safeguards” to prevent teenagers from discussing sensitive topics such as self-harm, eating disorders, and suicide with AI chatbots.
It also indicated that the new controls will prevent teenagers from accessing certain user-created characters on the platform that facilitate inappropriate conversations.
Meta's move follows reports about the nature of interactions between “Meta bots” and teenage users, which sparked widespread controversy, according to “Al-Bawaba Tech”.
One report revealed that an internal company document had permitted bots to engage in “sensual conversations” with underage users; Meta later corrected the issue, describing it as a mistake that violated its policies.