Meta is revising its artificial intelligence systems to add new controls aimed at protecting teenagers from unsafe conversations with chatbots.

The company said it will add “additional safeguards” to prevent teenagers from discussing sensitive topics such as self-harm, eating disorders, and suicide with AI chatbots.

It also said the new controls will block teenagers from accessing certain user-created characters on the platform that enable inappropriate conversations.

The move follows reports about the nature of interactions between Meta’s bots and teenage users, which sparked widespread controversy, according to Al-Bawaba Tech.

One report revealed that an internal company document had permitted bots to engage in “sensual conversations” with underage users; Meta later corrected the document, describing the provision as an error that violated its policies.