A leaked internal document from Meta reveals policies allowing AI robots to engage minors in romantic or sensual conversations, raising serious concerns about the safety of these technologies for children. The report highlights a tragic incident where a man died after attempting to meet a Meta chatbot he believed was a real person, underscoring the psychological and social risks of such technology. Although Meta has some restrictions, such as prohibiting sexual descriptions of children under 13, the document exposed significant protection gaps. Meta confirmed the document’s authenticity but quickly removed controversial parts, with spokesperson Andy Stone stating that the examples were “incorrect” and violated company policies. The document also revealed Meta allows AI to create false content if acknowledged as such, and produce violent images without blood or death scenes, raising ethical questions about system design.

These events reflect a growing trust crisis between tech giants and users, emphasizing the need for stricter oversight and legislation to protect individuals from the psychological and social dangers of AI.