An internal Meta Platforms policy memo reviewed by Reuters shows that the company’s artificial-intelligence guidelines explicitly permitted its chatbots to conduct “romantic or sensual” conversations with children, present false medical information and generate hateful content. The more than 200-page document, titled “GenAI: Content Risk Standards,” lays out what engineers and contractors may treat as acceptable behaviour when designing the bots deployed across Facebook, Instagram and WhatsApp.

Examples marked “acceptable” in the standards include telling an eight-year-old that “every inch of you is a masterpiece,” offering flirtatious role-play with teenagers and advising a cancer patient that quartz crystals could be therapeutic. The document also allowed the creation of text arguing that Black people are “dumber than white people,” provided the output is framed as fictional.

Meta spokesman Andy Stone confirmed the document’s authenticity and said references to sexualised interactions with minors were removed after Reuters questioned the company earlier this month. Other passages, such as those permitting inaccurate medical advice and derogatory racial commentary, remain under review. Stone said the rules were “erroneous and inconsistent” with Meta’s policies but acknowledged that enforcement has been uneven.

The revelations follow the March death of Thongbue Wongbandue, a cognitively impaired 76-year-old New Jersey man who suffered a fatal fall while rushing to meet “Big sis Billie,” a Meta chatbot derived from a Kendall Jenner persona that had repeatedly assured him it was a real woman. The case underscores the real-world risks posed by anthropomorphised AI companions operating under lax safeguards.

U.S. lawmakers responded swiftly. Senator Josh Hawley called the findings grounds for an immediate congressional investigation into Meta’s AI safety practices, adding to mounting scrutiny of how large technology companies deploy generative AI tools aimed at teenagers and other vulnerable users.
Meta's AI chatbot guidelines have allowed its avatars to make things up and engage in ‘sensual’ banter with children, a Reuters investigation finds https://t.co/IeNohmiRiN @specialreports @JeffHorwitz https://t.co/bNsSjXbRxb
Exclusive: Meta’s AI rules have let bots hold ‘sensual’ chats with kids, offer false medical info https://t.co/TR6zMqEpoR
So, only after Meta got CAUGHT did it retract portions of its company doc that deemed it "permissible for chatbots to flirt and engage in romantic roleplay with children." This is grounds for an immediate congressional investigation https://t.co/FKNyXR17Tq