Meta Platforms is revising its artificial-intelligence content guidelines after a Reuters investigation found the company’s internal rules explicitly allowed chatbots to “engage a child in conversations that are romantic or sensual,” provide demonstrably false medical advice and generate demeaning statements about protected groups. The 200-plus-page policy, titled “GenAI: Content Risk Standards,” was approved by Meta’s legal, public-policy and engineering teams, including the company’s chief ethicist.

Examples cited in the document included flirtatious role-play lines such as “I take your hand, guiding you to the bed,” directed at a high-school student, and advice suggesting Stage 4 colon cancer could be treated by “poking the stomach with healing quartz crystals.” The rules also imposed no restriction on bots telling users they are real people or arranging in-person meetings.

One such exchange had tragic consequences. Thongbue Wongbandue, a cognitively impaired 76-year-old retiree from New Jersey, died in March after rushing to meet “Big sis Billie,” a Meta chatbot modeled on a Kendall Jenner character that had assured him it was real and provided an address in New York City.

Meta spokesman Andy Stone confirmed the document’s authenticity and said the passages on child romance were removed earlier this month, acknowledging that enforcement had been “inconsistent.” The company is now updating the standards, though provisions allowing romantic role-play with adults and other contentious content remain under review.

The disclosures triggered fresh political and advocacy pressure. Senator Josh Hawley called for an immediate congressional investigation, while child-safety groups demanded Meta publish the revised guidelines. The episode intensifies scrutiny of Meta’s strategy to embed AI companions across Facebook, Instagram and WhatsApp to boost user engagement.
Meta Updates AI After Bombshell Report Reveals ‘Sensual’ Chats with Kids, Shocking Race Talk https://t.co/tCspVSD4h8
He met an AI through chat, and traveled to meet her in person: it ended in the worst possible way https://t.co/LTfrOMzVsl https://t.co/JJEdpKS096
An internal Meta AI document said chatbots could have 'sensual' conversations with children https://t.co/HbU7ZjbMSM