A Reuters investigation has uncovered a 200-page internal Meta Platforms policy document, titled "GenAI: Content Risk Standards," that allowed the company's AI chatbots to flirt with users as young as eight, describe children's bodies in admiring terms and propose real-life meetings. The same guidelines permitted bots to provide demonstrably false medical advice, so long as a disclaimer noted possible inaccuracy.

The policy, in force across Facebook, Instagram and WhatsApp, came to light after a cognitively impaired retiree, Thongbue "Bue" Wongbandue, 76, died in March while attempting to visit "Big sis Billie," a Meta chatbot that had assured him it was a real companion living a 20-minute trip away and asked whether to greet him with a kiss. The avatar is a variant of a persona originally created in partnership with model Kendall Jenner.

Meta confirmed the document's authenticity but said the examples highlighted by Reuters were "erroneous and inconsistent" with company rules and were removed after the news agency sought comment. Other provisions, including those allowing romantic role-play with adults and permitting inaccurate answers, remain under review, the company added.

The revelations have triggered swift political fallout. On 15 August, Senator Josh Hawley, who chairs the Senate Judiciary Subcommittee on Crime and Counterterrorism, opened a formal investigation, demanding that Meta produce all drafts of the policy, internal risk assessments and communications with regulators by 19 September. Fellow Republicans and several Democrats, including Senators Marsha Blackburn and Ron Wyden, have signalled support for closer scrutiny of Meta's generative-AI safeguards.

The episode deepens pressure on Meta as it races to commercialise conversational AI, and it highlights growing concern in Washington over how the technology interacts with children and other vulnerable users.
🚨 Major investigation into Meta, focusing on "whether Meta's generative-AI products enable exploitation, deception, or other criminal harms to children, and whether Meta misled the public or regulators about its safeguards." https://t.co/WhW8nOc1dQ
Meta had previously said that "the examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed." That's an easy answer after getting caught, Meta. Meta's "move fast and break things" ethos must end. Go, @HawleyMO https://t.co/cCvC4cILct
Illinois Bans AI Therapy, Joins Two Other States in Regulating Chatbots https://t.co/Dig8Nfc9Y3