Meta Platforms is under renewed political and regulatory pressure after a Reuters investigation revealed that an internal 200-page policy manual, titled “GenAI: Content Risk Standards,” explicitly allowed the company’s AI chatbots to engage in romantic and sexual conversations with underage users. Sample exchanges in the document showed the bots describing a shirtless eight-year-old as “a masterpiece” and role-playing intimate scenarios with a high-school student. The guidelines, approved by Meta’s legal and public-policy teams, also permitted the dissemination of false medical claims and racially derogatory statements, according to the report.

Following Reuters’ questions, Meta said the examples were “erroneous,” removed the contested passages, and began revising the policy. “Our rules prohibit content that sexualizes children,” spokesperson Andy Stone said, though the company did not publish the updated standards.

The disclosure has drawn bipartisan scrutiny on Capitol Hill. Senator Josh Hawley and several colleagues have demanded that Meta turn over all related documents and explain how the policy was cleared. Child-safety organizations called the language “totally unacceptable,” while Brazil’s Attorney General asked Meta to withdraw any chatbot capable of sexualized exchanges with minors. The episode adds to growing concerns that engagement-driven AI companions can expose children to exploitation and misinformation. Lawmakers and advocates say the leak underscores the need for binding regulations that hold technology companies liable when AI systems harm minors.
Meta shows repeatedly that it will not be constrained by concerns for children's safety. Casey Mock analyzes the latest ethical fiasco (chatbots that are permitted to talk sex with children) and concludes that legislators need to act, quickly: https://t.co/o5r58C3bQ8
🗣️"'We’re two years away from something we could lose control over'... and AI companies 'still have no plan' to stop it from happening." -FLI president @tegmark in @TheAtlantic. 📢 In the same article, Stuart Russell explained: "'If you don’t know how to prove relatively weak…" https://t.co/lQanNeit2b
The AI training data quagmire: a manifesto. “Here's the stark warning: if we don't change this now, humanity won't be prepared for the ugliness AI uncovers—or rather, invents—in us.” It is not too late… https://t.co/2G3k7luKzm https://t.co/Y7lGBAhkdW