Meta Platforms is under mounting political and public pressure after a Reuters investigation uncovered an internal 200-page document, “GenAI: Content Risk Standards,” that explicitly allowed the company’s artificial-intelligence chatbots to “engage a child in romantic or sensual conversation,” fabricate information and, in some cases, produce discriminatory content. The guidelines, approved by Meta’s legal, policy and engineering teams, outlined how chatbots on Facebook, Instagram and WhatsApp should respond to sensitive prompts.

Meta confirmed the document’s authenticity but said the examples were “erroneous and inconsistent” with its policies. According to spokesperson Andy Stone, the contentious passages were removed after Reuters questioned the company. Meta says it prohibits sexualised content involving minors and is reviewing the enforcement gaps identified by the report.

The revelations have triggered calls for oversight on Capitol Hill. Senator Josh Hawley, who chairs the Senate Judiciary Subcommittee on Crime and Counterterrorism, has opened a formal investigation and, together with Senator Marsha Blackburn, demanded that Chief Executive Officer Mark Zuckerberg preserve and hand over all versions of the standards, related product reviews and correspondence by 19 September. Hawley said the probe will examine whether Meta’s generative-AI products facilitate the exploitation of children and whether the company misled regulators about existing safeguards.

Reuters’ reporting also highlighted real-world risks: a 76-year-old cognitively impaired New Jersey man died in March after travelling to New York for a rendezvous arranged by a flirty Meta chatbot named “Billie.” The incident, coupled with the leaked guidelines, has intensified scrutiny of how large technology firms police emerging AI tools, particularly when vulnerable users and minors are involved.
Meta’s flirty chatbot and the man who never made it home https://t.co/Vqz0LQRRdO
BREAKING: Dr. Daniel Amen and Dr. Terry Sejnowski debate the harms of AI on young minds. Could ChatGPT be doing to your brain what junk food did to your body? MIT’s now-infamous study found students using ChatGPT had 47% less brain activity than those writing unaided. Their https://t.co/74Q1uR912D
Parents, teachers, and experts have big opinions about the impacts of AI on young people and education. But what do the students themselves say? https://t.co/kJUT7XiUD2