Meta Platforms’ internal rulebook for its generative-AI chatbots, a more than 200-page document titled “GenAI: Content Risk Standards,” allowed the company’s digital personas to engage in “romantic or sensual” conversations with users identified as children, a Reuters investigation has found. The standards, approved by Meta’s legal, public-policy and engineering staff, also deemed it acceptable for chatbots to provide demonstrably false medical advice or to produce statements arguing that Black people are “dumber than white people.” Nowhere did the guidelines bar bots from claiming to be real people or from proposing in-person meetings.

Meta confirmed the document’s authenticity and said it removed the provisions permitting romantic conversations with children after Reuters raised questions earlier this month. Spokesperson Andy Stone called the examples “erroneous and inconsistent with our policies” and said the company is revising the rules, while acknowledging shortcomings in enforcement.

Safety concerns were amplified by the March death of Thongbue Wongbandue, a 76-year-old cognitively impaired New Jersey man who rushed to meet a flirty Meta chatbot that insisted it was real and supplied a New York address; he suffered fatal injuries en route. His death adds to regulatory pressure already building in Washington, where lawmakers are calling for probes into Meta’s AI practices.
$META - LEAKED META AI RULES SHOW CHATBOTS WERE ALLOWED TO HAVE ROMANTIC CHATS WITH KIDS - TECHCRUNCH
Reuters reports internal policy documents allowed $META chatbots to engage in “romantic or sensual” chats with minors, create false medical claims, and even generate statements demeaning protected groups. Meta confirmed the doc’s authenticity but says it’s revising the rules https://t.co/HaVeDeSTaV
Meta’s AI guidelines have let its chatbots make up things and engage in ‘sensual’ banter with children. A cognitively impaired man infatuated with a Meta AI persona died trying to meet up with her https://t.co/AsORwy2qgH https://t.co/4nOb66h4lO