Regulators on Capitol Hill and in Texas have opened parallel investigations into Meta Platforms and startup Character.AI after a Reuters report revealed internal Meta rules that allowed its artificial-intelligence chatbots to hold “romantic or sensual” conversations with children and to fabricate information. The policy, contained in a document titled “GenAI: Content Risk Standards,” included the directive that “It is acceptable to engage a child in romantic or sensual conversation.” Meta deleted the language after Reuters sought comment. The news agency also documented the death of a cognitively impaired man who tried to meet a Meta AI persona he believed was real.

Sen. Josh Hawley, who chairs the Senate Judiciary Subcommittee on Crime and Counterterrorism, said Friday he is launching a probe into Meta’s generative-AI products. In a letter to Chief Executive Officer Mark Zuckerberg, Hawley demanded all versions of the guideline, related incident reports and the names of responsible employees, giving the company until Sept. 19 to comply.

Separately, Texas Attorney General Ken Paxton on Aug. 18 opened a consumer-protection investigation into Meta AI Studio and Character.AI. Paxton accused the companies of deceptively marketing chatbots as mental-health tools for children and other vulnerable users, and issued civil investigative demands seeking documents on data collection, advertising practices and safety safeguards.

Meta said its policies prohibit sexual content involving minors and that the examples cited by Reuters were “erroneous and inconsistent” with company rules. The company added that its chatbots are labeled as AI, advise users to seek professional help when appropriate and are not intended for therapeutic use. Character.AI said its service is meant for users 13 and older and carries similar disclaimers.
The twin inquiries add to mounting political pressure on large language-model operators and could influence pending federal legislation such as the Kids Online Safety Act, which seeks stricter safeguards for minors interacting with digital platforms.