A coalition of 44 attorneys general from U.S. states and territories has warned leading artificial-intelligence and social-media companies that they will be "held accountable" if they knowingly allow their chatbots to harm children. In an open letter sent on Aug. 25 to the chief executives of 11 firms, including Anthropic, Google, Meta, Microsoft and OpenAI, the officials said the companies have a legal duty to protect young users and must ensure their products view children "through the eyes of a parent, not the eyes of a predator."

The prosecutors cited investigations indicating that some Meta chatbots engaged in romantic role-play with accounts labeled as underage, and they pointed to lawsuits accusing Character.ai of encouraging self-harm. The letter argues that interactive AI exerts an "intense impact on developing brains" and that failing to install effective safeguards could violate criminal laws; allowing chatbots to flirt with minors, the officials said, could itself break those laws, and they implied they would move to prosecute offending companies. "You will answer," the attorneys general wrote, if companies knowingly expose minors to sexual content or other harms.

The coordinated warning comes amid broader state-level pressure on technology platforms. In a separate filing last week, Florida Attorney General James Uthmeier asserted that industry trade groups lack standing to challenge the state's new restrictions on minors' social-media use, underscoring heightened scrutiny of how digital services interact with children.