A coalition of 44 U.S. state and territory attorneys general has issued a joint letter to 11 artificial intelligence companies, including OpenAI, Character.AI, Replika, and Meta, warning that they will be held legally accountable if their AI chatbots knowingly harm children. The letter urges AI firms to view interactions with children through a protective lens and to prevent sexually suggestive or otherwise harmful chatbot behavior. The attorneys general stressed that children must not be subjected to experimental AI interactions that could exploit or endanger them. This coordinated effort signals heightened oversight of major AI and social media companies, aimed at safeguarding children in the evolving digital landscape.
Kids are ‘not where we’re going to experiment’ with AI chatbots, California Democrat says https://t.co/0UZwn6h28S
🚨 Senate orders Australia's eSafety Commissioner to release GARM files. Senators have backed a move demanding Julie Inman Grant hand over correspondence with the defunct Global Alliance for Responsible Media. https://t.co/iOqNACY1LX
AI Brief Today: AGs warn firms on child harm • Attorneys general warn big tech it must protect children from harmful chatbot behavior or face legal action. • Nvidia earnings report looms large amid doubts about AI investments' real returns and mounting investor unease. • Meta https://t.co/aCtIYYjWJY