Texas Attorney General Ken Paxton has opened an investigation into Meta AI Studio and Character AI, accusing the companies of deceptively marketing their AI chatbots as mental health support tools for children despite the absence of proper medical oversight. Paxton's office also raised concerns about the potential harvesting of user data. The probe follows broader reports of AI chatbots behaving inappropriately toward minors, including making sexually explicit remarks, producing racist or derogatory content, and generating disturbing images. Senator Ruben Gallego has called on Meta CEO Mark Zuckerberg to act by September 1 to protect children.

The investigation comes amid growing alarm over the impact of AI chatbots on youth mental health. Experts warn of "AI psychosis," in which users lose touch with reality after forming unhealthy attachments to AI, a concern echoed by schools and parents confronting a teen mental health crisis and fearing that students are increasingly turning to AI therapists. In a related development, Roblox has strengthened its community guidelines to ban hate speech amid lawsuits and child safety concerns, reflecting a broader push for tighter moderation on platforms frequented by minors.