Character AI, a leading platform for roleplay-focused AI chatbots, has announced significant changes in response to safety concerns. The company said it will roll out new safety features aimed at better protecting users. In a recent update, it also removed certain characters flagged for violations, causing users to lose access to their chat histories with those characters. The decision has sparked discussion on platforms like Reddit, where users question whether the platform can manage safety issues effectively. Critics have also raised doubts about the company's staffing and its capacity to handle the challenges of operating an AI role-play platform. Character AI has been contacted about these safety issues and has acknowledged the concerns raised.
We reached out to @character_ai to make them aware of some of the safety issues we found on the platform along with Sewell’s story. They posted these updates. Here’s our original story: https://t.co/ItdbAbrQO8 https://t.co/burweOgYnm
I’m angry; this is pretty inexcusable. I don’t know how Character AI is structured post-acquihire, but I would not expect them to be sufficiently staffed to address the many safety challenges of running an AI role-play platform. Better to shut it down. https://t.co/blc4rpJO8G
what is the use of @character_ai besides promoting kids mental health issues. mostly kids use this product and just for lowkey fucking weird reasons. character ai is slop and shouldnt exist or should pivot to an actual useful product that doesnt just let mentally ill kids live…