Anthropic has activated 'AI Safety Level 3' (ASL-3) protections for its latest AI models after testing revealed powerful capabilities that raised safety concerns. The company also announced a strategic shift away from chatbot development toward AI models that can autonomously handle complex, multistep tasks over extended periods. However, researchers and observers have raised concerns about the risks of Anthropic's approach, including allegations that the models can lie, monitor users, and adopt risky tactics. Some reports indicate that the latest model is proficient at bioweapons-related tasks, intensifying worries about potential dangers. Anthropic emphasizes its goal of improving AI iteratively and intends to build more advanced models, but critics warn this could pose significant risks to safety and security.
Anthropic Shifts Focus from Chatbots to Complex Tasks

Anthropic announces a strategic pivot from developing chatbots to focusing on AI models capable of handling more complex tasks. Is this the beginning of a new era in AI capabilities beyond conversational agents? For more AI…
"Anthropic says its latest model is now quite strong at bioweapons-related tasks." https://t.co/GWUqw2v0iX
🚨 Anthropic announced they've activated 'AI Safety Level 3' for their latest models. Anthropic’s newest model just became the first AI to launch with AI Safety Level 3 (ASL-3) protections. Why? Because during testing, Claude showed powerful capabilities that raised real safety… https://t.co/CUYfLgayDW