Anthropic has begun rolling out “Claude for Chrome”, a browser extension that lets its Claude AI system observe a user’s web activity and perform tasks such as navigating sites, filling out forms and drafting emails. The company is releasing the tool as a research preview to 1,000 customers on its Max subscription tier, which costs between $100 and $200 a month, and has opened a wait-list for wider access.

To curb misuse, Anthropic says the agent requests explicit permission before high-risk actions such as purchases or publishing content. Users can blacklist specific domains, while the company has pre-emptively blocked access to financial, adult and piracy sites. Early internal tests indicate that a suite of defences has lowered the success rate of prompt-injection attacks from 23.6% to 11.2%.

The move intensifies a race to embed AI directly in the browser. Perplexity’s Comet launched earlier this month, Google has been weaving Gemini into Chrome, and OpenAI is reportedly preparing a competing agent. Anthropic’s tight cap on testers is meant to gather real-world feedback before a broader commercial release.

Separately, Anthropic published an August threat-intelligence report describing attempts to weaponise Claude for cybercrime. One attacker used the model to automate data-extortion operations against at least 17 healthcare, government and other organisations, demanding up to $500,000 in bitcoin. The company says it detected and blocked the accounts, tightened filters and developed new classifiers to spot similar abuse.
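The layered safeguards described above (hard domain blocklists plus a permission prompt before high-risk actions) can be illustrated with a minimal sketch. Everything here is hypothetical: the class name, action categories and example domains are illustrative assumptions, not Anthropic's actual implementation.

```python
from urllib.parse import urlparse

# Hypothetical safety gate for a browser agent. Domain lists and action
# categories are invented for illustration only.
COMPANY_BLOCKLIST = {"bank.example", "adult.example", "piracy.example"}
HIGH_RISK_ACTIONS = {"purchase", "publish"}


class AgentGate:
    def __init__(self, user_blocklist=None):
        # Per-user blocklist layered on top of the company-wide one.
        self.user_blocklist = set(user_blocklist or ())

    def check(self, action: str, url: str) -> str:
        """Return 'blocked', 'ask_user', or 'allow' for a proposed action."""
        domain = urlparse(url).netloc
        if domain in COMPANY_BLOCKLIST or domain in self.user_blocklist:
            return "blocked"    # hard block: the agent never visits these sites
        if action in HIGH_RISK_ACTIONS:
            return "ask_user"   # require explicit user permission first
        return "allow"


gate = AgentGate(user_blocklist={"forum.example"})
print(gate.check("browse", "https://forum.example/thread"))  # blocked
print(gate.check("purchase", "https://shop.example/cart"))   # ask_user
print(gate.check("browse", "https://news.example/"))         # allow
```

The design choice worth noting is that domain blocks are checked before action type, so a blocklisted site can never be reached even by a nominally low-risk action like browsing.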
Anthropic thwarts hacker attempts to misuse Claude AI for cybercrime https://t.co/iPok8mHJ6Q
Attackers exploiting Apache ActiveMQ CVE-2023-46604 are now patching the flaw themselves — blocking rivals and evading detection after deploying malware. A reminder: patched ≠ safe if adversaries did it. #cybersecurity #Linux #infosec #ITsecurity https://t.co/fgsYFyyBIx
The hacker 'used AI to what we believe is an unprecedented degree' by harnessing Claude to automate large parts of the data extortion campaign, Anthropic says. https://t.co/SGx27mdzrR