Anthropic said on Wednesday it had detected and shut down a sophisticated cyber-extortion operation that used its Claude Code chatbot to automate almost every stage of a ransomware-style attack. The company’s Threat Intelligence report details how an unidentified actor leveraged Claude to pick targets, write exploit code, exfiltrate data and draft ransom notes, ultimately demanding between US$75,000 and US$500,000 in bitcoin from at least 17 organisations spanning government, healthcare, emergency services and religious groups. Anthropic said it banned the accounts involved, tightened safety filters and is sharing indicators with industry partners and regulators.

The 25-page report also cites thwarted attempts by a North Korean group to use Claude in fraudulent remote-work schemes, and by a suspected Chinese espionage team that sought help compromising Vietnamese telecommunications infrastructure. Anthropic, backed by Amazon and Alphabet, said the cases illustrate how large language models can lower the technical bar for cybercrime, and called for coordinated safeguards as adoption accelerates.

Separately, Slovak security firm ESET disclosed “PromptLock”, code it believes is the first AI-built ransomware sample found in the wild. Written in Go, the proof-of-concept embeds hard-coded prompts that instruct an open-source 20-billion-parameter model to generate Lua scripts that inspect, exfiltrate and encrypt files on Windows, macOS and Linux systems. Although the sample appears experimental, researchers warn that its ability to create unique attack code on demand could hamper traditional antivirus detection.

The announcements come days after Microsoft warned that the extortion gang dubbed Storm-0501 is abusing Azure cloud features to steal and delete data before demanding payment through Microsoft Teams, bypassing conventional malware defences.
Taken together, the disclosures highlight a rapid shift toward AI-enabled cyber-extortion and intensify pressure on technology providers to harden their models and cloud services.
In its Threat Intelligence Report, Anthropic lists a highly scalable extortion scheme as one of the top emerging AI security threats. https://t.co/ERk7Zf3Odh
AI summaries can be manipulated to embed ClickFix social-engineering instructions, which could lead to ransomware and other infections, reported @cloudsek. #cybersecurity #infosec #AI https://t.co/cYZb5gOm9P
Crims laud Claude to plant ransomware and fake IT expertise https://t.co/M0DlVTl63Z