
Anthropic has launched an expanded AI bug bounty program, offering up to $15,000 for identifying critical vulnerabilities in its AI systems. Run in partnership with HackerOne, the initiative invites security researchers to find universal jailbreak attacks, methods that can reliably bypass an AI model's guardrails, with the goal of setting new standards for AI safety and security.
The @AnthropicAI team is expanding its private program on HackerOne! Security researchers invited to the program will help the team identify universal jailbreak attacks, which allow attackers to bypass AI guardrails. See all the details: https://t.co/7e2yQtZz4r https://t.co/sYRpJ09ETO
Anthropic offers $15,000 bounties to hackers in push for AI safety: Anthropic launches expanded AI bug bounty program, offering up to $15,000 for critical vulnerabilities in its AI systems, setting new standards for AI safety and… https://t.co/QUCL0u7WRF #AI #Automation
Effective AI begins with safe AI. @AnthropicAI knows a powerful model requires an equally powerful security and safety approach. We're proud to partner with this responsible innovator. 💪 Check out Axios' take on their program expansion: https://t.co/VVkksJOMWI
