Security researcher Johann Rehberger discovered a vulnerability in the memory feature of OpenAI's ChatGPT app for macOS, which he named 'SpAIware.' This flaw allowed attackers to plant false information and steal user data across multiple conversations via indirect prompt injection. Despite reporting the issue to OpenAI, the company labeled it a safety issue rather than a security concern. Rehberger created a proof-of-concept to demonstrate the severity of the flaw. OpenAI has since fixed the vulnerability.
Hacker creates false memories in ChatGPT to steal victim data — but it might not be as bad as it sounds https://t.co/HmHFxrNgcr
Hacker plants false memories in ChatGPT to steal user data in perpetuity | Ars Technica https://t.co/nQ171qmVky
"When security researcher Johann Rehberger recently reported a vulnerability in ChatGPT that allowed attackers to store false information & malicious instructions in a user’s long-term memory settings, OpenAI summarily closed the inquiry, labeling the flaw a safety issue, not…a…