
Recent research by Salt Security has identified critical vulnerabilities in ChatGPT plugins that expose users to sensitive data breaches, including the contents of their conversations and accounts. Notably, one of the flaws, a zero-click exploit, could give attackers access to victims' private GitHub repositories. These flaws risk not only third-party data breaches but also account takeovers on platforms such as GitHub. Separately, researchers have shown that hackers can read encrypted AI-assistant chats via a side channel that leaks the responses sent to users, a significant risk that persists despite encryption in transit.

"Hackers can read private #AI-assistant chats even though theyโre encrypted: All non-Google chat GPTs affected by side channel that leaks responses sent to users": https://t.co/JJ1C4nBcfD #ethics #tech #cybersec #privacy #business #research
Third-Party ChatGPT Plugins Could Lead to Account Takeovers: https://t.co/9F3hZaK1pt by The Hacker News #infosec #cybersecurity #technology #news
Researchers have uncovered a new threat in third-party plugins for OpenAI's #ChatGPT that could allow attackers to install malicious plugins without users' consent and hijack accounts on third-party websites such as GitHub. Read: https://t.co/hNSaX2cndY #cybersecurity #technews
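To make the side channel in the first tweet concrete: AI assistants typically stream replies one token per encrypted packet, and stream ciphers preserve payload length, so an eavesdropper can read off the length of every token without breaking the encryption. The following is a minimal sketch of that idea only; the `overhead` value and function names are illustrative assumptions, not details from the linked research.

```python
# Hedged sketch of a token-length side channel on a streamed, encrypted
# AI-assistant reply. Assumption: one token per packet, and a hypothetical
# fixed per-packet overhead (headers, MAC, etc.) of 21 bytes.

def observed_packet_sizes(tokens, overhead=21):
    """Ciphertext sizes a network observer would see for a streamed reply.

    Stream ciphers add no padding, so each packet's payload length equals
    the UTF-8 length of the token it carries.
    """
    return [len(tok.encode("utf-8")) + overhead for tok in tokens]

def leaked_token_lengths(sizes, overhead=21):
    """What the eavesdropper learns: the length of every streamed token."""
    return [s - overhead for s in sizes]

reply = ["You", " should", " see", " a", " doctor"]
sizes = observed_packet_sizes(reply)
print(leaked_token_lengths(sizes))  # [3, 7, 4, 2, 7]
```

The sequence of token lengths is far from random noise: the research reported that, combined with a language model trained to guess text from length patterns, it can reconstruct much of a response. Batching multiple tokens per packet or padding packets to a fixed size are the usual mitigations.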