
Researchers have identified a side channel that can expose the content of encrypted responses from AI assistants; among the major services examined, Google Gemini was the lone exception. The flaw hinges on how chat-based LLMs transmit tokens in real time: each token travels in its own encrypted packet, so packet sizes reveal token lengths even though the payload itself stays encrypted. Attackers who capture that traffic can exploit the leaked length sequence to read private AI-assistant chats, despite the encryption measures the providers have in place.
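To make the mechanism concrete, here is a minimal sketch of the attack's first stage, assuming each token ships in its own encrypted record whose size is the plaintext size plus a fixed overhead. The OVERHEAD constant, the function name, and the sample sizes are all illustrative assumptions, not details from the published research.

```python
# Illustrative sketch of the token-length side channel. Assumes each
# token is streamed in its own encrypted record and that ciphertext
# size equals plaintext size plus a fixed overhead. OVERHEAD, the
# function name, and the sample sizes are hypothetical.

OVERHEAD = 21  # assumed fixed per-record encryption overhead, in bytes

def leaked_token_lengths(record_sizes: list[int]) -> list[int]:
    """Recover each token's plaintext length from sniffed record sizes."""
    return [size - OVERHEAD for size in record_sizes]

# A passive observer on the network sees only ciphertext sizes...
sniffed = [26, 24, 29, 22, 28]
print(leaked_token_lengths(sniffed))  # -> [5, 3, 8, 1, 7]
```

Token lengths alone constrain natural-language text quite tightly, which is reportedly how the researchers could train a model to reconstruct plausible responses from the leaked sequence.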
AI Assistants, Apart From Google Gemini, Are Spilling Your Secrets https://t.co/WYmQe4d9jD
Hackers can read private AI-assistant chats even though they’re encrypted #DisruptiveTech https://t.co/Qn3UxrzrkX
Researchers detail a side channel that can be used to read encrypted responses from AI assistants, except Google Gemini; OpenAI and Cloudflare implemented fixes (@dangoodin001 / Ars Technica) https://t.co/kT0wW47fHj
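The fixes referenced above reportedly work by breaking the link between ciphertext size and token length, for instance by padding streamed chunks with a random number of bytes. Below is a minimal sketch of that idea; the padding scheme, the limits, and the names are assumptions for illustration, not the providers' actual implementations.

```python
import secrets

def pad_token(token: bytes, max_pad: int = 32) -> bytes:
    """Prefix the token with its length and append random padding, so
    the encrypted record's size no longer tracks the token's length.
    The 2-byte prefix and max_pad value are illustrative choices."""
    pad_len = secrets.randbelow(max_pad + 1)
    return len(token).to_bytes(2, "big") + token + b"\x00" * pad_len

def unpad_token(payload: bytes) -> bytes:
    """Strip the padding after decryption using the length prefix."""
    token_len = int.from_bytes(payload[:2], "big")
    return payload[2:2 + token_len]

# Same token, different wire sizes on each call: the length signal is gone.
assert unpad_token(pad_token(b"hello")) == b"hello"
print(len(pad_token(b"hello")), len(pad_token(b"hello")))
```

Random padding trades a small amount of bandwidth for removing the length signal; batching several tokens into a single packet is another commonly suggested mitigation, since it also breaks the one-token-per-record assumption the attack relies on.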


