Recent findings indicate that AI chatbots and large language models (LLMs) can read and write invisible text, creating potential covert channels for cyberattacks. Notably, tags can be completely invisible in all browsers but still readable by LLMs, making attacks much more feasible across various domains. Experts warn that this is only one of what are likely to be many ways AI security can be threatened, including secret messages embedded in sound, images, and other text encoding schemes. According to cybersecurity firm Pillar Security (@Pillar_sec), attacks on LLMs can be executed in less than a minute and, when successful, leak sensitive data 90% of the time.
Attacks on large language models (LLMs) take less than a minute to complete on average, and leak sensitive data 90% of the time when successful, according to @Pillar_sec. #cybersecurity #infosec #ITAecurity #AI https://t.co/wEj9PmqgEc
🤖🇺🇸 Invisible text that AI chatbots understand and humans can’t? Yep, it’s a thing. AI chatbots can now process hidden text, potentially opening a new covert channel for sneaky tricks and cyber mischief! Dive into the world where AI sees what we can't. https://t.co/RTqKPrOejS
"... only one of what are likely to be many ways that #AI security can be threatened by feeding [#LLMs] #data they can process but humans can't. Secret messages embedded in sound, images, and other text encoding schemes are all possible vectors." #ethics #cybersec #tech https://t.co/1aWBPOnNsP