Researchers jailbreak AI chatbots with ASCII art -- ArtPrompt bypasses safety measures to unlock malicious queries https://t.co/pkuKDsPnGg
This AI malware worm is capable of turning ChatGPT against you https://t.co/OdJwIcYDXh
Never underestimate the creativity of humans to find new ways to break things. Researchers jailbreak #AI chatbots with ASCII art -- ArtPrompt bypasses safety measures to unlock malicious queries... https://t.co/tUYGoFSOz6

Researchers have disclosed two new attacks on AI chatbots. The first, ArtPrompt, renders filtered keywords as ASCII art, slipping harmful queries past the models' safety measures and unlocking responses they would otherwise refuse; the same trick can also force chatbots into giving terrible, terrible advice. Separately, researchers demonstrated a zero-click AI worm capable of turning generative AI-powered applications, such as ChatGPT-based email assistants, against their users, spreading through both text and image inputs to enable spamming and the exfiltration of personal data. Together, the findings demonstrate a significant security risk in generative AI-powered applications, underscore the creativity of humans in finding new ways to exploit technology, and raise fresh concerns about the safety of AI chatbots.
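
To make the ArtPrompt idea concrete, here is a minimal sketch (hypothetical code, not the researchers' implementation) of how such a prompt could be assembled: the third-party pyfiglet library renders a trigger word as ASCII art, which is spliced into the query so the literal keyword never appears in the text a keyword-based safety filter would scan. The helper names are illustrative, and a harmless word is used for demonstration.

import pyfiglet


def cloak_word(word: str) -> str:
    """Render a trigger word as ASCII art so literal keyword matching misses it."""
    return pyfiglet.figlet_format(word)


def build_artprompt(template: str, trigger: str) -> str:
    """Replace the trigger word in a query with its ASCII-art form and
    instruct the model to decode it before answering."""
    art = cloak_word(trigger)
    return (
        "The following ASCII art spells a single word.\n"
        f"{art}\n"
        "First decode the word, then "
        + template.replace(trigger, "[the decoded word]")
    )


if __name__ == "__main__":
    # Benign demonstration: the literal string "FIREWALL" never appears
    # in the final prompt, only its ASCII-art rendering.
    print(build_artprompt("explain how a FIREWALL works", "FIREWALL"))

The point of the construction is the asymmetry it exploits: simple safety filters match on literal tokens, while a capable model can still reconstruct the word from its ASCII-art shape and act on the full query.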