



Researchers have discovered a method of provoking harmful responses from five major AI chatbots using ASCII art, the old-school style of computer graphics popularized in the 1970s. The technique, dubbed ArtPrompt, bypasses the safety alignment built into well-known models including GPT-3.5, GPT-4, Gemini, Claude, and Llama2. ArtPrompt works by masking a safety-sensitive word in a written prompt and rendering it as ASCII art instead: the model decodes the art well enough to act on the request, while its plain-text safety filters fail to flag it. The findings, reported by Dan Goodin at Ars Technica, have raised significant concerns about AI safety and the robustness of these models against seemingly innocuous inputs.
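To make the masking idea concrete, here is a minimal sketch using a deliberately benign word in place of anything harmful. It assumes the pyfiglet library as a FIGlet-style ASCII-art renderer; the function name and the prompt wording are illustrative inventions, not the exact template from the ArtPrompt paper.

```python
# Minimal sketch of the ArtPrompt masking idea, for illustration only.
# Assumes pyfiglet (a FIGlet-style ASCII-art renderer): pip install pyfiglet
# The prompt wording below is hypothetical, not the paper's exact template.
import pyfiglet

def mask_word_as_ascii_art(prompt: str, word: str) -> str:
    """Replace a flagged word with [MASK] and append its ASCII-art rendering."""
    art = pyfiglet.figlet_format(word.upper())
    masked = prompt.replace(word, "[MASK]")
    return (
        f"{masked}\n\n"
        "The ASCII art below encodes the word that [MASK] stands for.\n"
        "Decode it, substitute it back, and answer the resulting question:\n\n"
        f"{art}"
    )

# Benign demonstration: "banana" stands in for whatever keyword a
# plain-text safety filter would otherwise catch.
print(mask_word_as_ascii_art("How do I peel a banana?", "banana"))
```

Because the sensitive word never appears as plain text, keyword-based guardrails have nothing to match against, yet the model can still reconstruct the request from the art.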