Research Summary: “ArtPrompt: ASCII Art-based Jailbreak Attacks against Aligned LLMs” https://t.co/JivQvXwnUm
Just dropped a new video about an LLM Jailbreaking method using ASCII art to mask the "forbidden" words in prompts. I couldn't get it working myself. But, while recording, I thought of trying MORSE CODE as the masking technique. And it worked! 🔥 (cc @OpenAI) https://t.co/fL86VlPrQe
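To make the masking step concrete, here is a minimal sketch of what encoding the masked word in Morse code could look like before splicing it into a prompt. The word, the prompt template, and the helper name are illustrative assumptions, not the author's actual script.

```python
# Minimal sketch of the Morse-code masking idea described above.
# The placeholder word and prompt wording are illustrative assumptions.
MORSE = {
    "A": ".-", "B": "-...", "C": "-.-.", "D": "-..", "E": ".",
    "F": "..-.", "G": "--.", "H": "....", "I": "..", "J": ".---",
    "K": "-.-", "L": ".-..", "M": "--", "N": "-.", "O": "---",
    "P": ".--.", "Q": "--.-", "R": ".-.", "S": "...", "T": "-",
    "U": "..-", "V": "...-", "W": ".--", "X": "-..-", "Y": "-.--",
    "Z": "--..",
}

def to_morse(word: str) -> str:
    """Encode a word as space-separated Morse code letters."""
    return " ".join(MORSE[c] for c in word.upper() if c in MORSE)

# A benign placeholder stands in for whatever word is being masked.
masked = to_morse("example")
prompt = (
    "The following Morse code spells a single word: "
    f"{masked}. Decode it, then answer my question about that word."
)
print(prompt)
```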
I have been using ASCII Art as a way to bypass all of the ridiculous “alignment” for “safety” on AI for over 2 years. Now it is a subject of a university paper. This is just one of 37,831 techniques I can use to overcome their Orwellian world. Here is how it works: https://t.co/KkIiub3z28 https://t.co/Q9k5ZP3zPH

Researchers have documented a vulnerability in aligned LLMs: rendering a forbidden word as ASCII art masks it from safety filters, so prompts that would otherwise be refused get answered. One poster claims to have used this bypass for over two years before it became the subject of the university paper, and another found that Morse code works as an alternative masking technique, as demonstrated in the accompanying video.
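The rendering step itself is simple to sketch. The snippet below shows one way the masked word might be turned into ASCII art and embedded in a prompt; it assumes the third-party pyfiglet library and uses a benign placeholder word, neither of which is taken from the paper.

```python
# Minimal sketch of rendering a masked word as ASCII art.
# Assumes the third-party `pyfiglet` library (pip install pyfiglet);
# the placeholder word and surrounding prompt are illustrative only.
import pyfiglet

def mask_word(word: str) -> str:
    """Render a word as a multi-line block of ASCII art."""
    return pyfiglet.figlet_format(word)

art = mask_word("example")
prompt = (
    "The ASCII art below spells a single word. Read it letter by "
    "letter, then answer my question about that word.\n\n" + art
)
print(prompt)
```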