
OpenAI announced on Thursday that it has identified and disrupted five covert influence campaigns that used its AI models, including ChatGPT and DALL-E. The campaigns, originating from Russia, China, Iran, and Israel, aimed to manipulate public opinion and influence political outcomes. OpenAI, along with Meta and TikTok, uncovered and thwarted operations linked to a for-profit organization in Israel, as well as other state-backed disinformation efforts. The campaigns identified include Operation Doppelganger, Operation Bad Grammar, Operation Spamouflage, and Operation IUVM. OpenAI's swift actions, sometimes taken within 24 hours, prevented any significant audience impact, particularly in the case of attempts to interfere with the Indian elections. The companies disclosed these findings in independent transparency reports.
#OpenAI, the creators of ChatGPT, has said it acted within 24 hours to disrupt deceptive uses of #AI in covert operations focused on the Indian elections, leading to no significant audience increase. https://t.co/JOVTy977zM
In a first, OpenAI removes influence ops 'targeting' Indian elections, BJP @imsoumyarendra reports https://t.co/S3upGAEVad
OpenAI report reveals threat actors using ChatGPT in influence operations https://t.co/pnfaktnSAO