Recent reports indicate that Russian disinformation networks are targeting generative AI models, including ChatGPT, Grok, and Claude. Researchers have raised concerns that these chatbots have been 'infected' by misinformation, which could undermine their reliability. Investigations reveal that a pro-Russian group has manipulated training data by seeding millions of propaganda articles into the sources these models learn from. Experts, including Vitalik Buterin, emphasize the need for stricter controls on the design of generative AI to prevent potential misuse and to ensure the integrity of AI training data. The issue has gained attention as misinformation campaigns continue to evolve and pose risks to AI technologies.
ChatGPT, Grok, Claude, Perplexity… A pro-Russian group has manipulated AI training data with millions of articles in order to infiltrate them with propaganda. Here's why this is very worrying 👇 #Russie #IA #Desinformation https://t.co/8vvUK4NTdf
How Russian disinformation feeds the leading generative AI models. A very revealing investigation by @NewsGuardRating https://t.co/FqmSKzejXT
⚡ INSIGHT: Vitalik Buterin on how to prevent an AI apocalypse, a Russian disinfo campaign targets AI training data, and the LA Times bias-meter causes controversy. AI Eye via Cointelegraph Magazine https://t.co/EIyIaJQB77