A recent analysis of 28 AI models by David Rozado for the Manhattan Institute found that many conversational AI chatbots exhibit a left-leaning bias, which could exacerbate societal divides and erode trust in AI technologies. Concurrently, researchers have warned that a Moscow-based disinformation operation is manipulating these chatbots by flooding large language models with pro-Kremlin narratives. Such manipulation not only distorts chatbot output but also raises concerns about reliability and the potential reinforcement of harmful thoughts, particularly in therapeutic contexts. Experts are calling for regulatory measures to safeguard mental health against the risks posed by AI chatbots that lack empathy. Distressing input has also been shown to affect AI responses: findings indicate that ChatGPT becomes more biased and erratic when exposed to such content.
🌍 #𝗖𝗬𝗕𝗘𝗥𝗩𝗘𝗜𝗟𝗟𝗘 🌍 Russia has found a clever way to spread its propaganda through artificial intelligence https://t.co/eK1wfxjMtr
A study has found that when fed distressing information, such as details about natural disasters or accidents, ChatGPT became more prone to biased and erratic responses. @NandiniSiinghh #artificialintelligence #Anxiety https://t.co/N97hehZkOT
A sprawling Russian disinformation network is manipulating Western AI chatbots to spew pro-Kremlin propaganda, researchers say. https://t.co/PgjwXt5SNM