Several news organizations are accusing an AI firm of appropriating their articles and generating misleading content, a recent investigation has revealed. AI assistants such as OpenAI's ChatGPT, Google Gemini, Perplexity, and Microsoft Copilot frequently produce inaccurate answers: a study conducted by the BBC found that more than half of the responses these tools gave to news questions contained significant issues, raising concerns about the reliability of AI technology in disseminating information. The BBC's findings show that one in five AI responses incorrectly reproduced dates, numbers, and factual statements that were erroneously attributed to BBC sources. Experts warn that inaccuracies from AI assistants can be amplified on social networks, potentially causing real harm by distorting the shared understanding of facts. The BBC's internal research also notes that audiences are more likely to trust AI-generated answers when they cite established brands like the BBC, even when those answers are incorrect.
AI News Summaries Contain Significant Errors More Than Half the Time, BBC Study Finds | CaLea Johnson, Mental Floss According to the BBC, AI tools like OpenAI, Google Gemini, Perplexity, and Microsoft Copilot shouldn’t be treated as accurate news sources. Artificial… https://t.co/tMr99tRWRu
"[A]s the BBC writes, 'we also know from previous internal research that when #AI assistants cite trusted brands like the #BBC as a source, audiences are more likely to trust the answer—even if it is incorrect" https://t.co/mzknxKMiOp #ethics #LLMs #internet #tech #misinformation
"Accuracy ended up being the biggest problem across all four #LLMs.... That includes one in five responses where the #AI response incorrectly reproduced 'dates, numbers, and factual statements' that were erroneously attributed to BBC sources." #ethics #misinformation #tech #news https://t.co/guY5aUUyml