
Google's AI Overviews feature has come under intense scrutiny following a series of bizarre and incorrect suggestions. The feature, which recently launched in the US, has gone viral on social media for recommending actions such as adding glue to pizza sauce to keep the cheese from sliding off and eating rocks for health benefits. Google's CEO has acknowledged that hallucinations are an inherent feature of large language models (LLMs) and that, while the team is working to address the issue, it remains unsolved. The AI-generated summaries have sparked widespread concern over misinformation, with Google's own researchers identifying AI as a significant vector of disinformation. Critics, including Cosmos Magazine and 404 Media, have highlighted the challenges Google faces in ensuring the accuracy of its AI tools and the potential implications of a global rollout, as discussed on 'Decrypted'.
Over the last few days, there have been a lot of people dunking on Google’s AI Overview results that have produced some comically wrong answers. I wrote about the challenges to search that Google faces, how LLMs democratize information search, and why the recent Google gaffes… https://t.co/PD5u4Rzbmt
Google research shows the fast rise of AI-generated misinformation https://t.co/uVrem73BKz https://t.co/pQIqAbzTLt
New from 404 Media: even Google's own researchers have found that AI is a top vector of disinformation https://t.co/qLRGWO5VF3