
Google's AI Overviews, launched last summer and powered by the Gemini language model, is generating false and sometimes dangerous information, a phenomenon known as 'hallucinations.' Notable errors include suggesting users add glue to pizza sauce and describing the fabricated idiom 'You can't lick a badger twice' as legitimate. Laurence O'Toole of Authoritas found that the introduction of AI Overviews has led to a 40% to 60% decline in click-through rates to publisher websites. Google CEO Sundar Pichai and Liz Reid, Google's head of search, have defended the tool, stating that most AI Overviews are accurate and helpful and that the breadth of sources users visit has increased. Google reports a hallucination rate of 0.7% to 1.3% for its Gemini models, while data from Hugging Face places the rate at 1.8%. Other generative AI models, such as OpenAI's o3 and o4-mini, have reported hallucination rates as high as 48% on certain tasks. The increasing complexity of generative AI systems has made their errors difficult to predict or explain, even for their creators.






AI continues to play both the hero and the villain in this new technological landscape, where the media, the hype, and the experts want to 'romanticize' it and present it as the technology capable of 'matching and surpassing human intelligence.' https://t.co/Wzp6IieGpv
Increasingly realistic artificial intelligences go viral on the web. Although the technology entertains internet users with its creations, the videos can end up confusing people because of their hyper-realism. Read: https://t.co/QyZG2aaW3P https://t.co/eVoNKGMwzl
#Technology | The battle between Reddit and Anthropic is intensifying! Does this sound like a tech thriller? AI also has its dark side... https://t.co/N8CGWv75yW