🤖 AI is still both the hero and the villain in this new technological landscape, where the media, the hype, and the experts want to romanticize it and present it as the technology capable of 'matching and surpassing human intelligence'. https://t.co/Wzp6IieGpv
➡️ Increasingly realistic artificial intelligences are going viral on the web. Although the technology entertains internet users with its creations, the videos can end up confusing people because of their hyper-realism. Read: https://t.co/QyZG2aaW3P https://t.co/eVoNKGMwzl
#Tecnología | 💥 The battle between Reddit and Anthropic is heating up! Does this sound like a tech thriller? 🤖🔍 AI has its dark side too... https://t.co/N8CGWv75yW
Google's AI Overviews, launched last summer and powered by the Gemini language model, has been generating false and sometimes dangerous information, a phenomenon known as 'hallucinations.' Notable errors include suggesting users add glue to pizza sauce and describing the fabricated idiom 'You can’t lick a badger twice' as legitimate. Laurence O’Toole of Authoritas found that the introduction of AI Overviews has led to a 40% to 60% decline in click-through rates to publisher websites. Google CEO Sundar Pichai and Liz Reid, Google's head of search, have defended the tool, stating that most AI Overviews are accurate and helpful and that the breadth of sources users visit has increased. Google reports a hallucination rate of 0.7% to 1.3% for its Gemini models, while data from Hugging Face places the rate at 1.8%. Other generative AI models, such as OpenAI's o3 and o4-mini, have shown hallucination rates as high as 48% on certain tasks. The increasing complexity of generative AI systems has made their errors difficult to predict or explain, even for their creators.
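Figures like the 0.7%–1.3% and 1.8% rates above generally come from benchmark-style evaluations: a model answers a fixed set of prompts, each response is judged as grounded or hallucinated, and the rate is the fraction judged hallucinated. The sketch below illustrates only that arithmetic; the data structure, labels, and numbers are hypothetical and do not reflect Google's or Hugging Face's actual methodology.

```python
from dataclasses import dataclass

@dataclass
class EvalRecord:
    prompt: str          # the question or task given to the model
    response: str        # the model's output
    hallucinated: bool   # judged label: does the response contradict or invent facts?

def hallucination_rate(records: list[EvalRecord]) -> float:
    """Fraction of responses judged hallucinated, as a percentage."""
    if not records:
        return 0.0
    return 100.0 * sum(r.hallucinated for r in records) / len(records)

# Toy example: 2 hallucinated responses out of 150 evaluated -> ~1.3%,
# in the same ballpark as the Gemini figures reported above.
records = [EvalRecord("q", "a", hallucinated=False) for _ in range(148)]
records += [EvalRecord("q", "a", hallucinated=True) for _ in range(2)]
print(f"{hallucination_rate(records):.1f}%")  # prints "1.3%"
```

Note that such rates depend heavily on the task mix and on how the grounded/hallucinated judgment is made, which is one reason reported numbers for the same family of models can range from under 2% to as high as 48%.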