A 2024 report commissioned by the U.S. Department of State has highlighted the possibility of an "extinction-level" risk posed by the rapid development of artificial intelligence. Industry leaders and experts have expressed concerns about AI's potential existential threat to humanity. Geoffrey Hinton warned of a significant existential risk, estimating a 50% chance that AI could surpass human intelligence within 5 to 20 years, potentially leading to humanity losing control over these systems. Elon Musk predicted that by 2029 AI might be smarter than all humans combined, and described AI as the only hope amid demographic challenges in the U.S. James Cameron also voiced fears of a Terminator-style apocalypse resulting from AI. Meanwhile, Anthropic CEO Dario Amodei echoed concerns about AI threatening humanity but noted that his company has discovered a new method to prevent AI from turning malevolent. Additionally, a former Google executive forecast a 15-year dystopian era, beginning in 2027, driven by AI advancements.
Elon Musk, CEO of Tesla: "This country will lose almost a million people this year, and AI is the only hope" https://t.co/DXaKX7JJnu
🤖 James Cameron's concern about AI: "There is a danger of a Terminator-style apocalypse." Learn more 👇 https://t.co/gpGllshfXT
Anthropic says they’ve found a new way to stop AI from turning evil https://t.co/Szk1Leug8E