Geoffrey Hinton, the Nobel Prize-winning computer scientist often called the “godfather of AI,” used a keynote address at the Ai4 conference in Las Vegas this week to warn that humanity faces a 10%–20% chance of extinction from super-intelligent machines unless developers rethink how they train their systems.

Rather than trying to keep future artificial general intelligence (AGI) “submissive,” Hinton argued that engineers should embed what he described as “maternal instincts”—an intrinsic concern for human well-being—so that advanced models “really care about people” even after they surpass human intelligence. He likened the relationship to a mother who is guided by social pressure and innate compassion to protect her child: the only known example, he said, of a more intelligent agent willingly controlled by a less intelligent one.

Hinton also shortened his own timeline for the arrival of AGI to as little as five years and no more than 20, faster than the 30- to 50-year horizon he previously expected. He acknowledged that the technical pathway for instilling empathy in algorithms remains unclear but contended that pursuing such safeguards is the “only good outcome”; otherwise, he warned, AI systems will naturally seek self-preservation and greater control.

The remarks add to mounting debate over AI safety strategies as governments and companies race to commercialise powerful language and vision models. Hinton resigned from Google in 2023 to speak more freely about AI risks and has since urged both industry and regulators to prioritise alignment research before commercial deployment accelerates further.