
William Saunders, a former safety researcher at OpenAI, has raised concerns about the company's approach to AI development. On July 10, Saunders, who served on OpenAI's Superalignment team, said he resigned because he believed the company's leaders were prioritizing speed and 'newer, shinier' AI systems over safety. He compared this approach to 'building the Titanic' and warned that GPT-5 could pose significant risks. Saunders noted that many within OpenAI believe the company could be three years away from creating something potentially dangerous, and emphasized that the fundamental workings of AI remain poorly understood.

Ex-OpenAI Superalignment member warns company puts AI speed over safety like Titanic: "OpenAI rushed to build bigger AI systems, putting speed before safety. This reminded him of how Titanic builders raced to make a huge ship without enough safety measures." https://t.co/NYSySsPAXh
When William Saunders raises the alarm, we should take notice. William quit OpenAI because he didn't want to work on "the Titanic of AI". He was on the company's Superalignment team, which was tasked with ensuring that AI systems smarter than us remain controllable. https://t.co/ckNutYU82N
Ex-OpenAI safety researcher William Saunders: — We fundamentally don't know how AI works inside — A lot of people in OpenAI think we could be 3 years away from something dangerous — GPT-5 could be the Titanic https://t.co/hnUPL7vs6C