
OpenAI is reportedly considering spending $100 billion on a future large language model (LLM) supercomputer with a power budget of several gigawatts. The company is betting that this scale of compute will deliver major advances in AI, despite concerns about both the environmental impact and the effectiveness of LLMs. Critics argue that an investment of this size may prove wasteful and fail to deliver the desired outcomes, and industry leaders are being challenged to reevaluate whether current AI techniques are actually on a path to fully autonomous artificial general intelligence (AGI).

"Very few companies have likely spent $100m on a single big training run...by comparison, there are many companies that have spent more than $10m...the EU will end up needing to regulate far more companies/AI systems than it anticipated". https://t.co/NC8v7bGUhm
Spending $100B on compute for transformers is dumb and wasteful. Reveals how vulnerable the incumbents are to architectural (algorithmic and silicon) disruption.
If it took $100B to convince most industry leaders that current technology is not up to fully autonomous driving, how much will it take to convince them the current AI techniques are not on track to be fully autonomous AGI? $200B? $7000B?