Elon Musk's AI company xAI has set an ambitious goal of deploying 50 million H100-equivalent units of AI compute within five years, with an emphasis on superior power efficiency. The target far exceeds the published plans of competitors such as OpenAI, which aims to bring more than 1 million GPUs online by the end of 2025. xAI currently operates approximately 230,000 GPUs in its Colossus 1 supercluster, including 30,000 Nvidia GB200 GPUs, and plans to add 550,000 GB200 and GB300 GPUs in the coming weeks as part of Colossus 2. Analysts estimate that reaching 50 million H100 equivalents would require around 4 million GPUs in total and an energy footprint of approximately 11 gigawatts (see the back-of-envelope sketch below).

Supermicro has responded to xAI's goal by offering liquid-cooled systems that support up to 2,048 Nvidia Blackwell GPUs each, with a deployment timeline of three months.

Separately, Tesla, also led by Musk, has expanded its AI training compute at Gigafactory Texas with an additional 16,000 H200 GPUs, bringing its Cortex system to 67,000 H100 equivalents. Tesla expects its Dojo 2 AI training system to operate at scale in 2026, targeting around 100,000 H100 equivalents. Musk also indicated that Tesla's Robotaxi business could have a material impact on the company's financials by the end of 2026.
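As a rough sanity check on the analyst estimate cited above, the following minimal Python sketch derives the per-GPU figures those numbers imply. The only inputs are the article's own figures (50 million H100 equivalents, roughly 4 million GPUs, roughly 11 gigawatts) plus the H100 SXM board power of about 700 W for comparison; the interpretation that the 11 GW figure covers all-in facility power is an assumption, not something stated in the article.

```python
# Back-of-envelope check of the analyst estimate cited in the article.
# Assumption: the ~11 GW "energy footprint" is all-in facility power
# (GPU boards plus cooling, networking, and other overhead).

TARGET_H100_EQUIVALENTS = 50_000_000   # xAI's stated five-year goal
ESTIMATED_TOTAL_GPUS = 4_000_000       # analyst estimate from the article
ESTIMATED_POWER_GW = 11                # analyst estimate from the article
H100_BOARD_POWER_W = 700               # TDP of an H100 SXM, for comparison

# Implied performance per deployed GPU, in H100 equivalents.
h100_eq_per_gpu = TARGET_H100_EQUIVALENTS / ESTIMATED_TOTAL_GPUS      # 12.5

# Implied all-in power per deployed GPU, in kilowatts.
kw_per_gpu = ESTIMATED_POWER_GW * 1_000_000 / ESTIMATED_TOTAL_GPUS    # 2.75 kW

# A naive fleet of actual 700 W H100s delivering the same compute would
# draw roughly 35 GW before any facility overhead, which illustrates why
# the plan leans on more efficient future accelerators.
naive_h100_fleet_gw = TARGET_H100_EQUIVALENTS * H100_BOARD_POWER_W / 1e9

print(f"Implied H100 equivalents per GPU: {h100_eq_per_gpu:.1f}")
print(f"Implied all-in power per GPU:     {kw_per_gpu:.2f} kW")
print(f"Naive all-H100 fleet power:       {naive_h100_fleet_gw:.0f} GW")
```

The implied ratios (about 12.5 H100 equivalents and 2.75 kW per deployed GPU) are simply what the analyst's 4-million-GPU and 11 GW figures work out to; they are not independent estimates.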