Recent discussions around the development of generative AI have highlighted a significant demand for advanced computing resources. A McKinsey report suggests that about 75 percent of the value generative AI use cases could deliver falls across four areas: customer operations, marketing and sales, software engineering, and R&D, involving companies like $NVDA, $HUBS, $ADBE, $CRM, $KVYO, and $BRZE. In parallel, Elon Musk has indicated that his AI company's next project, Grok 3, which is anticipated to be on par with GPT-5, will require an unprecedented amount of computing power: 100,000 Nvidia H100 GPUs. This marks a more than fivefold increase over its predecessor, Grok 2, which used about 20,000 Nvidia H100 GPUs. The escalation in resource demands underscores the growing complexity and potential of AI technologies.
Oh, just a casual 100,000 @nvidia H100 #AI GPUs for #Grok3 https://t.co/X4fVjtNZZP
Grok 3 and beyond will require 100,000 Nvidia H100 chips. It's more than 5x the amount of compute needed compared to Grok 2. $NVDA https://t.co/g76FyuTnp6