OpenAI’s newly released GPT-5 large-language model, along with the company’s open-weight gpt-oss-120b and gpt-oss-20b models, was trained on NVIDIA’s Hopper-generation H100 and H200 graphics processors and is being served on systems such as the new GB200 NVL72 rack-scale artificial-intelligence server, according to information shared by NVIDIA’s data-center division. The GB200 NVL72 clusters 72 Blackwell-architecture GPUs and 36 Grace server CPUs in a single rack, interconnected by NVIDIA’s high-speed NVLink and NVLink Switch fabrics. The configuration is designed to accelerate trillion-parameter models, offering what NVIDIA says is a significant jump in throughput and energy efficiency over earlier Hopper-based systems. Amazon Web Services plans to make similar multi-GPU configurations available through its SageMaker HyperPod service, giving customers access to the same class of infrastructure used to serve GPT-5. NVIDIA says its CUDA software stack has been downloaded more than 450 million times to date, underscoring the company’s dominance in the market for AI computing silicon.
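For readers curious how a multi-GPU node like those in a GB200 NVL72 rack presents itself to application code, the sketch below is a minimal illustration, assuming a PyTorch environment on an NVLink-connected machine. The function names come from the public torch.cuda API; nothing here reflects details NVIDIA or OpenAI have shared about their own software.

```python
# Minimal sketch: enumerate the GPUs a multi-GPU node exposes and check
# peer-to-peer reachability between device pairs. Assumes a PyTorch
# install with CUDA support; on an NVLink-connected node, each local
# GPU would be reported here.
import torch

def survey_gpus() -> None:
    if not torch.cuda.is_available():
        print("No CUDA devices visible.")
        return
    n = torch.cuda.device_count()
    for i in range(n):
        props = torch.cuda.get_device_properties(i)
        print(f"GPU {i}: {props.name}, "
              f"{props.total_memory / 2**30:.0f} GiB memory")
    # Peer access indicates a direct GPU-to-GPU path (e.g. over NVLink),
    # the kind of link that lets frameworks shard very large models.
    for i in range(n):
        for j in range(n):
            if i != j and torch.cuda.can_device_access_peer(i, j):
                print(f"GPU {i} can directly access GPU {j}")

if __name__ == "__main__":
    survey_gpus()
```

On a single-GPU workstation this prints one device and no peer links; on a multi-GPU NVLink node, the peer-access loop is what distinguishes directly connected devices from ones that must route traffic through the host.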
"GPT-5, gpt-oss-120b and gpt-oss-20b were trained on NVIDIA H100 and H200 GPUs and served on systems like NVIDIA GB200 NVL72." And there are over 450 million NVIDIA CUDA downloads to date. AI really runs on NVIDIA @NVIDIAAIDev @NVIDIADC https://t.co/c64kI5uYvE
🏇 Harness the power of 72 cutting-edge NVIDIA Blackwell GPUs in a single system...😎 🚄 Train and deploy AI models at trillion-parameter scale with Amazon SageMaker HyperPod support for P6e-GB200 UltraServers...🚀 #TheDigitalCoach #NVIDIABlackwell #SageMaker https://t.co/Q3zDO2DXOt
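As a rough illustration of how a customer might request HyperPod capacity of this kind, the sketch below uses the boto3 SageMaker client's create_cluster call, which provisions HyperPod clusters. The cluster name, role ARN, lifecycle-script S3 URI, and especially the P6e-GB200 instance-type string are placeholders assumed for the example, not values confirmed by the posts above.

```python
# Hypothetical sketch: provisioning a SageMaker HyperPod cluster via boto3.
# create_cluster is a real SageMaker API, but every value below -- the
# cluster name, role ARN, lifecycle-script S3 URI, and the P6e-GB200
# instance-type string -- is a placeholder, not confirmed by the source.
import boto3

sagemaker = boto3.client("sagemaker", region_name="us-east-1")

response = sagemaker.create_cluster(
    ClusterName="gb200-training-cluster",  # placeholder name
    InstanceGroups=[
        {
            "InstanceGroupName": "gpu-workers",
            # Assumed instance-type string for a P6e-GB200 UltraServer;
            # check the AWS documentation for the actual identifier.
            "InstanceType": "ml.p6e-gb200.36xlarge",
            "InstanceCount": 2,
            "LifeCycleConfig": {
                "SourceS3Uri": "s3://my-bucket/lifecycle/",  # placeholder
                "OnCreate": "on_create.sh",
            },
            "ExecutionRole": "arn:aws:iam::123456789012:role/HyperPodRole",
        }
    ],
)
print(response["ClusterArn"])
```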
GPT-5 was trained on H100 and H200 chips. Imagine what we'll get when we have them trained on GB200 and GB300 chips. 😀 https://t.co/k9Pga2OYGQ https://t.co/exI9afl4HJ