Build AI faster with @LambdaAPI on Lightning's end-to-end AI development platform. ✅ Access NVIDIA A100, H100 ✅ Ultra-fast InfiniBand networking ✅ 10 US data centers across 7 states ✅ 5 data centers across Asia, the Middle East, and Europe No rewrites. No lock-in. No DevOps. https://t.co/naGyLIBq2m
$NVDA - SMCI Lambda Builds AI Factories with Supermicro NVIDIA Blackwell GPU Server Clusters to Deliver Production-ready Next-Gen AI Infrastructure at Scale
Lambda Partners With Supermicro and Nvidia to Create Large-Scale AI Server Clusters Using Blackwell GPUs for Next-Generation AI Infrastructure 🚀🖥️🤖 Countries: US 🇺🇸
Super Micro Computer Inc. said Monday that cloud-infrastructure provider Lambda has begun deploying a wide range of Supermicro GPU-optimized servers powered by Nvidia’s new Blackwell architecture, expanding what the companies call "AI factories" designed for large-scale training and inference workloads. The rollout, which started in June at Cologix’s COL4 Scalelogix data center in Columbus, Ohio, gives customers in the US Midwest access to enterprise-grade compute aimed at speeding development of generative-AI models. Lambda plans to offer similar capacity to AI labs, corporations and hyperscalers worldwide. Lambda is using Supermicro systems such as the SYS-A21GE-NBRT with Nvidia HGX B200 GPUs, as well as Supermicro’s AI SuperCluster racks featuring Nvidia GB200 and GB300 accelerators. The liquid-cooled systems are intended to curb power and cooling costs while providing what Lambda describes as gigawatt-scale computing on demand.