Nvidia Corp. has introduced Spectrum-XGS Ethernet, a "scale-across" networking technology designed to interconnect multiple, geographically dispersed data centers so they can operate as a single, unified AI infrastructure. The new switches extend the company's existing Spectrum-X platform beyond individual facilities, aiming to reduce latency and jitter, two factors that degrade effective bandwidth and slow large artificial-intelligence workloads. Cloud-services provider CoreWeave will be an early adopter, deploying Spectrum-XGS to build what Nvidia calls giga-scale AI super-factories. By linking clusters across sites, the system is expected to streamline the movement of massive data sets and improve inference performance for demanding models. Nvidia positions scale-across as a new layer of its AI-networking roadmap, complementing NVLink-based scale-up within racks and Spectrum-X scale-out within individual data centers.