Super Micro Computer Inc. used its Open Storage Summit on 13 August to spotlight how rapidly expanding “agentic” artificial-intelligence workloads are reshaping data-centre storage architectures. Executives from AMD, DataDirect Networks and Western Digital said real-time inference now requires high-speed, low-latency data fabrics that span multiple sites and clouds, turning virtually all data into “hot” data that must be ready for retrieval. Kevin Kang, senior manager at AMD, said the chipmaker is integrating its EPYC CPUs, GPUs and SmartNICs to move data directly into GPU memory and cut latency across AI pipelines. Balaji Venkateshwaran of DDN added that replacing AWS object storage with the company’s Infinia system cut latency enough to accelerate an AI application by a factor of 22, underscoring the payoff from targeted storage upgrades. Panellists warned that power consumption is becoming a gating factor: GPUs draw roughly ten times more electricity than CPUs, leaving less of the data-centre power budget for storage unless efficiency improves. Liquid cooling, advances in solid-state drives and closer hardware-software co-design were cited as priorities for keeping GPUs fed with data while containing energy use. The speakers said partnerships across vendors are essential to deliver end-to-end systems that can scale with surging AI demand.
Generally Intelligent is back, and now in video! Our first guest is our very own @mattboulos, who leads policy at Imbue. He and @kanjun discuss:
- how AI shifts power
- lawless digital spaces
- bottom-up vs. top-down automation
and more. Full episode and transcript in thread: https://t.co/b3FcwrbeP3
Supermicro and its partners tackle storage challenges for next-gen AI workloads https://t.co/mYVYg1vVLm
AIRING NOW! 🚨 Tune in for @theCUBEresearch’s AppDev Done Right Summit encore presentation, with guests from @rafaysystemsinc, @heroku, @Tintri and more, exploring how to build, scale & secure modern apps without the headache. 📺 Watch it here on X! https://t.co/IB9ruoTTtf