Huawei Technologies has introduced a software tool, Unified Cache Manager (UCM), that reallocates data across multiple memory tiers to accelerate inference in large-scale artificial-intelligence models. Presented at the Financial AI Reasoning Application Landing and Development Forum in Shanghai, the tool cut inference latency by up to 90 percent and lifted system throughput as much as 22-fold in internal tests, according to Zhou Yuefeng, vice-president of Huawei's data-storage product line.

By improving the efficiency of commodity DRAM and solid-state drives, UCM reduces the dependence of AI systems on costly high-bandwidth memory chips, a market dominated by South Korea's SK Hynix and Samsung Electronics and U.S. supplier Micron Technology. Huawei plans to open-source the code in September, first to its developer community and subsequently to the wider industry.

The announcement comes as Beijing seeks to curb reliance on imported components amid tightening U.S. export controls. Domestic memory maker CXMT is reportedly preparing to supply HBM3 chips to Huawei, although the companies have not confirmed the move. Together with software optimisations such as UCM, Chinese vendors aim to keep pace in the global race for AI capability while Washington presses ahead with its own AI Action Plan to maintain U.S. leadership.