
Meta has introduced two new 24K-GPU clusters to support current and future AI models, including Llama 3. The clusters, already in use for Llama 3 training, are part of Meta's infrastructure roadmap targeting 350,000 NVIDIA H100 GPUs by the end of 2024, underscoring the company's continued investment in AI research infrastructure.

Progress in Algorithmic Performance of LLMs Has Been Phenomenal! The compute required to reach a set performance threshold has halved approximately every 8 months over the last few years, substantially faster than hardware gains per Moore's Law. It will get cheaper over…
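
To make that trend concrete, here is a minimal back-of-the-envelope sketch in Python. It takes the 8-month algorithmic halving time quoted above at face value and pairs it with an assumed 24-month halving of hardware cost per unit of compute as a rough stand-in for Moore's Law; the 24-month figure and the time horizons are illustrative assumptions, not from the tweet.

# Back-of-the-envelope reading of the trend above: algorithmic efficiency
# halves the compute needed for a fixed capability roughly every 8 months
# (the figure quoted in the tweet), while hardware cost per unit of compute
# halves roughly every 24 months (an illustrative Moore's Law-style assumption).

ALGO_HALVING_MONTHS = 8    # from the quoted claim
HW_HALVING_MONTHS = 24     # illustrative assumption, not from the tweet


def relative_cost(months: float) -> float:
    """Cost to reach the same performance threshold, relative to today (1.0)."""
    less_compute_needed = 0.5 ** (months / ALGO_HALVING_MONTHS)
    cheaper_compute = 0.5 ** (months / HW_HALVING_MONTHS)
    return less_compute_needed * cheaper_compute


if __name__ == "__main__":
    for months in (8, 16, 24, 36):
        print(f"after {months:2d} months: ~{relative_cost(months):.2f}x today's cost")

Under these assumptions, hitting the same threshold costs roughly 40% as much after 8 months and only a few percent after three years, which is the sense in which "it will get cheaper" compounds well beyond hardware improvements alone.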
.@Meta intros two GPU training clusters for Llama 3 https://t.co/uJ7EJhmnH3 @TechTarget Esther Ajao "Meta wants to provide you with the tools so you can use AI. They want to be able to use the output of all of that." - @rwang0
Meta Unveils Details of Two New 24K GPU AI Clusters #24000GPUclusters #AI #AIResearchSuperCluster #artificialintelligence #CloudServices #E1SSSDs #GrandTeton #infrastructureexpansion #Llama3 #llm #machinelearning #meta #NvidiaQuantum2InfiniBand https://t.co/Hldpx9EFGX https://t.co/DfTYUnuuhN