ByteDance has introduced Seed1.5-VL, a vision-language model pairing a 532 million parameter vision encoder with a Mixture-of-Experts (MoE) large language model that activates 20 billion parameters per token. Seed1.5-VL achieves state-of-the-art results on 38 of 60 public vision-language benchmarks, outperforming OpenAI's CUA and Anthropic's Claude 3.7 on tasks such as GUI control and gameplay, and it is designed to be efficient enough to run on laptops, exemplifying advances in compact yet powerful AI architectures.

Concurrently, the Qwen3 model family has been released, with the largest model, Qwen3 235B-A22B, scoring 62 on the Artificial Analysis Intelligence Index, making it the most intelligent open-weight model to date. The Qwen3 series excels at reasoning, coding, and multilingual tasks with an efficient MoE design, but it currently lacks vision support.

Additionally, the startup DeepSeek, led by founder Liang Wenfeng, has gained attention for its innovative approach to AI development despite U.S. restrictions on Chinese AI technology. DeepSeek's recent paper on scaling challenges and hardware architecture reveals that DeepSeek-V3 was trained on 2,048 H800 GPUs using FP8 precision with less than 0.25% accuracy loss, at a training compute cost of roughly 250 GFLOPs per token, far below the roughly 2.45 TFLOPs per token required for a dense 405 billion parameter model. The company also introduced Multi-head Latent Attention (MLA), which shrinks the key-value (KV) cache to 70 KB per token, about one seventh the size of LLaMA-3.1's cache. These developments highlight ongoing advances in model efficiency and hardware optimization across both Chinese and global AI research communities.
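Those per-token compute figures line up with the standard 6N rule of thumb (roughly 6 FLOPs per parameter per training token, covering the forward and backward passes). A minimal back-of-envelope sketch, assuming ~37 billion active parameters for DeepSeek-V3 (a publicly reported figure, not stated in this digest) and treating the 6N approximation as the whole cost:

```python
# Back-of-envelope training compute per token via the common 6N
# approximation (~6 FLOPs per parameter per token, forward + backward).
# Parameter counts below are assumptions from public model reports,
# not figures taken from the DeepSeek paper itself.

def train_flops_per_token(active_params: float) -> float:
    """Approximate training FLOPs per token for a model with the
    given number of *active* parameters."""
    return 6 * active_params

deepseek_v3_active = 37e9   # assumed: ~37B active params per token (MoE)
dense_405b = 405e9          # a dense model activates all 405B params

print(f"DeepSeek-V3: ~{train_flops_per_token(deepseek_v3_active) / 1e9:.0f} GFLOPs/token")
print(f"Dense 405B:  ~{train_flops_per_token(dense_405b) / 1e12:.2f} TFLOPs/token")
# Prints ~222 GFLOPs/token and ~2.43 TFLOPs/token, close to the quoted
# 250 GFLOPs and 2.45 TFLOPs; the small gap is overhead the 6N rule
# ignores, such as attention computation.
```

The MoE design is what drives the roughly 10x gap: only the routed experts' parameters participate for each token, so compute scales with active rather than total parameters.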
DeepSeek just dropped a new paper on scaling challenges and hardware architectures for AI training and inference. Wenfeng Liang is on the author list, btw https://t.co/5lQWYRXPSq
> DeepSeek-V3 was trained on just 2048 H800 GPUs
> FP8 training with <0.25% accuracy loss
> Training cost per token: 250 GFLOPs versus 2.45 TFLOPs for a dense 405B model
> Multi-head Latent Attention shrinks the KV cache to 70 KB per token; LLaMA-3.1's cache is 7x larger
https://t.co/xXJZkMT2Hd
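The 7x KV-cache gap is easy to sanity-check. A rough sketch, assuming LLaMA-3.1 405B's published configuration (126 layers, 8 KV heads, head dimension 128, a figure not in this digest) and bf16 (2-byte) cache entries; each token stores keys and values for every layer:

```python
# Rough per-token KV-cache size for a grouped-query-attention (GQA)
# transformer, for comparison with MLA's reported 70 KB/token.
# Config values are assumptions based on LLaMA-3.1 405B's published
# architecture, not taken from the DeepSeek paper.

def kv_cache_bytes_per_token(layers: int, kv_heads: int,
                             head_dim: int, bytes_per_elem: int = 2) -> int:
    """Bytes of KV cache one token occupies: 2 (keys + values)
    x layers x kv_heads x head_dim x bytes per element."""
    return 2 * layers * kv_heads * head_dim * bytes_per_elem

llama_405b = kv_cache_bytes_per_token(layers=126, kv_heads=8, head_dim=128)
print(f"LLaMA-3.1 405B (GQA, bf16): ~{llama_405b / 1024:.0f} KB/token")
# Prints ~504 KB/token, roughly 7x the 70 KB/token reported for MLA,
# which caches one compressed low-rank latent per token instead of
# full per-head keys and values.
```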
This paper creates a dataset tracking key trends in performance, cost, power, and ownership from 2019 to 2025.
→ They estimated power and hardware cost based on chip data when not reported.
→ Analysis focused on top-10 system trends and aggregate distribution by sector and https://t.co/YUGCTyNjqv