Xiaomi Corp. has released MiDashengLM-7B, an open-source voice model designed to power its electric vehicles and smart-home appliances. The seven-billion-parameter system, published under an Apache 2.0 license, builds on Xiaomi's in-house speech technology and Alibaba's Qwen2.5-Omni-7B. The company said the software is already deployed across its car dashboards and Internet-of-Things ecosystem, offering wake-word activation, natural-language commands, and on-device inference. By opening the code and training data, Xiaomi aims to attract outside developers and reduce reliance on proprietary U.S. platforms.

Internal benchmarks cited by the Beijing-based firm show MiDashengLM-7B matching or surpassing commercial rivals on Chinese and English voice tasks while running on consumer-grade GPUs. The move highlights intensifying competition among Chinese tech companies to push AI capabilities onto edge devices. Hours after Xiaomi's announcement, Tencent's HunYuan lab unveiled four compact language models ranging from 0.5 billion to 7 billion parameters, each capable of handling 256,000-token contexts on a single GPU. Analysts say the rapid stream of open-source releases could accelerate adoption of domestic AI tools across the nation's auto and appliance sectors.
WOW 🤯 🇨🇳 Another open model from a Chinese AI lab outperforms closed ones! XBai o4 beats OpenAI o3-mini and confidently surpasses Anthropic's Claude Opus. Apache 2.0 license and available on @huggingface https://t.co/eZpWEFUali
Xiaomi releases MiDashengLM-7B, an AI voice model under an Apache 2.0 license based on its foundational voice model, deployed in cars and smart home devices (@gaoyuan86 / Bloomberg) https://t.co/LtUJcmqhkC https://t.co/rEsxDD73Qf https://t.co/ZOzeer1FAj
Hunyuan just open-sourced 4 compact LLMs (0.5B, 1.8B, 4B, 7B) that run a full 256K-token context on a single consumer GPU, bringing long-form reasoning to phones, cars, and laptops. Each model ships with a "fast-thinking" mode for quick replies and a "slow-thinking" pass for https://t.co/Xynj2dBIwD https://t.co/5IvGT0Cun4