Chinese AI developer Rednote, the company behind the social media platform Xiaohongshu, has released dots.llm1, an open-source large-scale Mixture of Experts (MoE) language model. The model activates 14 billion parameters out of a total of 142 billion and was pretrained on 11.2 trillion high-quality, non-synthetic tokens. Dots.llm1 delivers performance comparable to Alibaba's Qwen2.5-72B and is distributed under an MIT license, with intermediate checkpoints released for every trillion tokens of training. The release underscores the strong price-performance ratio of Chinese AI models. Separately, DMind has launched DMind-1, an open-source large language model built on Alibaba's Qwen3-32B and optimized for Web3 applications, which it says offers superior benchmark performance at 10% of the cost. Its smaller variant, DMind-1-mini (14B parameters), ranks second, retaining roughly 95% of the larger model's capability while running more efficiently.
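To make the sparse-activation figure concrete (only 14B of the 142B parameters are exercised per token), below is a minimal, illustrative sketch of top-k Mixture-of-Experts routing. The expert count, hidden sizes, and top-k value are toy assumptions for demonstration, not dots.llm1's published configuration.

```python
# Toy sketch of top-k MoE routing: each token is sent to only a few expert
# MLPs, so most parameters stay idle for any given token. All sizes here are
# illustrative assumptions, not the dots.llm1 architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoELayer(nn.Module):
    def __init__(self, d_model=64, d_ff=128, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, num_experts, bias=False)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        ])

    def forward(self, x):  # x: (tokens, d_model)
        scores = F.softmax(self.router(x), dim=-1)       # routing probabilities
        weights, idx = scores.topk(self.top_k, dim=-1)   # pick top-k experts per token
        weights = weights / weights.sum(dim=-1, keepdim=True)
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e in range(len(self.experts)):
                mask = idx[:, k] == e                    # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, k:k + 1] * self.experts[e](x[mask])
        return out

tokens = torch.randn(4, 64)
print(ToyMoELayer()(tokens).shape)  # only 2 of the 8 expert MLPs run per token
```

In a model like dots.llm1 the same principle applies at much larger scale: the router selects a small subset of experts per token, which is why the active parameter count (14B) is roughly a tenth of the total (142B).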
China's Xiaohongshu (Rednote) released its dots.llm1 open-source AI model. Type: an MoE model with 14B activated and 142B total parameters, trained on 11.2T tokens. No synthetic data during pretraining. Training stages: pretraining and SFT. Architecture: Multi-head Attention. https://t.co/OsDBKxnmHP
It's hard to say whether Rednote or Zhihu holds the most valuable Chinese-language social media content, but this dots.llm1.inst from Rednote looks quite good:
- 142B MoE, 14B active parameters
- Trained on 11.2T tokens
- Base model released
- MIT license
- 32k context length
https://t.co/lUNjpmDSl5
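Since both instruct and base checkpoints are published as open weights under an MIT license, loading them should follow the standard Hugging Face transformers pattern. The sketch below is a hedged example: the repo id "rednote-hilab/dots.llm1.inst" and the need for trust_remote_code are assumptions to be verified against the official model card.

```python
# Hedged sketch of loading the released checkpoint with Hugging Face
# transformers. The repo id below is an assumption; check the official
# model card for the exact name and recommended loading options.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "rednote-hilab/dots.llm1.inst"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",      # keep the checkpoint's native precision
    device_map="auto",       # shard the 142B-parameter model across available GPUs
    trust_remote_code=True,  # the repo may ship custom modeling code
)

messages = [{"role": "user", "content": "Summarize what a Mixture-of-Experts model is."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```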
China's Rednote open-source dots.llm1: performance and cost. When it comes to price-performance ratio, China is a top performer. https://t.co/DyDAz4aiDo