Chinese artificial-intelligence startup DeepSeek has rolled out DeepSeek-V3.1, an upgraded large language model that the company says delivers faster reasoning and stronger agent capabilities than its earlier V3 and R1 systems. The launch began quietly on 19 August and was formally confirmed in a WeChat statement on 21 August.

The open-source model houses roughly 685 billion parameters and can process up to 128,000 tokens of context, enough to ingest a 300-page document in one pass. A new hybrid architecture lets users toggle between a high-speed “Non-Think” mode and a more deliberative “Think” mode, which together lifted performance on key tests: the system scored 71.6% on the Aider coding benchmark and improved SWE-Bench accuracy to 66% from 44.6%. DeepSeek said V3.1 supports multiple precision formats, including a UE8M0 FP8 setting optimised for forthcoming domestic chips, part of Beijing’s broader push to reduce reliance on US hardware.

The company has made the weights available on Hugging Face under an open licence, while simultaneously routing all web, app and API traffic to the new version. Commercial terms are changing alongside the technology: DeepSeek will raise API prices and scrap off-peak discounts from 6 September 2025. The release also keeps the Hangzhou-based firm in the spotlight while its long-delayed R2 flagship remains stalled amid reported difficulties adapting Huawei Ascend processors.
DeepSeek-V3.1 is now available in anycoder.
Hybrid inference: Think & Non-Think — one model, two modes.
Stronger agent skills: post-training boosts tool use and multi-step agent tasks.
https://t.co/gv4FEdJMuY
The focus on domestic chip compatibility may signal that DeepSeek's AI models are being positioned to work with China's emerging semiconductor ecosystem https://t.co/A4bMh9tSvE
DeepSeek's R2 model launch delayed due to Huawei Ascend chip issues. Chinese AI faces tech hurdles adapting to domestic hardware. 👉 https://t.co/ANECPISFUV #DeepSeekAI #HuaweiChips #AIDevelopment #ChinaTech