Chinese artificial-intelligence startup DeepSeek on 19 August released version 3.1 of its flagship large language model, the company's first update to the V3 line since January. The model is already available for download on Hugging Face and through DeepSeek's website, using the same API endpoints as earlier versions. DeepSeek V3.1 expands the context window to 128,000 tokens, enough to process roughly a 300-page book in a single prompt, and increases model size to 685 billion parameters while retaining a Mixture-of-Experts architecture. The company did not publish a formal technical paper or model card but said the upgrade is live across its web, mobile and WeChat products.

Early third-party tests show the model scoring 71.6 percent on the Aider coding benchmark, edging out Anthropic's Claude 4 Opus while costing roughly US$1 per full coding task. Developers note that V3.1 integrates chat, reasoning and code generation in one system, reinforcing DeepSeek's position as one of the most capable open-source alternatives to proprietary offerings from OpenAI and Anthropic.

DeepSeek's low-profile launch contrasts with the fanfare that accompanied its reasoning-focused R1 model earlier this year. Chinese media attribute the delay of an R2 successor to CEO Liang Wenfeng's "perfectionism", but the company has now folded all public endpoints into V3.1, signalling a shift toward a single consolidated product line. The release underscores the growing technical competitiveness of Chinese labs and intensifies price pressure on closed commercial models.
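Since V3.1 reuses the existing API endpoints, a minimal sketch of calling it looks like the snippet below. This assumes the endpoint remains OpenAI-compatible at https://api.deepseek.com and that the chat model is still exposed under the deepseek-chat id, as with earlier V3 releases; neither detail comes from the V3.1 announcement itself.

```python
# Minimal sketch: querying DeepSeek V3.1 through its OpenAI-compatible API.
# Assumption: the base URL and "deepseek-chat" model id carry over from
# earlier V3 releases, since the V3.1 launch reused the existing endpoints.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_DEEPSEEK_API_KEY",      # key issued via DeepSeek's platform
    base_url="https://api.deepseek.com",  # assumed OpenAI-compatible endpoint
)

# A 128,000-token window is large enough to pass a ~300-page manuscript
# (on the order of 100,000 words) in a single prompt.
with open("manuscript.txt", encoding="utf-8") as f:
    long_document = f.read()

response = client.chat.completions.create(
    model="deepseek-chat",  # assumed model id serving V3.1
    messages=[
        {"role": "system", "content": "You are a careful technical summarizer."},
        {"role": "user", "content": f"Summarize the key arguments:\n\n{long_document}"},
    ],
)
print(response.choices[0].message.content)
```

Because the interface mirrors the OpenAI chat-completions format, existing client code written against earlier DeepSeek versions should work against V3.1 without changes beyond whatever model-id the provider documents.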
Best AI subscription plans 2025: ChatGPT, Perplexity, Gemini, Claude, and Grok compared https://t.co/xflW2EmuZX
DeepSeek vs ChatGPT-5: Who Will Win the AI Race? DeepSeek outperforms ChatGPT-5 in reasoning and planning. ChatGPT-5 still shines in creativity. So who's really winning the race? #DeepSeekVsChatGPT5 #AITrends2025 #AI #AINews #AnalyticsInsight https://t.co/GxGYLGoz2l
> DeepSeek V3.1 beats Claude 4 Opus
> DeepSeek moment has passed
> quick maths
https://t.co/gX1A1gcilq https://t.co/Laq8QWUwJw