Beijing-based Moonshot AI has released Kimi K2, an open-source trillion-parameter Mixture-of-Experts (MoE) language model designed for advanced agentic intelligence tasks. The model has 1 trillion total parameters, with 32 billion active per forward pass, and supports a context window of up to 128,000 tokens. Kimi K2 is optimized for coding, mathematical reasoning, and autonomous agentic workflows, outperforming other open-source models and rivaling proprietary models such as Claude 4 and Gemini 2.5 on benchmarks including LiveCodeBench, AceBench, SWE Bench, Tau2, and AIME 2025.

The model was trained on 15.5 trillion tokens using the MuonClip optimizer, and its instruct version supports tool use and multi-step autonomous workflows. It is available under a modified MIT license with open weights, enabling fine-tuning and community development. Kimi K2 is positioned as a cost-effective alternative, delivering 60-70% cost savings compared to proprietary large language models. It has been recognized for its strong coding capabilities, reduced hallucinations, and agentic intelligence features, marking a notable advancement in open-source AI development from China.
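The MoE figures above imply heavy sparsity at inference time. A quick back-of-the-envelope calculation, using only the parameter counts stated in the summary, shows what fraction of the model actually computes per token:

```python
# Sparsity implied by the reported Kimi K2 figures:
# 1 trillion total parameters, 32 billion active per forward pass.
TOTAL_PARAMS = 1_000_000_000_000   # 1T total (all experts)
ACTIVE_PARAMS = 32_000_000_000     # 32B routed to per token

active_fraction = ACTIVE_PARAMS / TOTAL_PARAMS
print(f"Active per token: {active_fraction:.1%} of total parameters")
# → Active per token: 3.2% of total parameters
```

This is the core economics of MoE models: each token pays the compute cost of a ~32B dense model while drawing on the capacity of a much larger parameter pool, which is consistent with the cost-savings claim above.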
Kimi K2: The Chinese AI that challenges the West and revives the “DeepSeek” phenomenon https://t.co/mlJXkAnCkM
The DeepSeek-R1 moment is how the story of open-source reasoning dominance over closed models began. And here is how it continues, with four amazing new models from China's best AI companies that enhance reasoning with agentic capabilities: - Kimi K2: The Agentic Intelligence https://t.co/X9BuNTHh1K
Open-source AI just got serious. Kimi K2 is the first model that doesn't make me miss Claude for coding tasks: a 53.7% LiveCodeBench score (beats GPT-4) and real tool execution that actually works. Deep dive: https://t.co/LhjX3hyqNn https://t.co/ULmDI2szLK