Microsoft makes OpenAI’s new open model available on Windows. The new lightweight open model is also coming to macOS soon, through Microsoft’s local foundry efforts.
Anthropic Drops Claude Opus 4.1, Crushes Coding Benchmarks
Anthropic has released Claude Opus 4.1, an upgrade to its flagship large language model that scored 74.5 percent on the SWE-Bench Verified coding benchmark, overtaking OpenAI’s o3 (69.1 percent) and Google’s Gemini 2.5 Pro (67.2 percent). The result marks the highest publicly reported performance on the real-world software-engineering test and reinforces Anthropic’s lead in AI-assisted coding.

The model is available immediately to paying Claude users through the web interface, Claude Code, API access and cloud partners including Amazon Bedrock and Google Cloud Vertex AI, with no change to pricing. Anthropic said it will deliver “substantially larger” upgrades in the coming weeks and highlighted gains in multi-step reasoning, code refactoring and data analysis tasks.

Analysts view the release as a pre-emptive move ahead of OpenAI’s expected GPT-5 launch. VentureBeat reports that nearly half of Anthropic’s estimated $3.1 billion in annual API revenue comes from just two coding customers, underscoring the company’s need to defend its niche in developer tools.

On the same day, OpenAI returned to its roots in open models by publishing gpt-oss-20b and gpt-oss-120b under an Apache 2.0 licence, the firm’s first open-weight releases since 2019. Microsoft quickly integrated the smaller, 20-billion-parameter version into Windows and Azure AI Foundry, enabling local inference on PCs with 16 GB of VRAM and signalling wider availability on macOS.

The twin launches illustrate an accelerating arms race among leading AI labs. While Anthropic pushes closed-weight performance higher, OpenAI is courting developers with permissively licensed models that can run on consumer hardware, giving enterprises a broader set of options for building and deploying generative-AI applications.