OpenAI has released two open-weight large language models, gpt-oss-120b and gpt-oss-20b, marking the company's first open release since GPT-2 in 2019. The move represents a strategic shift for the Microsoft-backed firm, which has spent the past five years prioritising proprietary systems.

The larger gpt-oss-120b packs 117 billion parameters but, through a Mixture-of-Experts design, activates only 5.1 billion per token at inference, allowing it to run on a single 80 GB Nvidia H100 GPU. The smaller 20-billion-parameter model activates 3.6 billion parameters and can run locally on devices with 16 GB of memory. Both text-only models offer a 128,000-token context window, expose full chain-of-thought reasoning, and reportedly perform on par with OpenAI's proprietary o4-mini and o3-mini models, respectively.

The models are available today via platforms such as Hugging Face, Azure, AWS and Databricks under the permissive Apache 2.0 licence, which allows unrestricted commercial use and fine-tuning. OpenAI says the release followed extensive external safety testing and describes the family as its most rigorously evaluated to date.

Chief Executive Officer Sam Altman acknowledged earlier this year that the company had been "on the wrong side of history" by keeping its technology closed. The open release comes amid intensifying competition from Chinese groups such as DeepSeek and Alibaba's Qwen, as well as U.S. rivals including Meta's Llama series, and is aimed at giving developers a domestically produced alternative they can run on their own infrastructure.
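
For developers curious what running the smaller model on their own hardware might look like, the sketch below uses the standard Hugging Face transformers text-generation API. It is a minimal illustration, not an official quickstart: the repo ID "openai/gpt-oss-20b" and the chat-template usage are assumptions based on how open-weight releases are typically published on Hugging Face, and actual memory requirements will depend on precision and hardware.

    # Minimal sketch: load gpt-oss-20b locally with Hugging Face transformers.
    # Assumes the repo ID "openai/gpt-oss-20b" (not confirmed in this article)
    # and roughly 16 GB of free accelerator or system memory.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "openai/gpt-oss-20b"  # assumed Hugging Face repo ID

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype="auto",   # keep the checkpoint's native precision
        device_map="auto",    # place weights on GPU if available, else CPU
    )

    # Format a chat-style prompt with the model's own chat template.
    messages = [{"role": "user", "content": "Summarise Mixture-of-Experts in one paragraph."}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    outputs = model.generate(inputs, max_new_tokens=256)
    # Decode only the newly generated tokens, skipping the prompt.
    print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))

Because only 3.6 billion of the model's parameters are active per token under the Mixture-of-Experts design, inference compute stays far below what the raw parameter count would suggest, which is what makes local use on 16 GB devices plausible.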