OpenAI has introduced GPT-5, the latest version of its flagship large language model, positioning it as the company’s most capable system for coding, writing and multi-step reasoning. OpenAI is offering four versions (gpt-5, gpt-5-mini, gpt-5-nano and a higher-capacity Pro/Thinking model), with requests routed automatically according to their complexity. The standard API is priced at $1.25 per million input tokens and $10 per million output tokens, with cheaper tiers for the Mini and Nano models, and GPT-5 supports a 400,000-token context window.

The model is being rolled out to OpenAI’s roughly 700 million weekly ChatGPT users, including free accounts, while paid Plus and Pro subscribers receive higher reasoning limits. Enterprise customers retain legacy APIs for now, but GPT-5 replaces older engines such as GPT-4o inside the ChatGPT interface. Microsoft simultaneously enabled GPT-5 across Microsoft 365 Copilot, Windows 11 Copilot, Azure AI services and GitHub Copilot, extending the new capabilities to millions of corporate and consumer users.

Chief Executive Officer Sam Altman called GPT-5 “the best model in the world” and said OpenAI will keep prioritizing growth and compute investment even at the expense of short-term profitability. He later told users in a Reddit AMA that Plus message limits will double and that the company is considering restoring access to GPT-4o after an auto-switching glitch left GPT-5 appearing “dumber” on launch day. The abrupt retirement of earlier models and initial routing failures have drawn criticism, with some paying customers canceling subscriptions and technology forums filling with complaints. Altman acknowledged the missteps and promised interface updates that make it easier to manually select GPT-5-Thinking when deeper analysis is needed.

OpenAI’s aggressive pricing undercuts Anthropic’s Claude Opus and Google’s Gemini in many workloads and could trigger a fresh round of price competition among AI providers. Rival xAI claimed its Grok 4 model outperformed GPT-5 on the ARC-AGI-2 reasoning benchmark, underscoring the intensifying race among frontier-model developers.
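For readers weighing the quoted per-token rates, here is a minimal cost-arithmetic sketch in Python. It uses only the prices stated above ($1.25 per million input tokens, $10 per million output tokens); the token counts in the example are hypothetical, chosen to illustrate a request near the 400,000-token context window.

    # Rough cost estimate at the quoted gpt-5 API rates.
    # Rates are taken from the announcement; token counts are hypothetical.
    INPUT_RATE_PER_M = 1.25    # USD per 1M input tokens
    OUTPUT_RATE_PER_M = 10.00  # USD per 1M output tokens

    def estimate_cost(input_tokens: int, output_tokens: int) -> float:
        """Return the estimated USD cost of a single API request."""
        return (input_tokens / 1_000_000) * INPUT_RATE_PER_M \
             + (output_tokens / 1_000_000) * OUTPUT_RATE_PER_M

    # Example: a long-context request near the 400,000-token window,
    # returning a 2,000-token response.
    print(f"${estimate_cost(400_000, 2_000):.4f}")  # -> $0.5200

At these rates, even a request that fills most of the context window costs on the order of half a dollar, which is the basis for the pricing-pressure point made above.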
Now that GPT-5 is out, how do you think I did? Were my predictions for the model correct? https://t.co/fx3vJDXHlR
GPT-5 API = amazing
ChatGPT = bad
Unified model is hard, and routing incorrectly to a dumb smaller model that thinks less behind the scenes, more or less randomly, feels so bad when it fails. Especially if you err on the side of dumber, quicker answers, potentially to reduce cost.
GPT-5 pretty much settled this question ... no AGI by OpenAI in 2025. https://t.co/4lDmLKyWBN