Anthropic has quintupled the context window of its mid-tier Claude Sonnet 4 model, allowing the system to accept up to one million tokens in a single prompt. The upgrade, rolled out on the Anthropic API and through cloud partners Amazon Bedrock and Google Cloud’s Vertex AI, lets developers feed the model roughly 750,000 words or 75,000 lines of code at once. The jump from the previous 200,000-token limit positions Claude Sonnet 4 ahead of OpenAI’s GPT-5, which tops out at 400,000 tokens, and level with the one-million-token window of Google’s Gemini 2.5 Pro. Anthropic says the larger window improves performance on extended software-engineering and other long-horizon tasks by letting the model keep more working context in memory. Anthropic is keeping its existing price of $3 per million input tokens and $15 per million output tokens for prompts up to 200,000 tokens, while charging $6 and $22.50, respectively, for requests that exceed that threshold. “AI coding platforms will get a lot of benefit from this update,” Brad Abrams, product lead for the Claude platform, told TechCrunch, adding that the API business is “growing” despite rising competition.
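To make the tiered pricing concrete, the short Python sketch below estimates per-request cost using only the rates quoted above. It is a hypothetical helper, not Anthropic's billing logic, and it assumes, as the pricing description implies, that the premium rates apply to the entire request once the input prompt crosses the 200,000-token threshold.

# Minimal cost sketch based on the published rates (USD per million tokens).
# Assumption: the $6 / $22.50 premium rates cover the whole request once the
# input prompt exceeds 200,000 tokens.

STANDARD_RATES = {"input": 3.00, "output": 15.00}  # prompts <= 200K input tokens
PREMIUM_RATES = {"input": 6.00, "output": 22.50}   # prompts > 200K input tokens
THRESHOLD = 200_000


def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of a single Claude Sonnet 4 request."""
    rates = PREMIUM_RATES if input_tokens > THRESHOLD else STANDARD_RATES
    return (input_tokens * rates["input"] + output_tokens * rates["output"]) / 1_000_000


# Example: a full 1M-token prompt (roughly 75,000 lines of code) with a
# 10,000-token reply comes to about $6.23, versus about $0.75 for the same
# reply against a 200K-token prompt billed at the standard rates.
print(estimate_cost(1_000_000, 10_000))
print(estimate_cost(200_000, 10_000))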