Anthropic has enlarged the context window of its Claude Sonnet 4 model to one million tokens, a five-fold jump from the previous 200,000-token limit. The change lets the model ingest the equivalent of about 750,000 words—or 75,000 lines of code—in a single request, catering to developers who want to analyse full software repositories or large research collections at once.

The expanded context is available immediately in public beta through the Anthropic API and Amazon Bedrock, with support for Google Cloud's Vertex AI expected soon. Anthropic says the longer window improves performance on long-horizon coding and document-analysis tasks, an area where the company is competing with OpenAI's GPT-5 and other large-language-model providers.

Anthropic is keeping its existing rates for prompts up to 200,000 tokens at $3 per million input tokens and $15 per million output tokens. For longer prompts the company will charge $6 per million input tokens and $22.50 per million output tokens. The pricing reflects the additional computing required while giving enterprise customers a pathway to larger, more complex workloads.
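The two-tier pricing above can be sketched as a small cost estimator. This is a hypothetical illustration, not official Anthropic tooling; it assumes that when a prompt exceeds 200,000 input tokens the higher rate applies to the entire request, input and output alike.

```python
# Hypothetical cost sketch for the tiered pricing described above.
# Assumption (not confirmed by the article): once input exceeds
# 200K tokens, the long-context rate applies to the whole request.

def request_cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Estimate the cost of one API request under the two pricing tiers."""
    if input_tokens <= 200_000:
        in_rate, out_rate = 3.00, 15.00    # $ per million tokens (base tier)
    else:
        in_rate, out_rate = 6.00, 22.50    # $ per million tokens (long-context tier)
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# A 150K-token prompt with a 4K-token reply stays in the base tier:
print(round(request_cost_usd(150_000, 4_000), 2))   # 0.51
# An 800K-token prompt with the same reply hits the long-context tier:
print(round(request_cost_usd(800_000, 4_000), 2))   # 4.89
```

The example shows how the step change at 200K tokens more than doubles per-request cost, which is the trade-off enterprise customers weigh against fitting a whole repository in one prompt.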
Started running vibe checks on Claude Sonnet 4 with 1M context support. Compared to Gemini 2.5 Pro on a paper-analysis task, Sonnet 4 is fast, concise, and pays attention to detail. That makes it ideal for AI agents. More expensive, though. More thoughts below: https://t.co/hjOIUoayyj
Claude Sonnet 4 now has 1 million token context on Anthropic API 👀 https://t.co/0bvR17V4Wv
🔧 Claude Sonnet 4 now supports 1M tokens of context, a 5x jump that fits whole repos or dozens of papers in 1 prompt. Pricing rises past 200K tokens to $6 per 1M input and $22.50 per 1M output. This scale means the model can read about 750,000 words or roughly 75,000 lines of code https://t.co/j1mvP2jKCE https://t.co/IsG3sy3E8i