Independent benchmarking platform LM Arena updated its leaderboards on Aug. 18, 2025, showing Anthropic’s newly released Claude Opus 4.1 Thinking model debuting at No. 1 in the Text, Coding, and WebDev arenas. According to the site’s publicly available rankings, it is the first system to hold the top spot across all three flagship categories simultaneously. The thinking and non-thinking versions of Claude Opus 4.1 now occupy first and second place, respectively, on the coding list, displacing OpenAI’s GPT-5-high. The update consolidates Claude’s position in a field where performance differences can determine which models corporate developers and research teams select.
Opus 4.1 thinking and non-thinking versions take 1st and 2nd place in @lmarena_ai, moving ahead of GPT-5-thinking-high https://t.co/dc7rh3FyKt https://t.co/24K5ehy9VB
Claude 4.1 Opus Thinking back in shared 1st place with GPT-5-high on WebDev Arena https://t.co/hpSkUwfWPe
Claude 4.1 Opus takes the #1 spot in lmarena's coding category; even the non-reasoning version is ahead of GPT-5-high https://t.co/tmGJxyqCzr