Lmao at Anthropic spox insinuating that GPT-5 is still worse at coding than Claude. https://t.co/dsz0ShlI7N
This is f hilarious 🤣 Anthropic revoked OpenAI’s access to its models for using Claude’s code. LMAO OpenAI hype their own models and use Anthropic’s for development. Also, Dario is the most jealous man I’ve seen, jealous of open source, jealous of Windsurf, and now jealous of https://t.co/w1RL02z1sq https://t.co/dUDLEVV30o
Anthropic cut off OpenAI’s access to its API; they were using Claude Code https://t.co/GAffQmCmNf
AI start-up Anthropic has suspended OpenAI’s access to the Claude application-programming interface after concluding the ChatGPT maker violated contractual terms barring customers from using Claude to develop competing systems. The cutoff, implemented on Tuesday, affects Claude Code and other Anthropic large-language models, according to people familiar with the decision.

Anthropic spokesperson Christopher Nulty told WIRED that internal testing showed OpenAI engineers had been routing Claude outputs through proprietary tools to benchmark coding, creative-writing and safety performance ahead of an expected GPT-5 release. Anthropic’s commercial agreement prohibits customers from "building a competing product or service" or "reverse-engineering" the models, Nulty said.

OpenAI acknowledged the suspension and said cross-model evaluations are standard industry practice for safety research. "While we respect Anthropic’s decision, it is disappointing given that our own API remains available to them," chief communications officer Hannah Wong said in a statement.

Anthropic indicated it may still permit limited access for independent safety benchmarking, but declined to clarify how that will be handled. The dispute highlights rising competitive tension among leading AI providers as they race to release more capable models and protect valuable training data and usage telemetry.