Anthropic representatives say the company revoked OpenAI’s access to the Claude API after finding that the ChatGPT maker had violated the service’s terms of use by allegedly using Claude in the development of GPT‑5.
“Claude Code has become the primary choice for programmers worldwide, so it’s not surprising that OpenAI’s technical staff also used our tools before the release of GPT-5. [However,] this is a direct violation of our terms of service,” Anthropic spokesperson Christopher Nulty told Wired.
In particular, Anthropic’s commercial terms of service prohibit using its products to build competing products and services, train competing AI models, or reverse-engineer and copy its services.
According to Wired’s sources, OpenAI engineers accessed Claude through its API and wired it into their internal tools. This allowed them to compare Claude and GPT-5 on code generation, creative writing, and responses to potentially dangerous prompts, such as those involving CSAM (child sexual abuse material), self-harm, and defamation. In other words, the comparisons helped OpenAI improve its own models.
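For illustration only, here is a minimal sketch of what such a side-by-side API comparison might look like. It assumes the public anthropic and openai Python SDKs, placeholder model names, and a hypothetical test prompt; it is not a reconstruction of OpenAI’s internal tooling.

```python
# Hypothetical sketch: send the same prompt to Claude and an OpenAI model
# and print both responses for comparison. Model names are placeholders.
import anthropic
import openai

prompt = "Write a Python function that reverses a linked list."  # example prompt

# Query Claude via the Anthropic API (reads ANTHROPIC_API_KEY from the environment).
claude_client = anthropic.Anthropic()
claude_response = claude_client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model name
    max_tokens=1024,
    messages=[{"role": "user", "content": prompt}],
)

# Query an OpenAI model via the OpenAI API (reads OPENAI_API_KEY from the environment).
openai_client = openai.OpenAI()
openai_response = openai_client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)

# Print both outputs side by side for manual or automated review.
print("Claude:", claude_response.content[0].text)
print("OpenAI:", openai_response.choices[0].message.content)
```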
At the same time, Anthropic noted that it is not cutting off OpenAI’s access entirely: the company will retain access for safety evaluations and benchmarking. However, Anthropic has not clarified what exactly this covers or what restrictions now apply to OpenAI.
“Comparing competing models is an industry standard aimed at progress and safety. While we respect Anthropic’s decision to close our access to their API, it is disappointing given that our API remains available to them,” OpenAI spokesperson Hannah Wong told Wired.