Anthropic is dealing with another service disruption affecting its AI assistant Claude, only hours after resolving a previous outage. Thousands of users reported problems accessing the chatbot and related tools, raising concerns about platform stability.
More than 4,000 users in the US flagged issues, according to outage-tracking website Downdetector. In India, around 500 complaints had reportedly been logged.
Anthropic's status page indicates “elevated errors” across several services. The disruption is affecting Claude Opus 4.6, the flagship model, along with Claude.ai, Cowork, the Anthropic platform, and Claude Code. This means both general users and developers relying on the company's tools are experiencing instability.
Data from Downdetector indicates a range of problems. Nearly 39 per cent of users reported trouble with Claude's chat function, while 36 per cent faced issues with the mobile app. Another 15 per cent said they were unable to access the website.
Anthropic's status page currently lists the incident as “unresolved,” although the company said a fix has been rolled out. “A fix has been implemented, and we are monitoring the results,” the company said.
This is the second disruption in under 24 hours. On March 2, Anthropic reported elevated errors across Claude and related services, with users encountering HTTP 500 and HTTP 529 error messages.
An HTTP 500 error points to a generic server-side failure. HTTP 529 is a non-standard status code, used by some providers including Anthropic, that signals the system is overloaded and unable to handle incoming traffic.
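Since both codes indicate transient server-side trouble rather than a fault in the request, clients typically retry them with exponential backoff. The sketch below is a minimal, hypothetical illustration of that pattern; `send_request` is a stand-in for a real API call, and only the status codes 500 and 529 are taken from the article.

```python
import time

# Status codes worth retrying: transient server failure and overload.
RETRYABLE = {500, 529}

def call_with_retry(send_request, max_attempts=4, base_delay=0.01):
    """Call send_request, retrying on 500/529 with exponential backoff.

    The delay doubles after each failed attempt (base_delay, 2x, 4x, ...).
    Returns the final (status, body) pair, retryable or not.
    """
    for attempt in range(max_attempts):
        status, body = send_request()
        if status not in RETRYABLE:
            return status, body
        if attempt < max_attempts - 1:
            time.sleep(base_delay * (2 ** attempt))
    return status, body

# Simulated server: overloaded twice, then recovers.
responses = iter([(529, "overloaded"), (529, "overloaded"), (200, "ok")])
status, body = call_with_retry(lambda: next(responses))
print(status, body)  # 200 ok
```

Real clients usually add jitter to the delay and give up after a time budget rather than a fixed attempt count, but the core loop is the same.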
Anthropic is a US-based artificial intelligence company founded in 2021. It focuses on building advanced AI systems with strong safety and ethical safeguards. The company was started by former OpenAI researchers and works on developing reliable and controllable AI models. Claude is Anthropic's AI assistant.
The Tuesday outage came after a standoff between Anthropic and the US Department of Defense over how artificial intelligence should be used in military settings.
It began when Anthropic refused Pentagon demands to remove ethical limits on how its AI model Claude could be applied, especially in areas like mass domestic surveillance and fully autonomous weapons systems.
In response, the Trump administration directed all federal agencies to stop using Anthropic's AI tools, citing national security concerns. The Pentagon also signalled it may officially designate Anthropic as a “supply chain risk,” a label normally used for foreign adversaries.
Anthropic's CEO, Dario Amodei, has strongly rejected these actions, calling the designation “retaliatory and punitive.” Other AI firms, including OpenAI and xAI, have struck or expanded deals with defence agencies.