Anthropic’s “Supply Chain Risk” Label
At the end of last month, the United States Department of War designated Anthropic a national security supply chain risk, a label reserved almost exclusively for foreign adversaries. No American technology company has ever been placed in that category. The closest example in recent memory is the government's treatment of Huawei, which faced sweeping restrictions over concerns that its equipment could compromise national security infrastructure. This time the target is not a Chinese telecom giant. It is one of the fastest-growing AI companies right at home in Silicon Valley.
The designation followed the collapse of a $200 million Pentagon contract and triggered an immediate restriction across the defense ecosystem. Any contractor, supplier, or partner doing business with the U.S. military may not conduct commercial activity with Anthropic in connection with those contracts. For a company whose technology is increasingly used across enterprise software and developer workflows, the implications, naturally, extend far beyond a single contract.
So what went wrong?
The conflict between Anthropic and the Pentagon was not about model capability, performance, compute, or any other technical metric. It was about control over how the technology could be used.
Anthropic had two hard red lines governing the use of its models.
The first prohibits the use of Claude for mass domestic surveillance.
The second prevents the deployment of the model within lethal autonomous weapons systems unless there is meaningful human oversight in operational decision making.
Defense officials wanted more. The Pentagon is rapidly integrating AI into military planning, intelligence analysis, cyber operations, and battlefield modeling. Officials wanted fewer constraints on how frontier models could be deployed within those systems; Anthropic refused to remove those guardrails.
Negotiations collapsed. The Pentagon cancelled the $200 million contract and designated the company a supply chain risk, effectively blocking its technology from being used within Department of War procurement chains. Anthropic has since filed suit to challenge the designation, arguing that the action is legally unsound and that the statute governing supply chain risks was never intended to be used as a punitive measure against a domestic technology company.
A rather strange moment for a company on a hot streak
The timing is notable because Anthropic has been gaining major momentum across the tech ecosystem.
Claude, the company’s flagship model family, has become one of the most widely used AI tools among developers and technical teams, with particular strength in developer tooling, reasoning, and long-context workflows. Many engineering organizations increasingly rely on it for code generation, debugging, research synthesis, and documentation.
That adoption has translated into huge commercial growth. Anthropic’s ARR grew from roughly $1 billion in late 2024 to more than $9 billion by the end of 2025, and before this story broke, the company was considered one of the top private candidates for a marquee IPO in 2026.
An unusual precedent
The supply chain risk designation is typically used to protect U.S. infrastructure from foreign suppliers that might introduce vulnerabilities into government systems. Historically, the designation has been applied to telecommunications equipment providers or hardware manufacturers with links to rival states. Applying it to a domestic AI company is unprecedented.
That distinction matters because the designation carries structural consequences across the defense ecosystem. Contractors working with the Department of War must certify that restricted suppliers are not embedded within the technology stack supporting government systems. In practice, that forces companies participating in defense contracts to remove the restricted technology from any workflows connected to those contracts.
In Anthropic’s case, the legal scope is narrower than the initial headlines suggested. The underlying statute focuses on protecting specific government supply chains rather than banning a supplier from all commercial activity. Anthropic has argued that the designation only affects the use of Claude within specific Department of War contracts and does not apply to most of its commercial customers.
Even so, the signal from Washington is unmistakable. If a supplier refuses to align with defense requirements, the government has mechanisms to remove that supplier from the ecosystem.
A cautionary tale for startups chasing federal contracts
The Pentagon has spent the last several years actively courting frontier technology companies. AI, autonomous systems, cyber, and space have been core priorities for the U.S. military.
For startups, defense contracts are attractive opportunities. They are large, multi-year agreements backed by one of the deepest pools of capital in the world. Many emerging technology companies view federal contracts as a way to accelerate revenue growth and establish credibility in strategic industries. The Anthropic saga illustrates the tradeoffs that come with those opportunities.
Government contracts operate inside a political system. Terms can evolve, priorities often shift, and companies may find themselves negotiating with institutions whose operational priorities are fundamentally different from those of commercial technology firms. In this case, the dispute centered on how much autonomy the military should have over a general purpose AI system.
Anthropic believed certain boundaries were necessary. The Pentagon believed those boundaries were unacceptable. When negotiations broke down, the government simply moved to another supplier. OpenAI became the beneficiary, at a moment when its standing with parts of the tech community was already strained. Following the Pentagon deal, U.S. uninstalls of the ChatGPT mobile app surged 295% day over day, reflecting a wave of backlash from some users and developers over the company’s decision to work with the Department of War on those terms.
The broader question
The larger question is not which company ultimately won the contract. It is how control over general purpose AI systems will be distributed between private companies and governments.
Frontier AI models are no longer just chatbots you ask for “this week’s high protein meal prep” or a tool to help troubleshoot a SQL query. They are infrastructure, supporting everything from enterprise productivity to the software powering autonomous drone strikes in a rapidly changing weapons environment. As such, governments view them as among their most strategic assets.
Anthropic’s position reflects one view of the future, in which AI developers retain meaningful limits over how their systems are deployed. The Pentagon’s position reflects another view, in which “national security” requirements override those limits.
That tension will not disappear. As AI becomes more powerful and more deeply embedded in state power, the relationship between frontier AI labs and governments will become one of the defining questions shaping both the defense industry and the broader technology sector.