A high-stakes legal battle is unfolding in Washington that could reshape the relationship between artificial intelligence companies and the U.S. military.

AI firm Anthropic has launched a lawsuit against the Pentagon after being blacklisted from defense contracts, accusing the government of abusing national security laws to punish the company for its stance on AI safety and autonomous weapons.

Legal experts say the case could become a landmark test of an obscure law rarely invoked by the U.S. government—and it may determine how far the military can go in cutting technology companies out of critical contracts.

The Pentagon’s Sudden Blacklisting

The dispute began when the U.S. Department of Defense designated Anthropic as a “supply chain risk,” effectively excluding the company from certain military contracts.

The move could cost the AI developer billions of dollars in potential revenue by 2026, according to company executives, while also damaging its reputation in one of the fastest-growing sectors of the global economy.

Anthropic responded Monday with a lawsuit claiming the decision violated both constitutional rights and federal law.

The company argues the designation was not about security risks, but about punishing it for policies that restrict how the military can use its AI models.

An Obscure Law Suddenly in the Spotlight

To justify the blacklisting, the Pentagon invoked Section 3252, a little-known national security statute designed to protect federal information systems from foreign sabotage.

The law allows the defense secretary to bar companies from contracts if they pose a risk that adversaries could:

  • Sabotage military networks

  • Insert malicious code or hidden functions

  • Disrupt or surveil defense systems

But here’s the twist: according to legal databases reviewed by reporters, the statute has never been tested in court—and has never been publicly used against a U.S. company before.

That alone could make the case a historic legal showdown.

“It’s not at all clear that the statute can even apply to an American company where there’s no foreign entanglement,” said Alan Rozenshtein, a national security law professor at the University of Minnesota.

The AI at the Center of the Fight

The dispute revolves around Claude, Anthropic’s flagship artificial intelligence system.

Claude has already been used by the U.S. military, reportedly including during recent strikes on Iran.

Ironically, Anthropic’s lawsuit says Defense Secretary Pete Hegseth himself praised the technology during a meeting in February, calling it “exquisite” and expressing interest in working with the company.

Yet just days later, the Pentagon labeled the firm a national security risk.

Anthropic says it was never given a clear explanation for the dramatic reversal.

The Policy Clash That Sparked the Conflict

At the heart of the dispute is a philosophical divide over how AI should be used in warfare.

Anthropic has imposed strict safeguards on Claude, including bans on:

  • Autonomous weapons systems

  • Domestic surveillance of civilians

The company argues current AI technology is not reliable enough to control lethal weapons without human oversight.

Pentagon officials disagree.

They argue such restrictions could limit military capabilities and potentially endanger American troops.

The situation escalated when Anthropic refused to remove those safeguards, prompting the Pentagon’s blacklisting decision.

Political Tensions Enter the Picture

The dispute quickly spilled into politics.

In public comments and social media posts, officials from President Donald Trump’s administration sharply criticized the company.

At one point, Trump reportedly described Anthropic as a “RADICAL LEFT WOKE COMPANY.”

Defense Secretary Hegseth also accused the firm of hiding behind “sanctimonious rhetoric” about AI ethics to pressure the military.

Legal experts say those comments could weaken the government’s case.

“A lot of what officials said suggests personal animus,” said Joel Dodge, a legal expert at Vanderbilt University. “It raises the possibility that the decision was driven by bad blood rather than genuine security concerns.”

Anthropic’s Constitutional Argument

Anthropic’s lawsuit claims the government violated two key constitutional protections:

First Amendment (Free Speech)

The company argues it was punished for expressing views about AI safety in warfare.

Fifth Amendment (Due Process)

Anthropic says the government imposed severe economic penalties without:

  • Evidence of wrongdoing

  • A transparent review process

  • Any chance for the company to defend itself

The lawsuit describes the government’s action as “draconian.”

Legal analysts say the government’s actions may contain contradictions that could work in Anthropic’s favor.

For example, the Pentagon has reportedly continued using Claude in military operations even while labeling the company a supply chain risk.

At the same time, officials have discussed potentially forcing AI firms to supply technology under emergency powers such as the Defense Production Act.

“The government was simultaneously threatening to force Anthropic to sell its services while saying the technology was too dangerous to use,” said Professor Rozenshtein. “Not all of these things can be true.”

Why the Government Still Might Win

Despite those arguments, the Pentagon holds a powerful advantage in court: national security authority.

U.S. courts traditionally defer heavily to the executive branch when decisions involve military or intelligence concerns.

Government lawyers are likely to argue that:

  • The president has broad authority to choose defense suppliers

  • Military readiness requires unrestricted access to technology

  • Vendors that limit how the military uses their tools may not be reliable partners

If the court agrees, the case could reinforce the government’s sweeping power over defense procurement.

What’s at Stake for the AI Industry

Beyond Anthropic, the lawsuit could reshape the relationship between Silicon Valley and the Pentagon.

Many AI companies are grappling with ethical questions about how their technologies should be used in warfare and surveillance.

If the government wins, it could signal that AI developers must comply with military demands—or risk being shut out of the defense market entirely.

If Anthropic wins, it could establish that technology companies have the right to set ethical limits on how their AI is deployed.

A Defining Moment for AI and National Security

The case arrives at a moment when artificial intelligence is rapidly becoming one of the most strategically important technologies in the world.

Military planners see AI as critical to:

  • Cyberwarfare

  • Battlefield intelligence

  • Autonomous defense systems

But the legal fight between Anthropic and the Pentagon highlights a deeper question:

Who ultimately controls the future of AI—technology companies or governments?

The answer could shape not just defense policy, but the entire trajectory of the global AI race.

ChainStreet