U.S. Judge Blocks Pentagon’s Blacklisting of Anthropic


A U.S. judge has issued a temporary block on the Pentagon’s decision to blacklist Anthropic, marking a significant development in the company’s ongoing dispute with the military regarding AI safety in combat settings. The lawsuit filed by Anthropic in a California federal court asserts that the U.S. Secretary of War, Pete Hegseth, exceeded his authority by classifying Anthropic as a national security supply-chain risk without allowing the company to challenge the designation.

According to Anthropic, the government’s actions infringed upon its First Amendment right to free speech and its Fifth Amendment right to due process. The company contends that it was unfairly targeted for its stance on AI safety without the opportunity to defend itself. U.S. District Judge Rita Lin, appointed by former President Joe Biden, sided with Anthropic in a detailed 43-page ruling, but delayed the enforcement of the decision for seven days to permit the administration to appeal.

The controversy arose after Anthropic objected to the military using its AI chatbot Claude for domestic surveillance or autonomous weaponry, prompting Hegseth to block Anthropic from specific military contracts. Anthropic executives raised concerns over potential financial losses and reputational damage resulting from the restriction.

Anthropic argues that current AI models lack the reliability needed for safe deployment in autonomous weapons and opposes domestic surveillance as a violation of civil liberties. While the Pentagon maintains that private entities should not impede military activities, it clarified that it does not intend to employ such technologies for unauthorized purposes.

Judge Lin’s ruling cast doubt on the government’s motives, suggesting that Anthropic may have been penalized for publicly criticizing the government’s contracting decisions. Anthropic’s spokesperson, Danielle Cohen, expressed satisfaction with the court’s decision and emphasized the company’s commitment to collaborating with the government for the advancement of secure and dependable AI applications.

The designation of Anthropic as a supply-chain risk marked the first instance of a U.S. company publicly receiving such a classification under a government-procurement statute designed to safeguard military systems from potential sabotage. In response to the lawsuit, the Justice Department argued that Anthropic’s reluctance to comply with contractual terms could introduce uncertainties in the Pentagon’s utilization of Claude, risking operational disruptions in military systems.

As the legal battle continues, Anthropic faces another challenge in Washington concerning a separate supply-chain risk designation by the Pentagon that could lead to its exclusion from civilian government contracts. The government maintains that the restrictions imposed on Anthropic stem from contractual disagreements rather than the company’s AI safety concerns.
