Pentagon threat to review Anthropic contract sparks US debate on AI and war

AI model Claude at centre of Pentagon row as Anthropic risks $200M contract to defend guardrails

Representational image: screenshot of Claude

Anthropic is locked in an escalating public dispute with the United States Department of Defense — recently rebranded by the administration as the “Department of War” — over restrictions on how its AI model, Claude, may be deployed for military purposes.

The dispute intensified in February 2026 following reports that Claude had been used during a January operation in Caracas that resulted in the capture of Venezuelan president Nicolás Maduro and his wife, Cilia Flores.

The operation, carried out by US special forces including Delta Force, reportedly involved airstrikes and a ground raid. Maduro was subsequently transferred to New York to face drug-trafficking charges. US officials reported no American casualties, although Venezuelan sources claimed dozens of local deaths.

Claude was integrated into military systems via platforms operated by Palantir Technologies, which are widely used by the Pentagon for operational planning and data analysis. While the precise role of the AI system in the operation remains classified, it is understood to have supported mission planning, intelligence assessment or logistics functions. Anthropic has said that any such deployment complied with its existing contractual safeguards.

The reported use of Claude in the Maduro operation prompted internal scrutiny at Anthropic and, according to sources familiar with the matter, led to renewed demands from the Pentagon that the company remove what it views as restrictive conditions on the model’s military applications.

At the centre of the dispute are two red lines Anthropic says it will not cross: mass domestic surveillance and the development or deployment of fully autonomous weapons without meaningful human oversight.

In a statement published on 26 February, Anthropic chief executive Dario Amodei said he believed deeply in the importance of using AI to defend the United States and its democratic allies. He noted that Anthropic had been the first frontier AI firm to deploy models within classified US government networks and at National Laboratories, and that Claude is already used across defence and intelligence agencies for “mission-critical applications” including intelligence analysis, modelling, cyber operations and operational planning.

However, Amodei argued that certain applications risk undermining democratic values or exceed the safe capabilities of current AI systems.

On mass domestic surveillance, he warned that advanced AI could enable the large-scale aggregation of commercially available data — such as movement, browsing history and associations — into detailed personal profiles without warrants, raising serious civil liberties concerns. On fully autonomous weapons, he said frontier AI systems were not yet reliable enough to remove humans entirely from decisions about selecting and engaging targets, adding that proper guardrails did not currently exist.

Anthropic maintains that these exclusions have never been part of its contracts with the Pentagon and have not hindered existing military deployments of Claude.

The Department of War, led by Secretary Pete Hegseth, is said to be insisting that contractors agree to provide AI tools for “any lawful use”. According to Anthropic, officials have threatened to cancel contracts reportedly worth up to $200m, designate the company a “supply chain risk”, or invoke the Defense Production Act to compel changes to its safeguards.

No formal written statement has been issued by the department addressing the dispute. However, Pentagon spokesperson Sean Parnell wrote on X on 25 February that the department had no interest in using AI for mass surveillance of Americans or for autonomous weapons without human involvement, but required access for all lawful purposes. Another defence official said existing laws and policies already prohibit such uses and dismissed the need for additional contractual commitments.

The standoff underscores broader tensions over the role of artificial intelligence in modern warfare. Anthropic has previously collaborated extensively with US defence agencies, including a reported $200m agreement in 2025, and has publicly supported export controls and restrictions on AI access by firms linked to the Chinese Communist Party.

As a 27 February deadline set by defence officials looms, Anthropic has said it is prepared to continue working with the Pentagon under its proposed safeguards but would facilitate a smooth transition to another provider if removed from defence systems.

For now, the dispute highlights a fundamental question confronting governments and technology firms alike: how far AI should be allowed to shape the future conduct of war — and at what cost to democratic norms.
