Claude AI, the Iran Strikes, and the Battle Over Who Controls Artificial Intelligence in Warfare


On February 28, 2026, two events unfolded simultaneously that will likely shape artificial intelligence, military strategy, and technology ethics for years to come. The United States and Israel launched major military strikes against Iran, hitting targets across 24 of the country's 31 provinces. And in Washington, President Donald Trump ordered all federal agencies to immediately cease using the AI tools of Anthropic, the San Francisco-based company behind the Claude family of large language models. What made the situation extraordinary was not either event on its own, but the fact that the US military continued using Claude during the Iran operation for hours after the ban was declared.

According to reporting by the Wall Street Journal and confirmed by multiple outlets including Axios and Cybernews, US Central Command, which oversees American military operations in the Middle East, deployed Anthropic's Claude AI for intelligence assessments, target identification, and the simulation of combat scenarios during the operation. Claude processed vast amounts of data, including satellite imagery, communications intercepts, and other signals intelligence, to generate threat evaluations and situational summaries for military planners. It was not operating as an autonomous weapon. Rather, it was functioning as an analytical and decision-support tool, helping human operators understand complex battlefield conditions more quickly than would otherwise be possible.

This was not the first time Claude had been used in a high-profile military mission. In January 2026, the US military deployed the AI in the operation that led to the capture of Venezuelan President Nicolas Maduro. That operation involved partners including Palantir Technologies and Amazon Web Services, which had integrated Claude into classified government networks through secure cloud infrastructure. Anthropic learned of the Maduro operation after the fact and asked Palantir whether Claude had been used in circumstances where shots were fired. The Pentagon interpreted this as Anthropic attempting to retroactively police military operations. Anthropic maintained it was simply verifying compliance with its terms of use. The disagreement that followed set the stage for everything that came after.

Anthropic's terms of service contain two explicit prohibitions that became the crux of the conflict. The company does not permit its tools to be used for mass domestic surveillance of American citizens, and it does not permit them to be used in fully autonomous weapons systems where no human makes the final decision to use lethal force. These are not fringe positions. They reflect widely held views among AI safety researchers and have been the basis of international policy discussions for years. Anthropic argued that these two conditions were reasonable ethical limits that a responsible AI developer should maintain. The Pentagon, under Defense Secretary Pete Hegseth, took the opposite view. Hegseth demanded full and unrestricted access to all of Anthropic's AI models for any lawful military purpose, without carve-outs or conditions imposed by the company.

When Anthropic refused to meet those demands, the situation escalated rapidly. On February 27, 2026, Trump posted on Truth Social calling Anthropic a "leftwing" and "woke" company whose actions were putting American lives, troops, and national security at risk. On February 28, he directed all federal agencies to immediately stop using Anthropic's technology, with a six-month phase-out window given to departments already running the company's tools in embedded systems. Hegseth went further, formally designating Anthropic a supply chain risk to national security under US law. That designation, experts noted, is a classification historically applied to adversarial foreign companies, particularly Chinese and Russian technology firms. Applying it to an American company was, by any measure, an unprecedented step.

Anthropic responded quickly and forcefully. The company stated that designating a US firm as a national security supply chain risk under the relevant legal provision, specifically 10 USC 3252, had never before been done publicly to an American company. Anthropic said the designation could only legally restrict the use of Claude within Department of War contracts and could not affect how contractors used Claude to serve other customers. The company announced it would challenge the decision in court. In a public statement, Anthropic said it had permitted defense use of its technology all along, with the exception of the two conditions it had always maintained. It framed the conflict not as a refusal to support national security, but as a principled stand against being forced to remove safety guardrails that protect civilians and prevent the unchecked automation of lethal decision-making.

The broader technology industry took notice. More than 430 employees from Google and OpenAI signed a solidarity petition expressing concern about the treatment of Anthropic and the precedent being set. The fact that employees from OpenAI, a direct commercial competitor, joined in signing the petition suggests the concerns ran deeper than corporate loyalty. Many in the AI field viewed the situation as a test case for whether private AI developers could maintain ethical limits on how their tools are used by governments, or whether national security demands would override any such limits.

OpenAI moved quickly in the aftermath of the ban. CEO Sam Altman announced within hours that his company had reached an agreement with the Pentagon for deployment of its tools, including ChatGPT, in classified environments. The contract language published by OpenAI stated that the AI system could be used for all lawful purposes consistent with applicable law, operational requirements, and safety protocols, and would not be used to independently direct autonomous weapons in cases where law or policy requires human control. Notably, this language is not substantially different from the principles Anthropic had been insisting on. The key difference appeared to be in the willingness to negotiate the scope and enforcement of those principles, rather than in the principles themselves.

xAI, Elon Musk's artificial intelligence company, also secured a deal with the Pentagon for classified deployment around the same time. The rapid pivot from Anthropic to OpenAI and xAI illustrated both how critical AI tools had become to military operations, and how quickly alternatives could be sourced when a political or contractual dispute created an opening.

The Iran operation itself was significant in military and geopolitical terms far beyond the AI controversy. Strikes hit cities including Tehran, Qom, Isfahan, and Kermanshah. Iran's Supreme Leader Ayatollah Ali Khamenei was killed during the operation. The Red Crescent reported more than 200 deaths and 700 injuries across Iran in the initial phase of the strikes. The scale and coordination of the operation, described as a joint US-Israel campaign, marked a dramatic escalation in tensions that had been building throughout 2025.

The fact that Claude remained active in these operations even after Trump's ban speaks to how deeply the technology had been integrated into military systems. The six-month phase-out window Trump provided was not a courtesy gesture. It was a practical acknowledgment that you cannot simply switch off an AI tool that has been embedded across multiple command structures, secure cloud environments, and operational workflows without risking significant disruption to ongoing missions. In a conflict environment, that disruption carries real risks.

This situation raises questions that are not going away. When an AI company sets ethical limits on the use of its technology, who has the authority to override those limits? Is it the government that funds the contracts? Is it a court? Is it the market? The designation of Anthropic as a security risk suggests that the current US administration's answer is that the government retains that authority absolutely. A private company that refuses government demands for unrestricted access to its tools will be treated not as a responsible actor exercising reasonable caution, but as a threat.

That framing is troubling for reasons that extend well beyond Anthropic. If safety-focused AI companies can be classified as national security risks for refusing to remove guardrails against mass surveillance and autonomous weapons, the incentive structure for the entire industry shifts. Companies that want to keep government contracts will need to decide whether to comply with demands for unrestricted use or risk the kind of treatment Anthropic has received. Companies that resist may find themselves locked out of government markets entirely.

The AI industry has long debated whether it is possible to develop transformative technology responsibly, to build systems that are genuinely useful for complex and high-stakes tasks while maintaining meaningful limits on their most dangerous potential applications. The events of February 28, 2026, did not resolve that debate. But they moved it out of conference rooms and research papers and into the middle of an active military conflict. The implications will take years to fully understand.

What is clear right now is this. Artificial intelligence is no longer a future concern for military planners, policymakers, and ethicists. It is present in intelligence analysis, it is present in target selection, and it is present in battlefield simulation during active operations. The tools are deployed, the contracts are signed, and the geopolitical pressures that drive their use are intensifying. The question of who gets to set the rules for how those tools are used, and what happens when a company says no, is now one of the defining questions of the AI era.
