
Anthropic, Pentagon, Iran and the future of wars

Assif Shameen • 10 min read
Anthropic CEO Dario Amodei refused to drop the safety restrictions on its AI technology, stating his firm would rather not work with the Pentagon than compromise on its guard rails. Photo: Bloomberg

Two weeks ago, Israel and the US launched a massive coordinated attack on Iran that left its Supreme Leader Ayatollah Ali Hosseini Khamenei and its defence top brass dead. The attacks triggered an ongoing war in which Iran has retaliated with waves of missile and drone strikes across the Middle East, hitting Israel, Saudi Arabia, the United Arab Emirates, Bahrain and Qatar. The joint Israel-US strikes came just three hours after the expiry of an ultimatum from the US Department of Defense, or the Pentagon, requiring AI start-up Anthropic to allow “any lawful use” of its AI technology without additional restrictions, or face blacklisting and the designation of its technology as a “supply chain risk” — a classification until recently reserved for foreign adversaries like China’s tech conglomerate Huawei, which reportedly has close ties to the People’s Liberation Army.

When Anthropic CEO Dario Amodei refused to drop the safety restrictions on its AI technology, stating his firm would rather not work with the Pentagon than compromise on its guard rails, President Donald Trump announced that his administration would cancel all of Anthropic’s US military contracts and steer federal agencies away from Claude, the company’s powerful and widely acclaimed AI platform for advanced reasoning, coding and natural conversation. The White House and Pentagon announced that the military and US government agencies had just six months to phase out Anthropic and replace it with AI tools from rival firms such as OpenAI and billionaire Elon Musk’s xAI, now part of SpaceX.

Just eight months earlier, Anthropic had signed a US$200 million ($256 million) contract with the Pentagon to deploy Claude within the military’s classified systems. The deal included specific usage restrictions — Anthropic’s terms of service prohibited use for mass domestic surveillance and fully autonomous weapons. At that time, neither the Pentagon nor Defense Secretary Pete Hegseth raised any objection to the terms of service.

In the aftermath of the attacks that killed the Ayatollah, it quickly became clear that Anthropic’s AI tools were playing a key role in Iran: the Pentagon is leveraging Claude against Iran despite its bitter feud with the AI start-up. The Washington Post reported that the US military relied on Claude for intelligence and battlefield simulations during the initial barrage of strikes because the technology was too “embedded” to be removed instantly. Among other things, Claude played a significant role in helping the US military decide precisely where to strike. Ironically, the replacement deal the Pentagon signed with OpenAI included the very restrictions Anthropic was demanding: “No mass domestic surveillance, no fully autonomous weapons without human oversight.” The Pentagon hastily accepted from OpenAI what it had punished Anthropic for requesting. The fact that the US military is still using Claude underscores the Pentagon’s challenge in giving up such an effective AI tool even as it blacklists its maker.

Anthropic’s high-stakes dispute with the Pentagon began after America’s early-January military strike and special operations that led to the capture of Venezuelan President Nicolás Maduro and his wife, Cilia Flores, who were brought to New York to face charges of narco-terrorism. It turns out that the US government relied on Anthropic’s AI tools, as well as Palantir’s software and data analytics, to attack Venezuela and snatch Maduro from the capital Caracas. An Anthropic employee raised concerns with software partner Palantir about how Claude was used in that operation, a complaint that alarmed the Pentagon. Hegseth reportedly reacted angrily and issued a memo directing all AI companies to immediately remove restrictions and amend their terms of service.

For its part, Anthropic argued that the Pentagon’s proposed language was “paired with legalese that would allow those safeguards to be disregarded at will”. CEO Amodei said his firm could not “in good conscience” accede to the request, arguing that uses like mass surveillance and autonomous weapons are “outside the bounds of what today’s technology can safely and reliably do”. As Trump began talking up an impending attack on Iran if it did not agree to dismantle its nuclear programme, Amodei requested specific guard rails for military use of its AI tools, reiterating restrictions on the usage of Claude to conduct mass surveillance and prohibitions on autonomous weapons systems.

The spat represents a fundamental clash between national security priorities and AI safety principles. The Pentagon views Anthropic’s restrictions as limiting military AI capabilities, while the company argues these safeguards are essential to prevent misuse of powerful AI systems in warfare and surveillance.

Profound transformation of wars

Here is why the stance of an AI start-up like Anthropic matters so much to the Pentagon in the midst of a war in Iran. There has been a profound transformation of war and of what matters most on the battlefield. In the traditional hardware era, tanks, armoured cars and fighter jets conferred the strategic advantage — a sharp contrast with the current era, in which software, data, satellite broadband connectivity, satellite imagery, drone swarms, AI-assisted targeting and real-time intelligence sharing give warring nations the edge. AI can quickly summarise intelligence, generate target shortlists, rank high-priority threats and even recommend strikes. “Software-defined warfare”, in which data analysis, automated targeting and cyber operations paralyse opponents without direct, large-scale physical engagement, has changed how wars are fought.
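To see what “generate target shortlists, rank high-priority threats” means in pipeline terms, here is a deliberately simplified Python sketch. The fields, weights and candidates are invented for illustration; no real military system works from a three-field score like this.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    threat: float      # estimated threat level, 0-1
    confidence: float  # confidence in the underlying intelligence, 0-1
    collateral: float  # estimated collateral-damage risk, 0-1

def priority(c: Candidate) -> float:
    # Weight threat by intelligence confidence, penalise collateral risk.
    # The 0.5 weight is arbitrary, chosen only for this illustration.
    return c.threat * c.confidence - 0.5 * c.collateral

candidates = [
    Candidate("air-defence radar", threat=0.90, confidence=0.80, collateral=0.10),
    Candidate("supply depot",      threat=0.60, confidence=0.90, collateral=0.30),
    Candidate("launch site",       threat=0.95, confidence=0.50, collateral=0.20),
]

# Present a ranked shortlist, highest priority first, for human review.
for c in sorted(candidates, key=priority, reverse=True):
    print(f"{c.name}: score {priority(c):.2f}")
```

Real systems fuse vastly richer data, but the principle is the same: a scoring model turns raw intelligence into a ranked queue that a human commander is asked to approve.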

“The future of warfare will require advanced hardware, AI-enabled software, autonomy, cyber capabilities, and scaled manufacturing processes,” Morgan Stanley noted in a recent report. This year, the US will spend US$960 billion on defence, and Trump wants to boost annual spending to over US$1.5 trillion within a couple of years. Powering the widespread disruption are VC-backed firms like Anduril Industries and Shield AI that combine AI and software with hardware to build next-generation weapons.

Yet, it isn’t just about “hardware versus software”; it is much more about the pace of integration. The US military has the concept of “mosaic warfare”: making an enemy fight an unexpectedly large, asymmetric volume and variety of weaponry and platforms. China, for its part, has “intelligentised warfare”, an advanced form of combat primarily driven by AI and autonomous systems. It uses AI to accelerate decision-making, enhance battlefield awareness and execute precision strikes, with the ultimate goal of paralysing an adversary’s system-of-systems. Whoever can connect sensors, shooters, commanders and data into a coherent system fastest will win. A slightly inferior missile guided by superior AI and better intelligence beats a far superior missile flying blind. Hardware sets the ceiling, but software, AI and intelligence determine how quickly, and how close, you get to victory.
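The point about the slightly inferior missile can be made concrete with a toy probability model; all figures below are invented purely for illustration.

```python
# Toy model: the probability a strike destroys the intended target is
# P(correct target identified) x P(weapon performs as designed).
# All numbers below are invented for illustration.
blind_superior = 0.60 * 0.95   # superior missile, poor targeting data
guided_inferior = 0.95 * 0.80  # inferior missile, superior AI targeting

print(f"Superior missile, flying blind: {blind_superior:.0%}")  # 57%
print(f"Inferior missile, AI-guided: {guided_inferior:.0%}")    # 76%
```

On these assumed numbers, better targeting data more than compensates for a less reliable weapon, which is exactly why the software layer has become the contested ground.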

Clearly, hardware will continue to matter. Hypersonic missiles, aircraft carriers and submarine fleets will define great-power competition at the strategic level. Sovereign nations defending their borders and superpowers flexing their muscles or policing the globe still need soldiers, and human judgment in complex environments remains irreplaceable, particularly in counter-insurgency, diplomacy and post-conflict stabilisation. AI and software are also only as good as the physical infrastructure they run on, such as satellites and communications networks, which is hardware. Yet, future wars will be won by those who have the software, the data and the AI tools to harness that data effectively to precisely target people or key assets.

The speed of AI is creating a phenomenon known as “decision compression”. Because AI can suggest hundreds of targets in seconds — far quicker than “the speed of thought” — there are growing ethical concerns that human commanders may become “rubber stamps” who merely approve machine-generated strike plans without having the time to evaluate the humanitarian risks.

In the ongoing war in Iran, AI has played a decisive role in what is called “shortening the kill chain”. The US military reportedly used a combination of Palantir’s Maven Smart System and Claude AI to analyse petabytes of satellite imagery and signals intelligence. This system identified and prioritised roughly 1,000 targets in the first 24 hours alone. The Israel Defense Forces utilised its own proprietary AI systems, such as the Gospel (Habsora) and Lavender, which were previously deployed in Gaza and are now being leveraged to automate target recommendations against Iranian military infrastructure.
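Those reported figures make the decision compression described above easy to quantify. A back-of-envelope calculation (the five-minute human review time is an assumption, not a reported number) shows how little time commanders actually get per target:

```python
# Back-of-envelope arithmetic on decision compression, using the
# reported figure of roughly 1,000 targets in the first 24 hours.
targets = 1_000
window_hours = 24
review_minutes_each = 5  # assumed time for a meaningful human review

seconds_available = window_hours * 3600 / targets
hours_needed = targets * review_minutes_each / 60

print(f"Time available per target: {seconds_available:.0f} seconds")  # 86
print(f"Time needed at 5 min each: {hours_needed:.0f} hours")         # 83
```

At roughly 86 seconds available per target against the 83 hours that genuine review would take, the “rubber stamp” concern is arithmetic rather than rhetoric.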

The dispute between Anthropic and the Pentagon has become a defining moment for the Claude creator’s brand identity around AI safety, attracting user support even as the company loses government contracts. It underscores the growing tension between military AI development and responsible AI governance. Anthropic’s refusal to compromise on safeguards reflects its stated commitment to safety-first AI development, which many analysts view as principled. Right-wing supporters of Trump counter that cutting-edge AI tools such as those created by Anthropic and deployed by the Pentagon are necessary to keep pace with adversaries like China, and that any constraints will hinder military effectiveness. Indeed, Under Secretary of War Emil Michael went so far as to accuse Amodei of having a “God complex” and, in effect, endangering the nation.

AI ethics

The war of words between the Pentagon and Anthropic highlights the widening gap between AI firms trying to enforce ethical safeguards and a Defense Department that wants unfettered access to technology to maintain a “wartime footing”. The conflict is seen as a defining moment in AI ethics, essentially forcing a choice between, as one commentator put it, “killer robots” and safety-conscious AI development.

Despite the potential financial blow, the standoff with the Pentagon has bolstered Anthropic’s reputation among those favouring AI safety, triggering a surge in public support and consumer adoption. Claude climbed to the top spot in Apple’s App Store, from No 42 just four weeks earlier. Its total free users have grown more than 60% since end-January, while paid subscribers have more than doubled in the first 10 weeks of this year. Claude’s global downloads in a single day exceeded 500,000 on Feb 28, the day after the Defense Department designated Anthropic a “supply chain risk”.

Not surprisingly, OpenAI was widely accused of “opportunism” and has seen its reputation damaged among safety-focused users. OpenAI’s own employees flooded internet forums with posts saying they “really respect” Anthropic for standing up to the Pentagon. Even CEO Sam Altman later acknowledged that the Pentagon deal “looked opportunistic and sloppy” and said OpenAI “shouldn’t have rushed” it.

Standing up to the government and fighting for privacy and safety can be a good business decision. A decade ago, Apple refused to build an encryption backdoor after the FBI demanded access to an iPhone during its investigation of the 2015 San Bernardino terrorist shooting. Apple CEO Tim Cook warned that forcing the iPhone maker to weaken its security would create a slippery slope, making it easier to one day compel it to build surveillance software and “undermine the very freedoms and liberty our government is meant to protect”. The stand-off lasted for weeks. Eventually, the FBI found another way to access the data inside the iPhone, but the battle against the government helped solidify Apple’s reputation as a privacy-focused company and built customer trust that has lasted for over a decade.

Anthropic, which has a distinctive “constitutional AI” alignment approach, is also known for nuanced reasoning, long context and strong writing. It is regarded as more research-focused, with a penchant for publishing safety-oriented work. Anthropic sees itself as a business-to-business AI firm, while OpenAI views itself as the AI era’s Apple, with a focus on consumers. And while Anthropic’s business model is purely subscription-based, OpenAI is pursuing a hybrid advertising and subscription model. What’s next? Over the next few months, Anthropic and the Pentagon will likely strike some form of compromise. If America wants to win the next big war, it will need to go with a national champion like Anthropic that has winning AI tools, rather than fight with its hands tied behind its back.

Assif Shameen is a technology and business writer based in North America
