March 19, 2026

AI Goes to War

Automated targeting, autonomous weapons, and nuclear decision-making.

Michael T. Klare
Activists place signs featuring AI robot dogs on the grounds of the National Mall to protest OpenAI’s decision to allow the Pentagon to use its AI technologies in developing autonomous weapons, on March 6, 2026. (Heather Diehl / Getty Images)

Last July, the Pentagon’s chief digital and artificial intelligence officer, Doug Matty, announced awards of $200 million each to four of America’s leading tech companies—Anthropic, Google, OpenAI, and xAI—to supply advanced AI models to the Department of Defense. “Leveraging commercially available solutions into an integrated capabilities approach will accelerate the use of advanced AI as part of our Joint mission essential tasks in our warfighting domain,” Matty said when announcing the awards. Beyond this, very little information was provided about the awards, except that they were intended to exploit recent advances in generative AI—sophisticated software that can digest vast amounts of data and provide operators with suggested courses of action.

In the months that followed, the Pentagon continued to impose a shroud of secrecy over the multimillion-dollar AI awards, citing national security considerations. At the end of February, however, this shroud was broken, at least in part, when Anthropic insisted on imposing certain limits on the military use of Claude, its premier AI model. “I believe deeply in the existential importance of using AI to defend the United States and other democracies,” Anthropic CEO Dario Amodei affirmed on February 26. “However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values.” These included, he noted, the use of AI in “mass domestic surveillance” and the creation of “fully autonomous weapons,” or self-guided combat drones.

Senior Pentagon officials responded to Amodei’s statement by insisting that they had no intention of using AI for domestic surveillance and that unmanned weapons systems will always remain under human oversight. They affirmed, however, that private firms like Anthropic could not impose restrictions on how the Pentagon employs AI. “We won’t have any BigTech company decide Americans’ civil liberties,” declared Emil Michael, the undersecretary of defense for research and engineering. At the same time, however, Michael broadened the discussion by identifying another potential use of AI: to help shoot down enemy missiles in a nuclear war. Would Anthropic oppose Claude’s use in nuclear operations? Michael asked Amodei during one set of negotiations. (Amodei reportedly said no.)

The Anthropic-Pentagon fight has shed considerable light on the military’s fraught relationship with the tech giants of Silicon Valley. The dispute also demonstrated the Trump administration’s fierce determination to employ AI for strategic advantage, despite widespread concerns over its safety. But however significant in their own right, these aspects of the Anthropic-Pentagon dispute are not the most important to have been unveiled. What is more revealing, in the long term, is what it tells us about the uses to which AI is being put by the US military. As suggested by Amodei’s concerns and Undersecretary Michael’s retort, there are three areas we should be looking at: the use of AI in mass surveillance and automated targeting; lethal autonomous weapons systems; and the integration of AI into nuclear weapons control systems.


Surveillance and Targeting

When the Department of Defense first explored the utilization of artificial intelligence for military use, in 2017, its focus was highly specific: to reduce the cognitive burden of human drone pilots conducting search-and-kill missions against Middle Eastern insurgents by automating the task of searching through video footage for signs of enemy hideouts. To accomplish this mission, the Pentagon created the Algorithmic Warfare Cross-Functional Team, or Project Maven. The head of Maven, Air Force Lt. Gen. John (“Jack”) Shanahan, then turned to Google to generate the required software. When thousands of Google employees signed a petition opposing the company’s involvement in a military-oriented project of this sort, the company’s leadership chose to terminate its contract for Maven, and Shanahan reassigned the work to Palantir, a defense-oriented startup chaired by Peter Thiel, a conservative-leaning billionaire investor. Palantir then developed the algorithms that enabled Maven software to identify potential targets for attack by armed Predator drones.


Although intended originally for the task of identifying militant hideouts, Project Maven morphed over the years into a program for collating multiple streams of data—including news feeds, government records, and social media accounts—in order to identify the habits, family ties, and political views of potential adversaries. When made available to combat units, this information could then be used for lethal operations against hostile leaders and their subordinates.

In 2022, oversight responsibility for Project Maven was transferred to the National Geospatial-Intelligence Agency, a little-known Pentagon entity responsible for interpreting the imagery provided by satellites and surveillance aircraft, allowing Palantir to incorporate detailed maps into the software, now rechristened the Maven Smart System (MSS). At that time, the US Central Command (Centcom) was equipped with MSS, giving it access to detailed information on potential enemy targets throughout the Middle East. Was this technology used in designating targets during the recent US strikes on Iranian leaders? It is hard to imagine otherwise.

Now we come to Dario Amodei’s fears about domestic surveillance: Last August, US Immigration and Customs Enforcement (ICE) contracted with Palantir to employ its technology in seeking out undocumented immigrants for detention and deportation. Using a Palantir-designed system called ImmigrationOS, for Immigration Lifecycle Operating System, ICE can generate a dossier on potential deportation targets by drawing on passport records, Social Security files, IRS tax data, and other government databases. Recently, ICE has also begun using AI and facial recognition technology to identify and track anti-ICE protesters for possible arrest and prosecution as “domestic terrorists.” So far, there is no indication that the Department of Defense has joined this effort, but Amodei clearly has good reason to fear the utilization of AI in domestic surveillance operations.


Lethal Autonomous Weapons Systems


The other area of concern identified by Amodei, autonomous weapons, also entails significant dangers. Spurred by the widespread use of combat drones in Gaza, Ukraine, and other recent conflicts, the Pentagon has sought to field a wide array of unmanned weapons systems—unmanned aerial vehicles (UAVs), unmanned ground vehicles, unmanned surface vessels, and unmanned subsea vessels. Such devices, it is widely believed, can be deployed in especially hazardous front-line operations, thereby reducing the risk to human combatants.

At present, most of the unmanned combat systems in US arsenals are designed to be remotely controlled by human operators. Although the employment of such systems would reduce the exposure of their human operators to enemy fire, it would not reduce the cognitive demand of such operations nor allow for the massing of unmanned vehicles in offensive assaults. To overcome this deficiency, Pentagon officials seek to invest drones with a high degree of autonomy, allowing them to operate in swarms with minimal human oversight. Under a program called Collaborative Operations in Denied Environment (CODE), the Defense Advanced Research Projects Agency (DARPA) has developed software enabling groups of UAVs to “find, track, identify, and engage targets” on their own, so long as they abide by preset “rules of engagement.”

Official Pentagon policy, as articulated in Department of Defense Directive 3000.09, stipulates that autonomous weapons “will be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.” Many analysts fear, however, that this wording provides too much leeway for the military services to employ DARPA’s technology in a way that greatly diminishes human oversight. This, in turn, could result in the unintended slaughter of civilians by “rogue” autonomous weapons. “Without proper safeguards, AI models could cause all kinds of unintended harm,” former undersecretary of defense Michèle Flournoy wrote in Foreign Affairs. “Rogue systems could even kill US troops or unarmed civilians in or near areas of combat.”

In recognition of this danger, a coalition of human rights organizations, the Campaign to Stop Killer Robots, and many governments have called for a legally binding international ban on the development and deployment of autonomous weapons. However, the United States (along with Israel and Russia) has opposed any such constraints, claiming that unilateral measures, notably Directive 3000.09, are sufficient to prevent misuse. Here again, Amodei’s reluctance to trust the Pentagon in this regard is telling.


Nuclear Command and Control

Finally, there was that brief interchange between Amodei and Michael regarding the use of AI in nuclear weapons command, control, and communications, or NC3. Neither figure elaborated on this aspect of AI’s use by the military, but it is the one deserving of our greatest concern.

The existing NC3 architecture was created during the Cold War era to ensure that the president receives notice of an impending enemy nuclear strike and is able to order a commensurate counterattack. Many of these systems incorporate obsolete technology, and the entire NC3 system is being modernized at an estimated cost of $154 billion over the next 10 years. As part of this modernization, AI is being integrated into every aspect of NC3, potentially diminishing the role of humans in nuclear decision-making.

From what can be determined from unclassified sources, AI will be used to calculate the trajectory of enemy missiles and help interceptor missiles collide with them. (A failed attempt at such an interception is portrayed in the Netflix movie A House of Dynamite.) Once an enemy assault is detected, moreover, AI will be used to generate possible US responses, ranging from limited counterstrikes to full-scale retaliation. The danger is that AI programs will miscalculate the nature and extent of enemy actions and/or generate excessively escalatory courses of action, deterring leaders from seeking alternatives to mutual annihilation.

Advanced AI models like Claude, ChatGPT, and Meta’s Llama are capable of many wondrous feats but are also known to malfunction at times, producing fabricated responses, or “hallucinations,” when prompted by human interrogators. When tested in war games, moreover, all of these models have displayed a tendency to favor escalatory actions in a crisis situation, including the precipitous use of nuclear weapons. It is absolutely essential, then, that humans retain oversight over every step in the nuclear decision-making process.


Michael T. Klare, The Nation’s defense correspondent, is professor emeritus of peace and world-security studies at Hampshire College and senior visiting fellow at the Arms Control Association in Washington, DC. Most recently, he is the author of All Hell Breaking Loose: The Pentagon’s Perspective on Climate Change.
