December 6, 2023

The Pentagon’s Rush to Deploy AI-Enabled Weapons Is Going to Kill Us All

While experts warn about the risk of human extinction, the Department of Defense plows full speed ahead.

Michael T. Klare

Attendees interact with a SoftBank Group Corp. Pepper humanoid robot at the COP28 climate conference.

(Annie Sakkab / Bloomberg via Getty Images)

The recent boardroom drama over the leadership of OpenAI—the San Francisco–based tech startup behind the immensely popular ChatGPT computer program—has been described as a corporate power struggle, an ego-driven personality clash, and a strategic dispute over the release of more capable ChatGPT variants. It was all that and more, but at heart it represented an unusually bitter fight between those company officials who favor unrestricted research on advanced forms of artificial intelligence (AI) and those who, fearing the potentially catastrophic outcomes of such endeavors, sought to slow the pace of AI development.

At approximately the same time as this epochal battle was getting under way, a similar struggle was unfolding at the United Nations in New York and government offices in Washington, D.C., over the development of autonomous weapons systems—drone ships, planes, and tanks operated by AI rather than humans. In this contest, a broad coalition of diplomats and human rights activists have sought to impose a legally binding ban on such devices—called “killer robots” by opponents—while officials at the Departments of State and Defense have argued for their rapid development.

At issue in both sets of disputes are competing views over the trustworthiness of advanced forms of AI, especially the “large language models” used in “generative AI” systems like ChatGPT. (Programs like these are called “generative” because they can create human-quality text or images based on a statistical analysis of data culled from the Internet.) Those who favor the development and application of advanced AI—whether in the private sector or the military—claim that such systems can be developed safely; those who caution against such action say it cannot, at least not without substantial safeguards.

Without going into the specifics of the OpenAI drama—which ended, for the time being, on November 21 with the appointment of new board members and the return of AI whiz Sam Altman as chief executive after being fired five days earlier—it is evident that the crisis was triggered by concerns among members of the original board of directors that Altman and his staff were veering too far in the direction of rapid AI development, despite pledges to exercise greater caution.

As Altman and many of his colleagues see things, human technicians are on the verge of creating “general AI” or “superintelligence”—AI programs so powerful they can duplicate all aspects of human cognition and program themselves, making human programming unnecessary. Such systems, it is claimed, will be able to cure most human diseases and perform other beneficial miracles—but also, detractors warn, will eliminate most human jobs and may, eventually, choose to eliminate humans altogether.

“In terms of both potential upsides and downsides, superintelligence will be more powerful than other technologies humanity has had to contend with in the past,” Altman and his top lieutenants wrote in May. “We can have a dramatically more prosperous future; but we have to manage risk to get there.”

For Altman, as for many others in the AI field, that risk has an “existential” dimension, entailing the possible collapse of human civilization—and, at the extreme, human extinction. “I think if this technology goes wrong, it can go quite wrong,” he told a Senate hearing on May 16. Altman also signed an open letter released by the Center for AI Safety on May 30 warning of the possible “risk of extinction from AI.” Mitigating that risk, the letter avowed, “should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.”

Nevertheless, Altman and other top AI officials believe that superintelligence can, and should, be pursued, so long as adequate safeguards are installed along the way. “We believe that the benefits of the tools we have deployed so far vastly outweigh the risks, but ensuring their safety is vital to our work,” he told the Senate Subcommittee on Privacy, Technology, and the Law.

Washington Promotes the “Responsible” Use of AI in Warfare

A similar calculus regarding the exploitation of advanced AI governs the outlook of senior officials at the Departments of State and Defense, who argue that artificial intelligence can and should be used to operate future weapons systems—so long as it is done in a “responsible” manner.

“We cannot predict how AI technologies will evolve or what they might be capable of in a year or five years,” Amb. Bonnie Jenkins, under secretary of state for arms control and international security, declared at a November 13 UN presentation. Nevertheless, she noted, the United States was determined to “put in place the necessary policies and to build the technical capacities to enable responsible development and use [of AI by the military], no matter the technological advancements.”

Jenkins was at the UN that day to unveil a “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy,” a US-inspired call for voluntary restraints on the development and deployment of AI-enabled autonomous weapons. The declaration avows, among other things, that “States should ensure that the safety, security, and effectiveness of military AI capabilities are subject to appropriate and rigorous testing,” and that “States should implement appropriate safeguards to mitigate risks of failures in military AI capabilities, such as the ability to… deactivat[e] deployed systems, when such systems demonstrate unintended behavior.”

None of this, however, constitutes a legally binding obligation for states that sign the declaration; rather, it entails only a promise to abide by a set of best practices, with no requirement to demonstrate compliance and no penalty for failing to do so.

Although several dozen countries—mostly close allies of the United States—have signed the declaration, many other nations, including Austria, Brazil, Chile, Mexico, New Zealand, and Spain, insist that voluntary compliance with a set of US-designed standards is insufficient to protect against the dangers posed by the deployment of AI-enabled weapons. Instead, they seek a legally binding instrument setting strict limits on the use of such systems or banning them altogether. For these actors, the risk of such weapons “going rogue” and conducting unauthorized attacks on civilians is simply too great to allow their use in combat.

“Humanity is about to cross a major threshold of profound importance when the decision over life and death is no longer taken by humans but made on the basis of pre-programmed algorithms. This raises fundamental ethical issues,” Amb. Alexander Kmentt, Austria’s chief negotiator for disarmament, arms control, and nonproliferation, told The Nation.

For years, Austria and a slew of Latin American countries have sought to impose a ban on such weapons under the aegis of the Convention on Certain Conventional Weapons (CCW), a 1980 UN treaty that aims to restrict or prohibit weapons deemed to cause unnecessary suffering to combatants or to affect civilians indiscriminately. These countries, along with the International Committee of the Red Cross and other non-governmental organizations, claim that fully autonomous weapons fall into this category, as they will prove incapable of distinguishing between combatants and civilians in the heat of battle, as required by international law. Although a majority of parties to the CCW appear to share this view and favor tough controls on autonomous weapons, decisions by signatory states are made by consensus, and a handful of countries, including Israel, Russia, and the United States, have used their veto power to block adoption of any such measure. This, in turn, has led advocates of regulation to turn to the UN General Assembly—where decisions are made by majority vote rather than consensus—as an arena for future progress on the issue.

On October 12, for the first time ever, the General Assembly’s First Committee—responsible for peace, international security, and disarmament—addressed the dangers posed by autonomous weapons, voting by a wide majority—164 to 5 (with 8 abstentions)—to instruct the secretary-general to conduct a comprehensive study of the matter. The study, to be completed in time for the next session of the General Assembly (in fall 2024), is to examine the “challenges and concerns” such weapons raise “from humanitarian, legal, security, technological, and ethical perspectives and on the role of humans in the use of force.”

Although the UN measure does not impose any binding limitations on the development or use of autonomous weapons systems, it lays the groundwork for the future adoption of such measures, by identifying a range of concerns over their deployment and by insisting that the secretary-general, when conducting the required report, investigate those dangers in detail, including by seeking the views and expertise of scientists and civil society organizations.

“The objective is obviously to move forward on regulating autonomous weapons systems,” Ambassador Kmentt indicated. “The resolution makes it clear that the overwhelming majority of states want to address this issue with urgency.”

What will occur at next year’s General Assembly meeting cannot be foretold, but if Kmentt is right, we can expect a much more spirited international debate over the advisability of allowing the deployment of AI-enabled weapons systems—whether or not participants have agreed to the voluntary measures being championed by the United States.

At the Pentagon, It’s Full Speed Ahead

For officials at the Department of Defense, however, the matter is largely settled: the United States will proceed with the rapid development and deployment of numerous types of AI-enabled autonomous weapons systems. This was made evident on August 28, with the announcement of the “Replicator” initiative by Deputy Secretary of Defense Kathleen Hicks.

Noting that the United States must prepare for a possible war with China’s military, the People’s Liberation Army (PLA), in the not-too-distant future, and that US forces cannot match the PLA’s weapons inventories on an item-by-item basis (tank-for-tank, ship-for-ship, etc.), Hicks argued that the US must be prepared to overcome China’s superiority in conventional measures of power—its military “mass”—by deploying “multiple thousands” of autonomous weapons.

“To stay ahead, we’re going to create a new state of the art—just as America has before—leveraging attritable [i.e., disposable], autonomous systems in all domains,” she told corporate executives at a National Defense Industrial Association meeting in Washington. “We’ll counter the PLA’s mass with mass of our own, but ours will be harder to plan for, harder to hit, harder to beat.”

In a follow-up speech, delivered on September 6, Hicks provided (slightly) more detail on what she called all-domain attritable autonomous (ADA2) weapons systems. “Imagine distributed pods of self-propelled ADA2 systems afloat…packed with sensors aplenty…. Imagine fleets of ground-based ADA2 systems delivering novel logistics support, scouting ahead to keep troops safe…. Imagine flocks of [aerial] ADA2 systems, flying at all sorts of altitudes, doing a range of missions, building on what we’ve seen in Ukraine.”

As per official guidance, Hicks assured her audience that all these systems “will be developed and fielded in line with our responsible and ethical approach to AI and autonomous systems.” But except for that one-line nod to safety, all the emphasis in her talks was on smashing bureaucratic bottlenecks in order to speed the development and deployment of autonomous weapons. “If [these bottlenecks] aren’t tackled,” she declared on August 28, “our gears will still grind too slowly, and our innovation engines still won’t run at the speed and scale we need. And that, we cannot abide.”

And so, the powers that be—in both Silicon Valley and Washington—have made the decision to proceed with the development and utilization of even more advanced versions of artificial intelligence despite warnings from scientists and diplomats that the safety of these programs cannot be assured and that their misuse could have catastrophic consequences. Unless greater effort is made to slow these endeavors, we may well discover what those consequences might entail.

Michael T. Klare

Michael T. Klare, The Nation’s defense correspondent, is professor emeritus of peace and world-security studies at Hampshire College and senior visiting fellow at the Arms Control Association in Washington, DC. Most recently, he is the author of All Hell Breaking Loose: The Pentagon’s Perspective on Climate Change.
