Feature / April 7, 2026

The Pentagon Is Going “AI First”

The US military is placing the technology at the center of its mission, and the human costs promise to be staggering.

Janet Abou-Elias and William D. Hartung

As President Donald Trump’s administration has hurtled into a military conflict with Iran, the Pentagon has gone all in on artificial intelligence, both as a military tool in this and other possible conflicts and as a PR instrument in the quest for ever more of your tax dollars.

The Pentagon is accelerating the use of artificial intelligence across all of its mission areas, touting it as a revolutionary component of the emerging US military posture. The drive to apply AI as quickly as possible is behind the Trump War Department’s campaign to eliminate virtually all of the controls that would normally govern the introduction of a new technology. This approach is being framed as absolutely necessary for maintaining the US technological advantage over China and cementing US military dominance, but the haste with which regulations are being cast aside will almost certainly lead to flawed weapons systems, exorbitant prices, reduced accountability, and an accelerated AI arms race.

For the Pentagon, 2026 is the year of AI. On January 9, Secretary of War Pete Hegseth issued a memorandum directing the Pentagon to become an “AI-first” war-fighting institution. Three days later, Hegseth launched an “AI Acceleration Strategy” and then announced a sweeping overhaul of the department’s systems for researching, developing, and purchasing new weapons, which would include AI. These steps will formalize a system intended to produce next-generation technology at “wartime speed.”

At the center of the strategy are seven “Pace-Setting Projects,” or PSPs, designed to push AI into war fighting, intelligence, business practices, and data-processing functions within months rather than years. The initiatives range from AI-enabled battlefield-decision support and simulation tools to systems intended to convert intelligence into military action as rapidly as possible. Delays, risk aversion, and procedural safeguards are framed as liabilities; speed is all that counts.

The new AI acceleration strategy will give even greater power and influence to private companies by increasing the reliance on AI funding from venture-capital firms, forming new partnerships with emerging military-tech companies, and drawing up open-ended contracts to help ensure that military systems can incorporate the latest technology within weeks.

The shift in approach is already under way: The Army just awarded Salesforce a 10-year, $5.6 billion contract to provide AI-enabled systems for the so-called Department of War, which the company says will “increase mission readiness” by consolidating fragmented data sources into “one interoperable platform,” allowing war fighters to make “quicker, more effective decisions.”

Taken together, the steps outlined above will further centralize decision-making within the Pentagon and dispense with traditional checks against shoddy work and price gouging, inadequate as our current strictures are. It will be speed first, and other concerns be damned.

But with speed front and center, Hegseth’s January 9 memo offers no real guidance on how to meet crucial goals such as ensuring that the laws of armed conflict are followed, allowing time for adequate congressional oversight, or coordinating with allies.

By positioning AI as the foundation for US military dominance going forward, the new approach reflects a timeworn myth that has dominated US planning since World War II, an approach that equates technological advancement with security. But technology alone does not win wars. And past technological “miracles,” from the electronic battlefield in Vietnam to the reliance on networked warfare and precision-guided strike capabilities in Iraq and Afghanistan, have failed to achieve US military objectives, while causing immense harm to civilians in the target nations and to US combat personnel.

For example, the purported technological miracle of the Vietnam era was described by The New York Times as follows: “Gen. William C. Westmoreland, Army Chief of Staff, believes that the new electronics technology has brought the Army to the threshold of a new concept of the battlefield that may be as revolutionary in warfare as the introduction of the helicopter or the tank.” In the real world, the Vietcong developed a series of relatively simple countermeasures, and the new surveillance and targeting systems did not turn the tide in the war.

Even in the 1991 Gulf War, when the use of precision-guided munitions was credited with playing a central role in evicting Saddam Hussein’s invading forces from Kuwait, the story was more complicated. The coalition victory against Hussein’s forces had more to do with the volume of munitions dropped and the relative weakness of Iraqi air defenses than it did with networked warfare or precision strikes. An extensive analysis of the air war in the 1991 conflict by what was then known as the General Accounting Office (now the Government Accountability Office) pointed out that “the claim by [the Department of Defense] and contractors of a one-target, one-bomb capability for laser-guided munitions was not demonstrated in the air campaign where, on average, 11 tons of guided and 44 tons of unguided munitions were delivered on each successfully destroyed target.”

Without firm policy guardrails, AI may amplify risk rather than reduce it, putting more emphasis on hitting targets quickly than on why those targets are being chosen in the first place. The result could be more failed wars and more unnecessary suffering, not the much-touted revolution in US capabilities promised by Hegseth and Trump.

The Pentagon has made abundantly clear its urge to deploy AI for any and every purpose as soon as possible. Whether commonsense controls over its deployment or a realistic strategy governing its use will become part of the mix remains to be seen. Without a new approach to defining US interests and a sounder understanding of the limits of military force, rushing new technologies to the battlefield will only yield a more dangerous, less stable world.

Before going all in on AI, the US government should think more carefully about the human consequences of the current, deeply counterproductive strategies and actions this new technology is being deployed to advance.

Janet Abou-Elias

Janet Abou-Elias is a researcher at the Democratizing Foreign Policy Project at the Quincy Institute for Responsible Statecraft and a founder of Women for Weapons Trade Transparency. Her writing has appeared in The Hill, In These Times, Responsible Statecraft, The National Interest, Fair Observer, and other outlets.

William D. Hartung

William D. Hartung is a senior research fellow at the Quincy Institute for Responsible Statecraft.
