Feature / October 20, 2025

Trump’s AI Deregulation Is His Oppenheimer Moment

He has chosen to unleash a powerful and potentially cataclysmic new technology on the world with no regard for consequences.

Michael T. Klare

During the summer of 1945, the leaders of the Manhattan Project in Los Alamos, New Mexico, faced a momentous decision. The original motive for developing the atomic bomb—the need to counter a possible German A-bomb—had evaporated in May with the end of the war in Europe, and many atomic scientists were opposed to the use of such a weapon on Japan (then already on the brink of surrender). Nonetheless, top officials at Los Alamos, led by J. Robert Oppenheimer, chose to accelerate work on the bomb, enabling the fateful attacks on Hiroshima and Nagasaki. In doing so, they knowingly ignited a global nuclear arms race that persists to this day.

We might think of this deeply consequential decision as an “Oppenheimer moment”—a time when senior officials choose to unleash a powerful and potentially cataclysmic new technology on the world without knowing the consequences or adopting rigorous safeguards beforehand.

Now, in perhaps the most significant Oppenheimer moment since 1945, President Donald Trump has chosen to unleash another powerful and potentially cataclysmic new technology on the world: superintelligent artificial intelligence. On July 23, at the “Winning the AI Race” summit in Washington, DC, Trump released his administration’s “AI Action Plan”—an official blueprint for the unfettered corporate development of “frontier” AI models with vast but unknown capabilities. Claiming that the United States is in an existential struggle to achieve AI dominance before its rivals do—language long used with respect to nuclear weapons—he insisted that the US must “win” the AI “race,” no matter the risks or the costs.

“America is the country that started the AI race. And as president of the United States, I’m here today to declare that America is going to win it,” Trump announced. “My administration will use every tool at our disposal to ensure that the United States can build and maintain the largest, most powerful, and most advanced AI infrastructure anywhere on the planet.”

To attain this objective, top industry and government officials believe, the United States must lead in the development of advanced or “frontier” AI models—those capable of feats far surpassing anything achieved by the original ChatGPT. Such models are expected to duplicate or outperform human cognition in many respects, such as by analyzing vast troves of data, identifying significant patterns, and devising (and then carrying out) responses to any identified threats or problems. These capabilities, it is claimed, will enable scientists to find cures for diseases and discover novel solutions to climate change—as well as equip robots with the capacity to locate and attack enemy forces on their own.

But the development of these frontier AI models poses two major challenges: the need for giant data centers and other computing infrastructure, along with massive amounts of electricity and water to keep them running; and the risk that the technology will fail, with unforeseeable but potentially calamitous consequences. Neither of these challenges was addressed in any meaningful way in the administration’s AI Action Plan, but they require our careful attention.

The need for mammoth computing capabilities (or “compute,” in industry lingo) derives from the fact that the large language models (LLMs) that will be used to develop advanced AI must be fed enormous amounts of data (think: everything ever posted on the Internet) in order to “train” them to recognize and respond to patterns in speech, writing, visual imagery, and so on. Storing all this raw data and enabling AI systems to sift through it millions and millions of times during the training process requires giant data centers with enormous banks of computer servers arranged in stacks, linked by endless miles of cables.

To improve on existing LLMs, the leading AI firms will need vastly increased compute power—which will require the construction of many more and substantially larger data centers. Many of the giant centers now being built by Google, Meta (Facebook), Amazon, Microsoft, and OpenAI are the size of a small airport, and some in the planning stage are said to be the size of a small city. Running all of those stacked servers requires enormous amounts of electricity, along with water to cool the machines. OpenAI, for example, plans to build five giant data centers with a combined electrical demand equivalent to that of 3 million households (roughly the number in the entire state of Massachusetts).

Those five companies are expected to spend $320 billion on new construction in 2025 alone. Yet hundreds of billions more will be needed to ensure that there will be enough compute power to underwrite the next big advances in AI. And then there are the potential limitations on the availability of energy and water. According to a recent report from the Lawrence Berkeley National Laboratory, AI data centers could account for as much as 12 percent of total US electricity consumption in 2028.

To acquire all that electricity, AI firms are seeking every available source of energy, including coal, natural gas, nuclear power, and renewables. But domestic energy production is not keeping pace with this rising demand, and the Trump administration is obstructing the expansion of renewable energy capacity. The scramble for advanced AI is therefore likely to result both in rising carbon emissions (as these firms consume more fossil fuels) and in increased competition with states and municipalities for electricity, driving up consumer prices.

Unleashing havoc: Trump proudly displays a signed executive order that will stymie all AI regulation. (Chip Somodevilla / Getty Images)

Proponents of advanced AI models, both in government and industry, claim that future systems will endow the United States with unprecedented wealth and health. “AI will enable Americans to discover new materials, synthesize new chemicals, manufacture new drugs, and develop new methods to harness energy,” the AI Action Plan asserts. Whether AI will actually achieve all these outcomes remains to be seen: Google, Meta, Amazon, and Microsoft are not in the business of solving our health problems or developing new (presumably climate-friendly) energy sources; rather, their overriding objective is to sell business products and services in order to recoup their colossal investments in compute power. But whatever the intended use of frontier models, we can be sure of one thing: AI will remain an unreliable, error-prone technology.

The public release of ChatGPT in 2022 generated wonder around the world, as ordinary citizens found themselves having human-like conversations with seemingly thoughtful machines. But the more that ChatGPT and other LLMs were put to the test, the more they demonstrated a propensity to produce false and nonsensical answers—called “hallucinations” by computer scientists. Engineers at OpenAI, the creator of ChatGPT, have labored to correct these tendencies, with little success. Hallucinations, it turns out, are an inherent property of the statistical prediction method that powers LLMs: It gets things right much of the time, but unfamiliar topics or imprecise prompts often trip it up, causing it to fabricate answers. Compounding the problem, LLMs cannot explain how they arrive at particular outputs, so efforts to diagnose and correct errors often prove fruitless.

These shortcomings are merely inconveniences if you’re simply asking AI to summarize a document or create a dinner recipe, but they become deeply worrisome when AI is being used to steer automobiles in traffic or—gasp—control nuclear weapons. AI-enabled self-driving cars, such as Teslas equipped with Autopilot, fail on occasion, sometimes causing fatal injuries. AI-governed weapons systems have also been known to fail: In June, an unmanned naval vessel started behaving erratically near a California harbor, capsizing another boat and sending its captain into the water (fortunately, there were no serious injuries). For these reasons, many AI experts have warned against the hasty adoption of advanced models.

“Deploying AI is an ongoing process that holds tremendous promise—and equally tremendous danger,” write Zachary Arnold and Helen Toner, of Georgetown University’s Center for Security and Emerging Technology, in AI Accidents: An Emerging Threat. “Today’s cutting-edge AI systems…often lack any semblance of common sense, can be easily fooled or corrupted, and fail in unexpected and unpredictable ways.”

Can we expect more advanced systems—with far greater capabilities—to be exempt from these kinds of failures? No one can answer this with certainty, but many computer scientists have warned that superintelligent AI could cause unpredictable and catastrophic harm, such as diverting all available electricity into further computing (with the resulting collapse of human civilization) or precipitating an unintended nuclear war. In March 2023, for example, more than 1,000 prominent AI developers signed an open letter calling for a pause in the development of advanced models, warning that in the absence of meaningful restraints, such systems pose “profound risks to society and humanity.”

At the extreme end of those risks is a Terminator-like scenario in which superintelligent AIs choose to eliminate human beings or, Matrix-like, reduce them to slaves. But other, more down-to-earth risks abound, such as the elimination of white-collar jobs (including the coding tasks that make AI possible) or the collapse of entire industries (and the livelihoods of the workers they sustain). Many AI systems have also been found to display racial and gender biases (a product of the unrepresentative nature of the data sets fed to advanced algorithms during their training phase) and to amplify hate speech.

In response to such concerns, President Joe Biden’s administration adopted a number of measures aimed at ensuring government oversight of frontier AI models. In October 2023, Biden signed Executive Order 14110, “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” with this in mind. Warning that the “irresponsible use [of AI] could exacerbate societal harms such as fraud, discrimination, bias, and disinformation; displace and disempower workers; stifle competition; and pose risks to national security,” the order required AI companies to conduct “red-teaming,” or adversarial stress-testing of their advanced AI models, to identify (and correct) any potential failures.

The damage done: J. Robert Oppenheimer (third from left) examines the Trinity test site after the bombings of Hiroshima and Nagasaki. (Trinity Atomic Test Site via Wikimedia)

Now we come to Trump’s Oppenheimer moment: the decision to eliminate all restraints on AI and to facilitate the development of superintelligent models. President Trump demonstrated his intent on January 23, when he signed an executive order rescinding Biden’s October 2023 measure and authorizing the unbridled development of advanced AI. The Trump order also mandated the development of an AI action plan within 180 days, resulting in the blueprint he embraced in July.

As Trump made clear when he announced the plan’s adoption, his administration will seek to eliminate all obstacles to the development of advanced AI. As part of this effort, he explained, the administration will work closely with the major AI firms to facilitate the rapid construction of giant data centers, regardless of environmental restraints, zoning provisions, or local regulations. “To ensure America maintains the world-class infrastructure,” he declared, “I will sign a sweeping executive order to fast-track federal permitting, streamline reviews, and do everything possible to expedite construction of all major AI infrastructure projects. And this will be done.”

To make sure that state and local officials do not stand in the way of this imperial edict, Trump indicated that his administration will punish any state or municipality that imposes limits on AI, such as those being considered by the California Legislature. “You can’t have a state with standards that are so high that it’s going to hold you up,” Trump told the industry officials in the audience. “You have to have a federal rule and regulation. Hopefully, you’ll have the right guy at this position [i.e., Trump] that’s going to supplant the states.”

Trump also made it clear that he is undeterred by talk of AI risks and AI biases. “As with any such breakthrough, this technology brings the potential for bad as well as for good, for peril as well as for progress,” he said. “But…it’s not going to be a reason for retreat from this new frontier. On the contrary, it is the more reason we must ensure it is pioneered first and best. We have to have the best, the first pioneer.”

After conducting the first test of an atomic explosive device on July 16, 1945, Oppenheimer and his associates at Los Alamos knew that they were about to inflict massive death and destruction on Japanese cities and that this act would spur other countries to seek similar capabilities—but they went ahead anyway.

Much the same can be said of Trump’s Oppenheimer moment in July 2025. Although Trump and the leaders of the industry know that the creation of frontier AI models “brings the potential for bad as well as for good,” this has not deterred them from proceeding with their development. They also know that no matter how many trillions of dollars the US spends on mammoth data centers and other AI infrastructure, America’s rivals will be able to match US progress in a relatively short amount of time, and probably for less money. Nevertheless, they insist that the United States must always remain ahead in this AI arms race, no matter the cost.

We know the long-term consequences of Oppenheimer’s fateful choice: a world replete with nuclear weapons, some ready to be used at a moment’s notice. What the long-term consequences of Trump’s AI decision will be cannot be foreseen, but they are sure to be on an equal scale and to entail comparable perils. And just as we must work harder than ever to prevent future Hiroshimas, we must demand adequate safeguards on advanced AI models before they inflict equivalent damage.

Michael T. Klare

Michael T. Klare, The Nation’s defense correspondent, is professor emeritus of peace and world-security studies at Hampshire College and senior visiting fellow at the Arms Control Association in Washington, DC. Most recently, he is the author of All Hell Breaking Loose: The Pentagon’s Perspective on Climate Change.
