
Will AI Lead to Human Extinction?

We can create a world where artificial intelligence meets its spectacular promise to improve the lives of all humanity—but only if we work together.

Katrina vanden Heuvel

April 25, 2023

The ChatGPT website on a tablet. (Eduardo Parra / Europa Press via Getty Images)

ChatGPT, the new generative AI chatbot created by OpenAI, was released less than five months ago, but it has already become the fastest-growing Internet application in history. It’s almost powerful enough to do some white-collar jobs. New York Times columnist Farhad Manjoo says it’s already changed the way he does his.

For all the talk of AI’s potential, the people who understand it best are frightened by how quickly the technology is accelerating. On one hand, it could usher in an era of unprecedented human health and happiness. On the other, it has the potential for massive economic disruption, weakened national security, and the erosion of personal privacy.

It gets worse: In a survey of AI experts conducted last year, almost half said the chance that AI would lead to human extinction was 10 percent or more. Even Elon Musk, a founding member of OpenAI and someone rarely seen as concerned about consequences, said last week that AI “has the potential of civilization destruction.”

We are at an inflection point with AI. New technology demands new regulations. But AI development is outpacing regulators’ ability to act, or even to understand. To avert the worst possible outcomes, leaders need to listen to the experts who can anticipate the ramifications, and regulate now—before it’s too late.


We’ve seen this show before. Proponents of the Industrial and Digital Revolutions dangled utopian visions in front of our eyes while shunning the regulations that would have ensured safer and more equitable change. As a result, we’re still uncovering and cleaning up industrial waste sites from the 1800s, and only beginning to understand how our failure to regulate social media created crises of democracy and public health.

AI may be our most threatening revolution yet. Google CEO Sundar Pichai has said that the impact of AI will be “more profound than fire.” But even cavemen understood that playing with fire can have third-degree consequences.

The United States government is woefully ill-equipped to tackle the existential threat of AI. Republican Representative Jay Obernolte, the only member of Congress with a master’s degree in AI, has to explain to his colleagues that the threat “will not come from evil robots with red lasers.” Congress is clearly not ready to legislate.

But we can’t let the complexity of AI prevent us from acting today while we unravel the broader questions we’ll face tomorrow. As Representative Ted Lieu reminds us, we don’t fully understand all the ways pharmaceuticals improve health, but the FDA still regulates drug manufacturing to keep us safe.

An essential first step is to make sure AI doesn’t fall victim to the same monopolistic practices that undermined the first age of the Internet. FTC Chair Lina Khan is already gearing up the agency to watch out for antitrust violations to make sure the technology isn’t dangerously concentrated in too few hands.

The Biden administration has taken steps to ask questions and get better informed. The Commerce Department is seeking public comment as it considers measures to regulate AI. And last year, the White House unveiled a Blueprint for an AI Bill of Rights, but critics have noted that its protections are “toothless” and “uneven.”

As the government gathers information about this new technology, it cannot make the mistake of privileging the voices of the tech industry already lobbying against regulations. Instead, it should look to states like California and Texas, which are regulating AI tools to protect civil rights and demand transparency.


Better yet, look overseas. As usual, Europe is ahead of us when it comes to regulation. ChatGPT was released only five months ago, but several EU countries are already investigating privacy concerns. Italy has banned it outright.

A group of EU officials has called on President Biden to convene a summit to encourage collaboration on global solutions. The White House should accept that invitation.

Last week, two AI experts in The Economist called for an “International Agency for AI” to coordinate a global response—not unlike the International Atomic Energy Agency. But, as Robert Wright notes, the problem doesn’t have to reach nuclear levels to merit global governance. No country wants to be the first to step in and regulate. That’s why we should do it together.

Given the breathtaking speed at which AI is evolving, it’s unlikely that we will find a one-size-fits-all solution. Addressing the potential threat of AI will require the humility to admit what we don’t know, and a willingness to heed the criticism of skeptics as much as the promises of evangelists.

We can create a world where AI meets its spectacular promise to improve the lives of all humanity—but only if we work together, and use our unique capacity to make moral choices to develop a truly intelligent (read not artificial) response.

Katrina vanden Heuvel is editor and publisher of The Nation, America’s leading source of progressive politics and culture. An expert on international affairs and US politics, she is an award-winning columnist and frequent contributor to The Guardian. Vanden Heuvel is the author of several books, including The Change I Believe In: Fighting for Progress in the Age of Obama, and co-author (with Stephen F. Cohen) of Voices of Glasnost: Interviews with Gorbachev’s Reformers.
