Tech companies claim artificial general intelligence systems will propel our society forward. But the benefits may not be worth the cost to our humanity.
OpenAI’s latest system, Sora, creates videos from text prompts. You write a prompt and Sora does the rest. The result: highly photorealistic videos and endearing animations, from mammoths sauntering through a snowy meadow to a bustling market in Lagos, Nigeria, in the year 2056. Nothing seems too wild for Sora’s imagination.
Sora is not yet available to the wider public, because OpenAI is first letting “red teamers” assess the product’s potential for abuse. OpenAI has also asked creative professionals for feedback on how Sora can benefit them, framing it as a tool that could enhance their output.
At the same time, however, technologies like Sora may render many creatives irrelevant. Just last month, famed director and producer Tyler Perry halted an $800 million studio expansion after seeing what Sora is capable of. The technology would eliminate the need for him to build sets or travel to film on location. “I can sit in an office and do this with a computer, which is shocking to me,” he told The Hollywood Reporter.
It should come as no surprise that advances in AI have sparked debates about the future of work, as well as concerns about misinformation, bias, and copyright infringement. Responding to the backlash, tech companies have (virtue-)signaled consideration for these issues by stressing the importance of Ethical AI, Responsible AI, or Trustworthy AI and developing guidelines for these noble goals. Indeed, OpenAI’s website includes the disclaimer that despite extensive testing, it cannot predict all possible benefits and harms of its technology, and that (ironically) it is critical to release (potentially harmful) AI into the real world in order to increase AI’s safety over time.
Policymakers are also actively discussing how to regulate AI in order to prevent, or at least limit, the aforementioned problems. In fact, earlier this month the EU passed the landmark AI Act, which includes bans and restrictions related to biometric identification systems, manipulative uses of AI, and artificial general intelligence (AGI, or AI with human-level intelligence or above and the ability to self-teach).
While the widespread attention to AI ethics is to be applauded, it diverts attention from a deeper issue. Focusing on ways to regulate and improve AI reinforces a techno-deterministic narrative, one that does not question the overall desirability of technological advancement and assumes that AI and AGI are inevitable. However, we as a society must ask whether generative AI systems like Sora really bring about progress and whether these systems should be welcomed in the first place.
OpenAI claims “to ensure AGI benefits all of humanity,” but in reality it threatens the very concept of humanity. We are beyond AI just beating us at chess and Go, optimizing pricing, and recognizing faces. Although these previous milestones were impressive, shocking, and maybe even demeaning to those with a strong sense of human superiority, that technology “merely” automated routine and rule-based tasks. Now, as our preemptive concerns about superintelligence have slowly faded, AI has begun to master a realm many long thought was uniquely human—the realm of creativity and imagination.
Creativity and imagination are related but different phenomena. To create or be creative means to materialize a vision or idea into a painting, a movie, a song, and so on. To imagine goes a step further. Imagination requires the ability to develop that vision in the first place—to see what has not yet been manifested. Sora fills the gaps in our prompts. The model not only understands prompts (a point OpenAI repeatedly emphasizes on Sora’s site), but also imagines what the scenes described could and should look like. Yes, that imagination is based on training data, but so too is ours.
For humans, imagination takes time and space to develop. But in our capitalist society, which determines our value by productivity and volume, artists are told that AI will help them because it will let them create more content, faster. These promises of productivity make it easier to overlook the fact that AGI-driven intelligent systems are on track to have capacities equal to or better than humans’. AI’s imagination will likely become only more expansive in the years to come.
We should not let the promise of productivity or narrow debates about AI’s ethical implications distract us from the bigger picture. Under the guise of improving humanity by increasing productivity, we risk releasing our ultimate replacement.
We should not overestimate the durability of human skills in the face of technological advancements. Calculators decreased the need for mental math. GPS has limited independent exploration and map navigation. While some might be glad to let these skills go, there must be a limit to the qualities we are willing to outsource.
Given AI’s current skills and capacity for imagination, it seems plausible that AI will not propel humanity forward, as OpenAI claims, but instead threaten it by de-skilling us and rendering our unique features superfluous. That is why it’s past time that we discuss not only ethical implications like copyright breaches and algorithmic biases, but also what AI’s ability to imagine means for humanity. What does it mean to be human when our distinctive and characteristic skills and features are no longer uniquely ours?
It’s also crucial that we not allow ourselves to be misled by narratives about how ChatGPT and Sora will save humanity by making us more productive in work and everyday life, when they will likely primarily boost earnings for the one percent. Ultimately, as AI continues to develop, we may be left skill-less, uninspired, and dependent on our replacement. At that point, it will be too late to ask ourselves if building AI was worth the loss of our humanity.
Sage Cammers-Goodwin
Dr. Sage Cammers-Goodwin is a philosophy of technology researcher at the University of Twente. Her interests and expertise span corporate social responsibility, smart cities, and emerging technologies, including artificial intelligence. Her publications appear across multiple platforms including Oxford University Press.
Rosalie Waelen
Dr. Rosalie Waelen is currently working at the Sustainable AI Lab, which is a part of the Institute for Science and Ethics at the University of Bonn, Germany. Rosalie has previously published on the importance of critical theory perspectives in the AI debate and on the ethical and societal issues related to computer vision applications.