Frankenstein’s Regrets

What is AI?

Ben Tarnoff

A Facebook data center in Swedish Lapland. (Jonathan Nackstrand / Getty)

Artificial intelligence is a nightmare to write about. It’s not just the technical parts, which are complicated, or the fact that the field is moving fast enough to give most commentary on it a short shelf life. It’s that the discourse is so extreme that trying to find one’s footing in the scrum can feel hopeless. Artificial intelligence is both a technology and a theology, and in its latter aspect, it too often resembles a doctrinal dispute among an assortment of shrieking priests.

Books in review
The AI Paradox: How to Make Sense of a Complex Future

Artificial intelligence will bring us heaven on earth or kill us all. It is the most important invention in human history or a scam. It will eliminate millions of jobs and produce permanent mass unemployment, or it will prove to be vastly overhyped, in which case the abrupt collapse of the technology’s trillion-dollar investment boom will tank the economy.

We need careful nondenominational thinking to guide us through this mess. The computer scientist Virginia Dignum is well-placed to play this role. Currently a professor at Umeå University in Sweden, she has been working in artificial intelligence since the 1980s. Dignum is an expert on “responsible AI,” which studies how to create and use AI systems in ethical ways, and has written an often-cited textbook on the subject. She is also an influential policy intellectual, having served as an AI adviser to various international organizations and initiatives, including the European Commission, the United Nations, and the World Economic Forum.

In her new book, The AI Paradox, Dignum offers an overview of AI with particular attention to its social ramifications. Each chapter is devoted to a different paradox that serves to illuminate a specific dimension of her theme. The “agreement paradox,” for instance, focuses on the surprisingly thorny question of what AI is in the first place (“the more we explore AI, the harder it becomes to agree on its definition”), while the “solution paradox” summarizes the pitfalls inherent in the tech industry’s fondness for the technological fix (“solving problems with technology often creates more problems”).

Not all of Dignum’s paradoxes seem especially contradictory or counterintuitive, but together they form an effective and creative structure for the book. AI has become something of a cliché in recent years; by probing the riddles and antinomies that exist below the surface, Dignum gives the general reader a truer gauge of the subject’s depth. After all, the useful thing about paradoxes is how, as Dignum notes, they “reveal that reality is rarely as simple as it seems.”

The first paradox Dignum presents is the one that holds the greatest significance for her and for many of her fellow humanists: the notion that AI does not diminish but in fact helps clarify what makes us human. “The more AI can do, the more it highlights the irreplaceable nature of human intelligence,” she writes. AI is good at certain tasks, such as “data analysis, logical reasoning, and linguistic processing.” Yet it struggles with others, especially those involving creativity, empathy, “moral and ethical discernment,” the “capacity for complex reasoning,” and the “ability to reason about relationships between concepts.” This leads Dignum to conclude that our “uniquely human traits” will never be “fully replaced, no matter how advanced AI becomes.” Paradoxically, the growing sophistication of AI only serves to underscore our distinctiveness.

This view places Dignum within a tradition of humanist AI critique that is nearly as old as the field itself. Since the inception of artificial intelligence in the 1950s, first as an academic pursuit and then as a commercial one, its partisans have maintained that the mind is a machine and that, consequently, it is possible to endow a machine with the intelligence of a human. The humanists—figures like the philosopher Hubert Dreyfus and the computer scientist Joseph Weizenbaum—have countered that, in fact, no matter what AI can or cannot do, it will never truly replicate the human mind because the human mind is nothing like a machine. “The core difference lies not just in capabilities, but in the essence of being,” as Dignum explains. “AI calculates, while humans feel; AI iterates, while humans imagine.”

This doesn’t mean AI is useless. On the contrary, Dignum is optimistic about the technology’s potential. But fulfilling this potential requires seeing AI “as a complementary tool to human intelligence, not a replacement.” Much like a calculator liberates us from the tedium of doing arithmetic by hand, AI’s facility at finding patterns in data can free us up “to focus on more creative, strategic, and profound aspects of thinking.” Dignum casts AI in a supporting role, as the helpmeet that handles the busywork so that we can spend more time exercising our higher—and, in her view, more distinctly human—functions.

The tech industry, of course, has something else in mind. The vast sums of money flowing into the generative-AI boom mean that an acceptable return on investment can be attained only by putting large numbers of people out of work. Companies need their computers to start acting and working like humans; the goal is not to enhance human labor but to purge as much of it as possible from production. It remains unclear to what extent this goal can be realized. At a minimum, AI coding tools such as Claude Code are permanently changing how software is written by making the process of programming much simpler and faster. The consequences for the employability of software engineers may be significant.

Because tech people tend to see programming as the hardest thing a human can do, AI’s increasing proficiency in this area is often taken as the harbinger of a fast-approaching “artificial general intelligence” (AGI) or even “artificial superintelligence” (ASI). AGI refers to the threshold at which AI will match the intelligence of a human; ASI would be the point at which AI exceeds it. For Dignum, such notions are ridiculous. She compares the idea of AI’s “approximating or surpassing human intelligence” to the notion that “airplanes will soon be laying eggs, just because we keep improving their flying capabilities.” The analogy “highlights the absurdity of expecting a machine—a nonliving, mechanical artifact—to attain the full spectrum of human intelligence.”

More profoundly, Dignum argues that the concepts of AGI and ASI are rooted in a misunderstanding of the nature of human intelligence. The aspiration of today’s AI firms is not only to replace human workers but to build something that goes well beyond them: a god in a box—a single technological system that knows, and can do, everything. But intelligence, Dignum notes, can never emerge purely in isolation; it has always been a collective endeavor. “Our evolutionary history reveals that social behaviors like cooperation, communication, and group living were not just important for survival—they were the very foundation upon which our intelligence developed,” she writes. “The more we chase AGI, the more we discover that true superintelligence lies in human cooperation.” This is what she calls the “superintelligence paradox”—another conundrum that illustrates how humanity can never be displaced by AI.

If we took the cooperative aspects of cognition seriously, then what kind of AI would we create? Dignum argues that it would look somewhat different from the AI currently being developed. Rather than systems that try to replace human labor, we would imagine those that “work alongside humans to extend our capabilities and enhance collective intelligence.” Such a shift might be facilitated by moving away from the large, expensive, and monolithic AI models of the sort that underlie services like ChatGPT and toward a more modular approach, in which a mix of smaller and more specialized models is made available to workers in ways that respect their autonomy and expertise. This strategy would have the added virtue of diminishing the power of the tech monopolies, since their control of contemporary AI is, as Dignum notes, inseparable from the fact that they are the only actors with sufficient infrastructure to train and deploy large models.

The central argument of The AI Paradox, then, is that there is nothing inevitable about AI’s present trajectory. Dignum concludes her book with a plea for a more intentional and inclusive approach to AI development, one in which “everyone has a voice in shaping the direction AI takes.” She wants to banish the quietism that too often clouds people’s minds when technology is involved. “We must resist the seductive narratives that portray AI as an unstoppable force beyond human control, narratives that strip us of our agency and render us passive in the face of technological change,” she declares. AI is made by people, and therefore it “is what we, people, make of it…. The power to decide lies with us.”

Dignum’s message is an empowering one: Humans have a monopoly on true intelligence. AI is simply another tool, like an airplane or a car, and we can steer it in any direction we want.

If The AI Paradox had been published a decade ago, these claims would be easier to sustain. But the arrival of large language models (LLMs)—the computational systems that form the engine of generative AI—in 2018, and their rapid subsequent evolution, has cast doubt on some of Dignum’s assertions. While she concedes that “LLMs represent an incredible advancement,” they do not prompt her to revise her overall view of AI. In her account, LLMs have the same fundamental limitation as the AI systems that preceded them: They are incapable of “actual comprehension.” “They do not ‘think’ or ‘know,’” she writes, “they merely simulate patterns extracted from their training data.”

Given Dignum’s career as a distinguished scholar who has worked in AI for decades, few people are as qualified as she is to offer a judgment on LLMs. But opinion within the field is far less settled than she suggests. Because LLMs are more complex than their predecessors, they pose interpretative questions that are harder to answer. Are they purely imitative, or do they exhibit “emergent” properties on account of their complexity? Are they best understood as pattern-matching machines, or can they engage in conceptual reasoning of certain kinds? Among AI researchers and practitioners, these are matters of active debate. And this debate cannot be reduced, as some AI deflationists suggest, to a struggle between truth-tellers and the marketing department of OpenAI. There are genuine disagreements over how to characterize LLMs and their capabilities. In some cases, these involve empirical disputes over what an LLM is actually doing at any given moment. At other times, the disagreements are more semantic or philosophical, centered on the meaning of terms like reasoning and comprehension.

The simplest way to describe an LLM is as a system that tries to predict the next word in a sequence, based on the probabilities it has gleaned from its training set. An LLM learns how to make these predictions through a series of baroque computations whose convolutions are not fully understood. We know how LLMs work—that is, we have a good sense of their basic mechanisms. What’s less clear is why they work: Even their creators can’t say with precision why a model produces a particular response. This is the reason that the debates around LLMs are so vigorous and, perhaps, irresolvable. The technology is, in certain important respects, unruly and opaque.
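For readers who want the idea made concrete: the prediction step described above can be sketched in miniature with a toy bigram model. This is an illustration only—real LLMs learn their probabilities with neural networks trained on vast corpora, not raw word counts—but the underlying objective, predicting the next word from statistics gathered over training text, is the same.

```python
from collections import Counter, defaultdict


def train_bigram(corpus: str):
    """Count, for each word, how often each other word follows it."""
    words = corpus.lower().split()
    following = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1
    return following


def predict_next(model, word: str) -> str:
    """Return the word most often seen after `word` in training."""
    counts = model[word.lower()]
    if not counts:
        return "<unknown>"
    return counts.most_common(1)[0][0]


model = train_bigram(
    "the monster reads the book and the monster hates his creator"
)
print(predict_next(model, "the"))  # "monster" follows "the" most often
```

A model this small is transparent: you can inspect every count. Scale the context from one word to thousands and swap the counts for billions of learned parameters, and that transparency is exactly what disappears.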

By contrast, cars and airplanes are not: You can open them up and see how they work. They are deterministic systems that do what you tell them to do. Dignum frequently emphasizes that AI is made by humans. But just because something is made by humans doesn’t mean it will remain within the ambit of human comprehension and control.

If AI is not like a car or an airplane, then what is it? At one point, Dignum describes LLMs as “a cognitive Frankenstein’s monster.” She means it in a minimizing way: LLMs work by “piecing together fragments of human language in a way that appears intelligent” but isn’t. “Like Frankenstein’s monster,” she writes, “they lack genuine understanding and intentionality.”

I hope it is not too pedantic to point out that this is a misreading of Mary Shelley’s novel: Frankenstein’s monster does indeed think, feel, and scheme. He teaches himself to read and loves Paradise Lost. He craves companionship and hates his creator for abandoning him, a hatred that moves him to kill the man’s wife and brother.

Yet Dignum’s analogy does resonate, albeit for different reasons. Frankenstein is a story about the relationship between human beings and their alien offspring. The monster is created by a human scientist and even assembled from human body parts. Yet he is feared and hated by nearly everyone he encounters because of his “unearthly ugliness.” Despite being wholly man-made, he is somehow otherworldly.

LLMs have a comparable set of qualities. They are, on the one hand, a product of human ingenuity—an achievement enabled by more than eight decades of research into computational models loosely inspired by the human brain. They are also a composite of human culture in the broadest sense, having been trained on large portions of the publicly accessible Internet, along with books, academic papers, and other sources. Yet for all the humanness of their inputs, LLMs are irreducibly nonhuman in their operation. They learn by studying large quantities of text and constructing elaborate mathematical maps of the semiotic relationships within them. This is not how the human brain works.

Accordingly, we might think of an LLM as something like Frankenstein’s monster: an alien of human ancestry that is not wholly assimilable to our purposes. This metaphor must be handled with caution; it should not be taken to mean that AI is sentient or supernatural. I agree with Dignum’s assertion that we need to demystify AI and to construct a “simple, clear narrative” about the technology. In doing so, however, we should be careful not to efface the fundamental weirdness of LLMs.

We should also be skeptical of the opposition that she sets up between humanity and technology. This is not a binary that feels supportable, least of all now, as our technological entanglements become even more consuming than they were when Donna Haraway christened us “cyborgs” in 1985. LLMs may never “attain the full spectrum of human intelligence,” as Dignum says, on account of their not being human. But they have clearly achieved a kind of hybridity with humanness that enables them to act in ways that most people perceive as at least quasi-intelligent. Rather than dismissing such a perception as delusional, we might see it as evidence of the technology’s mongrel character. LLMs are an object lesson in the porousness of the human as a category, as well as our tendency to extrude ourselves into our artifacts—artifacts that can, in turn, exert influence over us. This is not necessarily a good thing. People are having psychotic breakdowns from talking to AI chatbots. Victor Frankenstein dies filled with regret.

You might be asking yourself why the way we interpret LLMs even matters. The answer is that it has consequences for how we respond to AI politically, a subject that Dignum engages with throughout her book. If we think of the technology as a car, for instance, that implies a certain approach. Cars provide certain benefits but also “cause accidents,” Dignum notes. Fortunately, they are “much safer and more efficient” than they were 50 years ago, thanks to regulation. Today’s AI is like “a car without brakes or seatbelts,” which means that we need to find the AI equivalents of such measures. “Just as we regulate cars to protect ourselves from accidents and misuse, AI also requires safeguards to prevent harm and ensure it serves humanity’s best interests,” Dignum advises.

She believes such safeguards should be anchored in the “principles of ethical AI”—justice, accountability, transparency, and the protection of individual rights—and developed through an “ongoing dialogue” among “technologists, ethicists, policymakers, and communities.” Only by “involving diverse stakeholders in the decision-making process” can the correct balance be struck. It is very important to Dignum that regulation not be seen as impeding AI’s development. “Just as brakes and safety measures allow cars to go faster, regulation enables innovation to grow responsibly and sustainably,” she writes. More specifically, the absence of governance could cause “trust in AI [to] erode, leading to slower adoption or even rejection of the technology.” Policymakers can help accelerate AI’s integration into society while ensuring that it remains respectful of our rights and equitable in its distribution of benefits.

These passages convey a faith in managed capitalism that feels distinctly European. The picture is one in which representatives from government, industry, and civil society come together to forge policy frameworks that harmonize their interests. We might ask whether harmony is possible, or what kind of struggles might need to be waged to compel the tech giants to submit to such a process. (They are currently fighting EU regulators tooth and nail.) But the deeper root of Dignum’s optimism is her view of AI itself. Because AI is a tool, we can retool it. “In whatever way we define AI, it is crucial to understand that it is an artifact, that is, something created by people,” she explains. “Since we build it, we control and are responsible for its trajectory and choices.”

But what if AI is better understood as Frankenstein’s monster—a man-made yet alien entity, by turns familiar and strange, unpredictable and not fully fathomable, semi-obedient at best? Not all AI fits this description, but LLMs do, and LLMs are what the tech industry is trying to make ubiquitous and indispensable. It seems unwise to adopt a policy agenda that promises to help the industry do so, even if the correct technocrats are somehow put in charge. AI can be a tool, and a useful one, but it can also be something else. I am personally not someone who worries about AI killing us all, but I do think that granting such a technology unlimited power over the conditions of our life and work is likely to be a recipe for chaos and misery. Our best hope, at least in the short term, might be to pursue a strategy of containment in which AI is restricted to certain spheres and functions on the theory that alien encounters can be fruitful, but alien invasions are bad.

Ben Tarnoff is a writer from Massachusetts. His most recent book is Muskism: A Guide for the Perplexed, coauthored with Quinn Slobodian.
