The Real Reason to Be Nervous About AI

Declarations of sentience are wildly premature. But the dangers AI poses to labor are very real.

In recent weeks, an unlikely drama has unfolded in the media. The center of this drama isn’t a celebrity or a politician, but a sprawling computational system, created by Google, called LaMDA (Language Model for Dialogue Applications). A Google engineer, Blake Lemoine, was suspended for declaring on Medium that LaMDA, which he interacted with via text, was “sentient.” This declaration (and a subsequent Washington Post article) sparked a debate between people who think Lemoine is merely stating an obvious truth—that machines can now, or soon will, show the qualities of intelligence, autonomy, and sentience—and those who reject this claim as naive at best and deliberate misinformation at worst. Before explaining why I think those who oppose the sentience narrative are right, and why that narrative serves the power interests of the tech industry, let’s define what we’re talking about.

LaMDA is a Large Language Model (LLM). LLMs ingest vast amounts of text—almost always from Internet sources such as Wikipedia and Reddit—and, by iteratively applying statistical and probabilistic analysis, identify patterns in that text. This is the input. These patterns, once “learned”—a loaded word in artificial intelligence (AI)—can then be used to produce plausible text as output. The ELIZA program, created in the mid-1960s by the MIT computer scientist Joseph Weizenbaum, was a famous early example. ELIZA didn’t have access to a vast ocean of text or high-speed processing as LaMDA does, but the basic principle was the same: manipulate patterns in text to produce responses that sound plausible, without any understanding of what is being said. One way to get a better sense of LLMs is to note that the computational linguist Emily M. Bender and the computer scientist Timnit Gebru call them “stochastic parrots.”
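
To see what “identifying patterns and producing plausible output” means in miniature, here is a deliberately crude sketch of a bigram model in Python. It is not how LaMDA works—LaMDA is a vastly larger neural network trained on enormous corpora—but it illustrates the parrot-like mechanism the critics describe: the program counts which words follow which in its input, then strings words together according to those statistics, with no grasp of meaning. The corpus and variable names here are invented for illustration only.

```python
# Illustrative sketch only: a tiny bigram "stochastic parrot."
# It "learns" word-to-word statistics from a small corpus and then
# emits plausible-looking text without any understanding of it.
import random
from collections import defaultdict

corpus = (
    "the model reads text and the model predicts the next word "
    "the next word follows the patterns found in the text"
).split()

# "Learning": count which word tends to follow which.
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

# "Generation": walk the learned statistics to produce fluent-seeming output.
word = "the"
output = [word]
for _ in range(12):
    followers = transitions.get(word)
    if not followers:
        break
    word = random.choice(followers)
    output.append(word)

print(" ".join(output))  # plausible word salad, no comprehension involved
```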

There are many troubling aspects to the growing use of LLMs. Computation at the scale of LLMs requires massive amounts of electrical power, most of which still comes from fossil fuels, adding to climate change. The supply chains that feed these systems and the human cost of mining the raw materials for computer components are also concerns. And there are urgent questions about what such systems are to be used for—and for whose benefit.

The goal of most AI (which began as a pure research aspiration announced at a Dartmouth conference in 1956 but is now dominated by the directives of Silicon Valley) is to replace human effort and skill with thinking machines. So, every time you hear about self-driving trucks or cars, instead of marveling at the technical feat, you should detect the outlines of an anti-labor program.

The futuristic promises about thinking machines don’t hold up. This is hype, yes—but also a propaganda campaign waged by the tech industry to convince us that they’ve created, or are very close to creating, systems that can be doctors, chefs, and even life companions.

A simple Google search for the phrase “AI will…” returns millions of results, usually accompanied by images of ominous sci-fi-style robots, suggesting that AI will soon replace human beings in a dizzying array of areas. What’s missing is any examination of how these systems might actually work and what their limitations are. Once you part the curtain and see the wizard pulling levers, straining to keep the illusion going, you’re left wondering: Why are we being told this?

Consider the case of radiologists. In 2016, the computer scientist Geoffrey Hinton, confident that automated analysis had surpassed human insight, declared that “we should stop training radiologists now.” Extensive research has shown his statement to have been wildly premature. And while it’s tempting to see it as a temporarily embarrassing bit of overreach, I think we need to ask questions about the political economy underpinning such declarations.

Radiologists are expensive and, in the US, very much in demand—creating what some call a labor aristocracy. In the past, the resulting shortages were addressed by providing incentives to workers. If this could be remedied instead with automation, it would devalue the skilled labor performed by radiologists, solving the scarcity problem while increasing the power of owners over the remaining staff.

The promotion of the idea of automated radiology, regardless of existing capabilities, is attractive to the ownership class because it holds the promise of weakening labor’s power and increasing—via workforce cost reduction and greater scalability—profitability. Who wants robot taxis more than the owner of a taxi company?

I say “promotion” because there is a large gap between marketing hype and reality. That gap is unimportant to the larger goal of convincing the general population that their work can be replaced by machines. The most important AI outcome isn’t thinking machines—still a remote goal—but a demoralized population, subjected to a maze of brittle automated systems that are sold as better than the people forced to navigate life through them.

The AI debate may seem remote from everyday life. But the stakes are extraordinarily high. Such systems already determine who gets hired and fired, who receives benefits, and what’s making its way onto our roads—despite being untrustworthy, error-prone, and no replacement for human judgment.

And there is one additional peril: though inherently unreliable, such systems are being used, step by step, to obscure the culpability of the corporations that deploy them. The claim of “sentience” suggests that the machine, rather than the company behind it, is responsible for what it does.

This escape hatch from corporate responsibility may represent their greatest danger.
