AI Doesn’t Pose an Existential Risk—but Silicon Valley Does

The warning cries about the alleged dangers of AI are drowning out stories about the harms already occurring.

A coalition of the willing has united to confront what they say is a menace that could destroy us all: artificial intelligence. More than 350 executives, engineers, and researchers who work on AI have signed a pithy one-sentence statement: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.” But like the target of the last infamous coalition of the willing—Saddam Hussein and his mythical “weapons of mass destruction”—there is no existential threat here.

This isn’t the first letter to sound the alarm. It features prominent figures in the field—such as Sam Altman, chief executive of Microsoft-backed OpenAI. Generally, the warnings about AI are straightforward: It poses immediate risks like discrimination or automation as well as existential ones like a superintelligent Skynet-like system eradicating humanity.

These claims of an extinction-level threat come from the very same groups creating the technology, and their warning cries about future dangers are drowning out stories on the harms already occurring. There is an abundance of research documenting how AI systems are being used to steal art, control workers, expand private surveillance, and seek greater profits by replacing workforces with algorithms and underpaid workers in the Global South.

The sleight-of-hand trick shifting the debate to existential threats is a marketing strategy, as Los Angeles Times technology columnist Brian Merchant has pointed out. It is an attempt to generate interest in certain products, dictate the terms of regulation, and protect incumbents as they develop more products or further integrate AI into existing ones. After all, if AI is really so dangerous, then why did Altman threaten to pull OpenAI out of the European Union if it moved ahead with regulation? And why, in the same breath, did Altman propose a system that just so happens to protect incumbents, one in which only tech firms with enough resources to invest in AI safety would be allowed to develop AI?

No, the real threat is the industry that controls our technology ecosystem and lobbies for insulation from states and markets that might rein it in. I want to talk about three factors that make Silicon Valley, not one of its many developments, a “societal-scale risk.”

First, the industry represents the culmination of various lines of thought that are deeply hostile to democracy. Silicon Valley owes its existence to state intervention and subsidy, and it has at different times worked to capture various institutions or erode their ability to interfere with private control of computation. Firms like Facebook, for example, have argued not only that they are too large or complex to break up but that their size must actually be protected and integrated into a geopolitical rivalry with China.

Second, that hostility to democracy, more than any singular product like AI, is amplified by profit-seeking behavior that constructs ever-larger threats to humanity. It’s Silicon Valley and its emulators worldwide, not AI, that create and finance harmful technologies aimed at surveilling, controlling, exploiting, and killing human beings with little to no room for the public to object. The search for profits and excessive returns, with state subsidy and intervention clearing competition out of the way, has created and will continue to create a litany of immoral business models and empower brutal regimes alongside “existential” threats. At home, this may look like the surveillance firm and government contractor Palantir creating a deportation machine that terrorizes migrants. Abroad, this may look like the Israeli apartheid state exporting spyware and weapons it has tested on Palestinians.

Third, this combination of a deeply antidemocratic ethos and a desire to seek profits while externalizing costs can’t simply be regulated out of Silicon Valley. These are fundamental attributes of the industry that trace back to the beginning of computation: origins in optimizing plantations and crushing worker uprisings that prefigure the obsession with surveillance and social control shaping what we are told technological innovations are for.

Taken altogether, why should we worry about some far-flung threat of a superintelligent AI when its creators—an insular network of libertarians building digital plantations, surveillance platforms, and killing machines—exist here and now? Their Smaugian hoards, their fundamentalist beliefs about markets and states and democracy, and their track record should be impossible to ignore.

Despite the constant crowing about how integral technology is to our society, you and I play virtually no role in deciding what gets built, who builds it, how it gets financed, or why it should be built. The small role the public plays largely boils down to ratification through channels that are built to accommodate larger vessels—states, markets, trade blocs, corporations, capital, political party institutions, robust lobbying networks complete with friends and insiders, and more. Powerful participants formulate policy in private and say “do so” to a public that’s actively excluded.

Contempt for democracy is nothing new, of course. In America, it’s a vaunted pastime that stretches back to the start of our grand experiment. In debates at the Constitutional Convention, James Madison was unambiguous that their government’s goal was “to protect the minority of the opulent against the majority.” The Senate, he argued, would be instrumental to this purpose, ensuring the creation of “a system which we wish to last for ages.” Still, Madison argued, there was a key tension everyone was overlooking:

An increase of population will of necessity increase the proportion of those who will labour under all the hardships of life, & secretly sigh for a more equal distribution of its blessings. These may in time outnumber those who are placed above the feelings of indigence. According to the equal laws of suffrage, the power will slide into the hands of the former. No agrarian attempts have yet been made in this Country, but symtoms [sic], of a leveling spirit, as we have understood, have sufficiently appeared in a certain quarters to give notice of the future danger.

Peter Thiel—the billionaire cofounder of the surveillance firm Palantir, head of the VC firm Founders Fund, and former board member of Facebook—has lamented similar outcomes. In a 2009 essay for Cato Unbound, Thiel admitted, “I no longer believe that freedom and democracy are compatible.” While the wake of the 2008 financial crisis affirmed to him and his fellow libertarians that “the broader education of the body politic has become a fool’s errand,” Thiel believed the problem went back further: “The 1920s were the last decade in American history during which one could be genuinely optimistic about politics. Since 1920, the vast increase in welfare beneficiaries and the extension of the franchise to women—two constituencies that are notoriously tough for libertarians—have rendered the notion of ‘capitalist democracy’ into an oxymoron.”

Thiel later clarified that he did not believe that disenfranchising women, or any other group, was desirable. He was simply saying suffrage posed a danger to other rights.

There’s a clear thread here: Democracy is a virtue to pay lip service to, but there are other, more important priorities that the public, left to its own devices, will bungle. Politics is chief among them. In the 20th century, American liberals concerned about the public’s ability to interfere in political affairs took up the thorny question of how elites should maintain control of America’s unruly democracy.

Edward Bernays—Sigmund Freud’s nephew and the “father of spin”—argued in his 1928 book Propaganda that “conscious and intelligent manipulation of the organized habits and opinions of the masses is an important element of democratic society.” Why? Because “intelligent minorities” in our society need to “make use of propaganda continuously and systematically.”

At around the same time, Walter Lippmann, a prominent and influential journalist whose major works maintained that reality was becoming too complex for the masses to understand, argued that an inability to distinguish reality from opinion necessitated “the manufacture of consent” to ensure that democracy functioned as desired. A “specialized class” of individuals with the foresight and position to realize the public’s interests would manage affairs for those unable to. Among this specialized class, you’d have “public men,” who could ensure “the formation of a sound public opinion.” Lippmann wanted to keep the public far from the formation, deliberation, and execution of affairs concerning them. “Public opinion is not a rational force,” Lippmann wrote. “It does not reason, investigate, invent, persuade, bargain or settle.” The indolent public’s purpose is to ratify things already deliberated on.

The democratic question isn’t something only liberals have been wrestling with, however. In his recent book Crack-Up Capitalism, economic historian Quinn Slobodian documents the intellectual history and consequences of the capitalist right’s attempt to liberate capitalism from democracy. Largely drawn from the superrich, these libertarian utopians are searching for the ideal container for capitalism, for zones of exception where holes can be punched into nation-states to undermine the capacity for democracy to interfere with markets. “Champions of the zone suggest that free-market utopia might be reached through acts of secession and fragmentation, carving out liberated territory within and beyond nations, with both disciplining and demonstration effects,” Slobodian writes in the opening pages of his book.

The text is littered with examples that span the globe. The book’s case studies pore over the obsession with city-state dictatorships like Dubai, Singapore, and Hong Kong. In one chapter, libertarians spend the 1980s trying to save Ciskei, a South African Bantustan, by forming a commission to explore how best it could become an “African Hong Kong.” The goal was not to eradicate apartheid but to engineer a scenario “inviting in foreign capital while encouraging voluntary segregation from below instead of mandatory segregation from above.”

Heading the commission was Leon Louw, a libertarian Afrikaner who founded the Free Market Foundation and styled himself as an abolitionist who could liberate the market from the apartheid democracy. Foreign capital came, not just for the state subsidies but also because of Ciskei’s eagerness to use force on the population. Workers were regularly detained and tortured; the police killed protesters; activists were assassinated; but investors’ needs were satisfied.

There are other examples closer to Silicon Valley: Saudi Arabia’s delusional NEOM, former Andreessen Horowitz (a16z) partner Balaji Srinivasan’s grand strategy to put nation-states on the cloud, and Thiel’s similar vision of an escape beyond politics—a retreat into colonizing outer space, cyberspace, and the oceans. There was also the half-baked plan pitched by Stanford economics professor Paul Romer to craft what reactionary blogger Curtis Yarvin called a “colonialism for the 21st century” and apply it to Honduras. It was a plan, Slobodian points out, that had commentators drooling over its vision, its ambition, and its slick, forward-looking momentum.

Peter Thiel ended that 2009 essay with a sweet note saying that “all of us must wish Patri Friedman the very best in his extraordinary experiment.” That experiment was to carve out Romer’s enclave in Honduras with the help of people in Thiel’s orbit, backed by investors from the Future Cities Development group (which Friedman cofounded), and bring the “Silicon Valley spirit of innovation to Honduras.” Other investors came; memorandums and agreements with the government were signed; ideologues spoke about the potential of this experiment to revolutionize sovereignty and governance. Many of these people were also attracted to a project called Prospera, which was built on an island off the coast of Honduras. Prospera was not only built and funded by networks involved in Romer’s adventure but also managed to extract a territorial concession from Honduras and lobby for a law that allowed corporations to set up zones in the country.

“While earlier settlers once sought wealth in gold, crops, or railroads, the treasure of zones like Prospera in the twenty-first century was their status as a jurisdiction—their potential as a new place to pick and choose among regulations and licensing requirements,” Slobodian explained. Crucially, the anarcho-capitalists who inspired and helped make this project sought to make a colony where the social contract was “a literal contract” shaped by whatever regulations investors were interested in adhering to or skirting.

That sort of perforation was helped along by the fact that Honduras was, like Ciskei, liberal with its use of force. The Honduran government had already spent decades detaining, torturing, and murdering protesters and activists. For libertarians and their utopias, this willingness to use violence against a population that might protest is important, but it is not sufficient for their control. Ciskei collapsed, and Prospera may soon follow. Last year, the Honduran government repealed the law and constitutional amendment enabling Zones for Employment and Economic Development, the corporate enclaves that libertarians have been so excited about.

Proponents of the liberal variant of antidemocratic thought are also concerned with ensuring that the public and the instruments responsive to it—like the state—don’t get in the way of their self-interest. As I wrote in my previous article, post-WWII planners and their Silicon Valley tech heirs insist that we can solve various crises (e.g., ecological catastrophe or permanent surveillance systems) only by handing over control to them—the very saboteurs responsible for these crises.

Among tech elites, the general principle that specialized classes alone have the education, position, and inherent ability to act calmly and rationally on the facts is sometimes said out loud. In mid-May, former Google chief executive Eric Schmidt told NBC’s Meet the Press that Big Tech and Big Tech alone should regulate artificial intelligence.

“When this technology becomes more broadly available, which it will, and very quickly, the problem will get worse. I would much rather have the current companies define reasonable boundaries,” Schmidt said in the interview. “There’s no way a non-industry person can understand what’s possible. It’s just too new, too hard; there’s not the expertise. There’s no one in the government that can get it right. The industry can broadly get it right.”

Schmidt has spent years ringing the alarm bell about artificial intelligence, arguing that it will be a key geopolitical fault line and that we risk ceding it to China. In an essay for Le Monde, tech critic Evgeny Morozov dives a bit deeper into Schmidt and connects him to Gilman Louie—a key figure in the coming US-China cold war (Cold War 2.0) who worked with the Air Force, ran the CIA’s venture capital fund, worked with Schmidt at the National Security Commission on Artificial Intelligence, and now runs the Schmidt-backed America’s Frontier Fund.

“Ironically, Gilman Louie, the man who leveraged Cold War 1.0 to hype up Tetris, is now leveraging Cold War 2.0 to hype up AI. Or perhaps vice versa,” Morozov writes. “In today’s Washington, these two operations have become almost indistinguishable, and the only certainty is that all that hype will be monetised.”

Scaremongering about AI is a tactic to sell more AI. But it’s also part of a larger campaign that poses an actual threat to all of us. A deeply entrenched contempt for democracy, a desire to use the state as a vessel for reshaping society into something more amenable to unregulated development and profit-seeking, and a long-standing obsession with surveillance and social control will deliver eye-watering returns for a few. It will also leave us with a world dominated by innovative extraction, violent borders, robust and dynamic repression, and streamlined violence. Don’t fall for the trick: Silicon Valley, not AI, is the existential risk to humanity.
