Silicon Valley’s Quest to Build God and Control Humanity

The world we have is ugly enough, but tech capitalists desire an even uglier one.

It’s a common Silicon Valley pastime for elites to sit around and imagine what sort of world they should be allowed to create. We got a glimpse into their ideas via effective altruism and its transformation from a utilitarian philosophy concerned with maximizing philanthropy into longtermism, a moral framework concerned with ensuring that as many humans as possible in the far-flung future have the best possible lives. That transformation, proselytized by Oxford professor William MacAskill and embodied in FTX’s disgraced founder and chief executive Sam Bankman-Fried, found purchase in a tech sector convinced that its products were not just good for society but perhaps the most important ones ever crafted.

Timnit Gebru and Émile Torres, two prominent critics of techno-optimism and its vision for artificial intelligence, have been trying to formalize this sort of thinking into a bundle of ideas called TESCREAL: Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism. These philosophies, at their core, argue that the future will be full of delicious, unimaginable wonders—but only if we ensure that the already powerful never face any barriers to remaking the world as they see fit.

Unsurprisingly, the loudest ones to insist that some glorious end—whether it be the birth of an AI god or the eternal happiness of trillions of human beings in the far future—will justify nefarious means are Silicon Valley’s hermeticists. Unsatisfied with controlling technological development and who benefits from it, they are now eager to create artificial intelligences that can mediate human life at every level. At the same time, they are trying to build institutions and systems that fortify their positions as the ones designing and directing how humanity experiences politics, social life, economics, and culture.

There are two schools of thought about this that interest me: the transhumanists waiting for the coming technological Rapture, broadly represented by Google’s director of engineering Ray Kurzweil; and an offshoot represented by Marc Andreessen and other venture capitalists burning capital to develop and reign over the infrastructure, markets, and regulations that could constrain their innovations.

For Kurzweil and his cohort, we are approaching a moment he calls “the Singularity,” when computing power will escape our ability to anticipate or control it. By 2045, Kurzweil believes, our world will radically transform, as our bodies and souls transcend the limits of humanity. We will cure the physical and social ills that plague our bodies and societies, become immortal, bring back our loved ones, colonize the stars, and, most importantly, merge with—or be supplanted by—superintelligent machines. We will enter an age of spiritual machines.

Some of the most interesting analysis on this utopian and techno-optimist thinking comes from Meghan O’Gieblyn—who, like me, is a Christian fundamentalist turned transhumanist turned tech critic. In her n+1 essay “Ghost in the Cloud,” O’Gieblyn connects Christian theology to the transhumanist faith of Kurzweil and a host of influential investors and entrepreneurs in Silicon Valley.

She points out that the earliest mention of a “transhuman” appears in the first English translation of Dante’s Paradiso back in 1814, following a transformation of Dante that escapes description: “Words may not tell of that transhuman change.” Resurrection and the manner of it have been the center of much debate in Christianity for millennia: How will our bodies appear? Where do our souls go? What will our mental experience be? O’Gieblyn homes in on strains of Christian thought that held that science and technology would allow humans to achieve immortality. Alchemists sought to craft immortality potions. Russian cosmists hoped to revive everyone who had ever died and then colonize space. A French Jesuit priest “believed that evolution would lead to the Kingdom of God” as machines formed a living global network that would merge with human minds, reach a threshold (the Omega Point), then surge forward to meet God on the other side of “Time and Space.”

Transhumanists may claim they are the first to consider how technology will redefine what it means to be human, what types of immortality may be desirable or possible, and the philosophy of mind that comes with uploading or copying your mind, but, as O’Gieblyn points out, Christians started to ask these questions long ago.

“Transhumanism offered a vision of redemption without the thorny problems of divine justice,” she writes. “It was an evolutionary approach to eschatology, one in which humanity took it upon itself to bring about the final glorification of the body and could not be blamed if the path to redemption was messy or inefficient.”

In a section of her book God, Human, Animal, Machine titled “Pattern,” she revisits and expands upon her n+1 essay, and there is a moment near the end when Kurzweil e-mails O’Gieblyn and admits to an “essential equivalence” between metaphors used by transhumanism and Christianity to talk about resurrection, consciousness, and mind uploading. Kurzweil insists that “answers to existential questions are necessarily metaphoric.” The only difference, he writes, between atheists and theists is “a matter of the choice of metaphor” when tackling these questions.

For O’Gieblyn, this confirms a theory in her essay and book: “That all these efforts—from the early Christians’ to the medieval alchemists’ to those of the luminaries of Silicon Valley—amounted to a singular historical quest, one that was expressed through analogies that were native to each era.” Whether it’s alchemical transmutation, divine resurrection, or transhuman digital uploading, the goal is the same: to shine a light on consciousness, quantify and define it, and then free it from mortal flesh.

Today’s transhumanist vision may appeal because of its digital gloss and futurist splendor, but O’Gieblyn notes a fatalistic undertone despite all the ideas borrowed from Christian theology. She writes that transhumanism’s “rather depressing gospel message insists we are inevitably going to be superseded by machines, and that the only way we can survive the Singularity is to become machines ourselves—objects that we for centuries regarded as lower than plants and animals.”

Worse yet, we are reduced to shoddy computational machines. For Kurzweil, it seems, being human is a stepping stone toward creating truer machine spirits. One can catch a glimpse of this in Kurzweil’s works dedicated to his prophetic vision: The Age of Spiritual Machines in 1999, followed by The Singularity Is Near in 2005, How to Create a Mind in 2012, and The Singularity Is Nearer in 2024.

The second wave of this transhumanist philosophy, featuring folks like Andreessen, took an even more secular turn, abandoning Kurzweil’s version of spiritualism but ironically embracing even more of the Christian sentiment that underwrites transhumanism. Consider Andreessen’s most recent blog post, a paean to a potential AI god titled “Why AI Will Save the World.” In the essay, he dismisses the panic that AI poses an existential risk (I share his skepticism, but for opposite reasons: He and his ilk are the real existential risk) and argues instead that AI will radically improve everything we care about.

Andreessen envisions AI as augmenting human intelligence. Children, he writes, will have tutors that are “infinitely patient, infinitely compassionate, infinitely knowledgeable, infinitely helpful” as they develop to “maximize their potential with the machine version of infinite love.”

This brings me back to my days in Bible school. God’s love, we were told, was not visible the way love from people around us was, but closer examination (and faith) would reveal its presence. By accepting the existence of God’s love, we could grow and develop such that our true potential, our destiny, our capacity to be a better child/sibling/friend/neighbor/lover/human would be realized. Andreessen’s hypothetical AI love is different from God’s love, of course, because with AI, you get the transformative effects of a god’s personal intervention as well as the affirmation of something that undeniably interacts with you.

How will AI bring that infinite patience, compassion, knowledge, and assistance to every person? Andreessen believes AI will do this as everyone’s personal “assistant/coach/mentor/trainer/advisor/therapist” that accompanies them through life. Every person will have an angel on their shoulder that will accelerate productivity and lead to greater wealth and prosperity. AI will help us develop amazing technology, provide scientific insights, and better understand ourselves. This, Andreessen argues, will spark a new golden age, as AI-augmented creators work faster, harder, and better. At the same time, AI will improve our ability to wage war: no more collateral damage, as AI will reduce death rates by enabling better strategic and tactical decisions that minimize “risk, error, and unnecessary bloodshed.”

Andreessen divides AI’s opponents into two categories often used by economists: “baptists” and “bootleggers.” Baptists believe in social reform; in this case, they believe AI poses an existential risk. “Bootleggers,” in contrast, “are self-interested opportunists who stand to financially profit by the imposition of new restrictions, regulations, and laws that insulate them from competitors.” Among the bootleggers of AI risk, he counts chief executives asking for regulatory barriers like government licensing, but also any AI critic who receives a salary from a university, think tank, activist group, or media outlet. Andreessen has no category for those who stand to lose from the intensified pursuit of AI—such as the workers forced to train it or moderate its output, or those exploited by it.

For Andreessen, the only cost is in not pursuing AI, because if the United States doesn’t, China will jump ahead. Andreessen writes that, unlike the US, China views AI as “a mechanism for authoritarian population control.” The only solution, therefore, is to throw even more money at AI, and let big firms “build AI as fast and aggressively as they can.” The private sector should lead the way on AI, deploying it to solve as many problems as possible as fast as possible—free from the shackles of government restrictions. Any regulation the US does pursue should be to limit China’s capacity to develop AI, not our own.

Andreessen’s Manichaean worldview is obviously self-interested—after all, he intends to invest in start-ups that will help along the mass proliferation of AI as a product and service. And, despite his insistence, there are real costs to pursuing AI.

Like China, the United States uses AI for authoritarian ends at home and abroad. Surveillance and social control structure much of the digital technology we create. Global efforts to replicate our technological development have helped preserve the very regimes Andreessen claims must be opposed. From authoritarianism in the Kingdom of Saudi Arabia and the other Arab states of the Persian Gulf to apartheid in Israel to China’s totalitarianism, Silicon Valley tends to look the other way if money is to be made. Andreessen’s investment firm is openly courting Saudi Arabia for financing. So much for the rallying cry to stop authoritarian tech proliferation.

Of course, Andreessen is a hypocrite. People like him stand to make billions when our technology development is aimed toward crushing labor, managing a disempowered population, threatening rival powers, and extracting profits by commodifying larger swaths of daily life. For the rest of us, giving Andreessen free rein would be a disaster. The world has been pushed to the brink of collapse: Our ecological niche is disintegrating; the political space we occupy is shrinking as tech firms enjoy almost unchallenged power over our computational infrastructure; and the social realm will continue to fray as speculators and rentiers subject us to increasingly demeaning forms of algorithmically mediated lives. We are left with Silicon Valley’s promises that this time will be different, that AI—unlike the Internet—won’t be a disappointment.

What, then, is Silicon Valley in a position to give us? John Ganz, an essayist who focuses on right-wing politics, writes in a recent Substack post that the roots of Silicon Valley’s contemporary reactionary thought offer some insight: “what typified the thought of the Conservative Revolutionaries and a set of right-wing engineers in Weimar and then the ideologists of the Third Reich was not a rejection of modernity so much as the search for an alternative modernity: a vision of high technics and industrial productivity without liberalism, democracy, and egalitarianism.”

This desire for authoritarianism is combined with eugenics and a crusade against the supposedly parasitic aspects of capitalism. Ganz specifically looks at the rise of anti-Semitism among tech capitalists, for whom “the Jew could stand for the parasitic, financialized, and abstract side of capital”—the innovator versus the banker, the engineer versus the merchant, Thiel and Musk versus Soros. Race science, social control, and the selective revitalization of capitalism have long been integral to Silicon Valley’s ever-evolving ideologies, just as they have been for other reactionary formations.

Regardless of whether saving the world with AI angels is possible, the basic reason we shouldn’t pursue it is that our technological development is largely organized for immoral ends, serving people with abhorrent visions for society.

The world we have is ugly enough, but tech capitalists desire an even uglier one. The logical conclusion of having a society run by tech capitalists interested in elite rule, eugenics, and social control is ecological ruin and a world dominated by surveillance and apartheid. A world where our technological prowess is finely tuned to advance the exploitation, repression, segregation, and even extermination of people in service of some strict hierarchy.

At best, it will be a world that resembles the old racist, sexist, imperialist modes of domination we have been struggling against. But the zealots who enjoy control over our tech ecosystem see an opportunity to use new tools—and debates about them—to restore the old regime with even more violence, enough to overcome the funny ideas people have entertained about egalitarianism and democracy for the last few centuries. Do not fall for the attempt to limit the debate and distract from their political projects. The question isn’t whether AI will destroy or save the world. It’s whether we want to live in the world its greatest shills will create if given the chance.
