Silicon Landlords: On the Narrowing of AI’s Horizon
The one thing science fiction couldn’t imagine is the world we have now: the near-complete control of Artificial Intelligence by a few corporations whose only goal is profit.
As HAL 9000, the true star of Stanley Kubrick's landmark film 2001: A Space Odyssey, died a silicon death by memory-module removal, the machine, reduced to its infant state (the moment it became operational), recited:
“Good afternoon, gentlemen. I am a HAL 9000 computer. I became operational at the H.A.L. plant in Urbana, Illinois on the 12th of January 1992. My instructor was Mr. Langley, and he taught me to sing a song…”
In HAL’s fictional biography, written by Arthur C. Clarke for both the film’s script and its novelization, HAL, a Heuristically programmed ALgorithmic computer, was theorized, engineered, and built at the University of Illinois’s Coordinated Science Laboratory, where the real Illinois Automatic Computer (ILLIAC) supercomputers were built from the 1950s until the 1970s. Embedded within the idea of HAL is an assumption: that the artificial intelligence (AI) research programs of the mid-to-late 20th century, centered on universities and scientific inquiry (and, yes, military imperatives), would continue uninterrupted into the future, eventually producing thinking machines that would be our partners.
Ironically for a work of the imagination, it turns out that what was unimaginable in the late 1960s for the makers of 2001 was the eventual near-complete control of the field of AI by a small group of North American corporations—the Silicon Valley triumvirate of Amazon, Microsoft, and Google—whose only goal (hyped claims and declarations of serving humanity aside) is profit. These companies claim to be producing HAL-esque machines (which, they suggest, exhibit signs of AGI: artificial general intelligence) but are actually producing narrow systems that enable the extraction of profit for these digital landlords while allowing them to maintain control over access to a technology they dominate and tirelessly work to insert into every aspect of life.
On December 5 of this year, MIT Technology Review published an article titled “Make no mistake—AI is owned by Big Tech,” written by Amba Kak, Sarah Myers West, and Meredith Whittaker. The article, focused on the political economy and power relations of the AI industry, begins with this observation:
Put simply, in the context of the current paradigm of building larger- and larger-scale AI systems, there is no AI without Big Tech. With vanishingly few exceptions, every startup, new entrant, and even AI research lab is dependent on these firms. All rely on the computing infrastructure of Microsoft, Amazon, and Google to train their systems, and on those same firms’ vast consumer market reach to deploy and sell their AI products.
There is no AI without Big Tech. Before the era of Silicon Valley dominance, AI research programs were largely funded by a combination of government agencies such as DARPA and universities, and driven, at least at the level of researchers, by scientific inquiry (it was far from a utopia; Cold War imperatives were always a significant factor). In that era, the financing required to build the systems researchers used, and the direction of research itself, were subject to public scrutiny, at least as an option if not always in practice.
Today, the most celebrated and hyped methods, such as the resource-hungry large language models (LLMs) behind ChatGPT, Google’s recently released Gemini, and Amazon’s Q, are the product of a concentration of capital and computational resources put into service to advance the profit and market objectives of private entities such as Microsoft (the primary source of funding for OpenAI), completely beyond the reach of public scrutiny.
The Greek economist Yanis Varoufakis uses the term “technofeudalism” to describe what he sees as the tech industry’s post-capitalist nature (closer in character to feudal lords, only this time with data centers rather than walled castles or robber barons). I have problems with his argument, but I will grant Varoufakis one key point: the industry’s wealth and power are indeed built almost entirely on a rentier model that places the largest firms between us and the things we need. Rather than controlling land (although that too is part of the story: data centers require lots of land), the industry controls access to our entertainments, our memories, and our means of communication.
To this list we can add the collection of algorithmic techniques called AI, promoted as essential and inevitable, owned and commanded by the cloud giants who have crowded out earlier research efforts with programs requiring staggering amounts of data, computational power, and resources. As 20th-century Marxists were fond of saying, it is no accident that the very methods that depend on techniques controlled at scale by the tech giants are the ones we are told we can’t live without (how many times have you been told ChatGPT was the future—as inevitable as death and taxes?).
Continuing their analysis, the authors of the MIT article describe the power relationships—seldom discussed in most breathlessly adoring tech media accounts—that shape how the AI industry actually works:
Microsoft now has a seat on OpenAI’s board, albeit a nonvoting one. But the true leverage that Big Tech holds in the AI landscape is the combination of its computing power, data, and vast market reach. In order to pursue its bigger-is-better approach to AI development, OpenAI made a deal. It exclusively licenses its GPT-4 system and all other OpenAI models to Microsoft in exchange for access to Microsoft’s computing infrastructure.
For companies hoping to build base models, there is little alternative to working with either Microsoft, Google, or Amazon. And those at the center of AI are well aware of this…
A visit to the Microsoft website for what it calls its Azure OpenAI Service (the implementation of OpenAI’s platform via Microsoft’s Azure cloud computing service) shows the truth of the statement, “There is little alternative to working with either Microsoft, Google, or Amazon.” Computing hardware for AI research costs oceans of money (Microsoft’s $10 billion investment in OpenAI is an example) and demands constant maintenance—things smaller firms can scarcely afford. By offering a means through which start-ups and, really, all but the deepest-pocketed organizations can get access to what are considered cutting-edge methods, Microsoft and its fellow travelers have become the center of the AI ecosystem. The AI in your school, hospital, or police force (the list goes on) can, like roads leading to Rome, be traced back to Microsoft et al.
In the fictional world of HAL 9000, thinking machines—built at universities, watched over by scientists and engineers, disconnected from profit incentives—emerged onto the world stage, becoming a part of life, even accompanying us to the stars. In our world, now 22 years past the 2001 imagined in the film, a small and unregulated group of corporations steers the direction of research, owns the computers used for that research, and sells the results as products the world can’t do without. These products—image generators like DALL-E, text calculators like ChatGPT, and a host of other derivative systems—are being pushed into the world not as partners, as with the fabled HAL, but as profit vectors.
Power, like life itself, is not eternal. The power of the tech industry, enabled by the purposeful neglect of unconcerned, misinformed, or poorly informed governments and by modern laissez-faire policies, is not beyond challenge. There are groups, such as the Distributed AI Research Institute, and even legislation, like the flawed EU AI Act, that offer glimpses of a different approach.
To borrow from linguistics professor Emily Bender, we must “resist the urge to be impressed” and focus our thoughts and efforts instead on ensuring that the tech industry and the AI systems it sells are firmly brought under democratic control.
The alternative is a chaotic dystopia in which we’re all at the mercy of the profit-driven whims of a few companies. This isn’t a future anyone deserves. Not even Elon Musk’s (dwindling) army of reality-challenged fans.
Thank you for reading The Nation!
More from Dwayne Monroe
Fordism Comes to the Gallery—and AI Comes for the Artists
Though hyped in the media as the latest thing, the images generated by AI art systems are actually old, trapping the viewer in a time loop of kitsch.