Culture / December 19, 2023

Silicon Landlords: On the Narrowing of AI’s Horizon

The one thing science fiction couldn’t imagine is the world we have now: the near-complete control of Artificial Intelligence by a few corporations whose only goal is profit.

Dwayne Monroe
Sam Altman, chief executive officer of OpenAI, at the Hope Global Forums annual meeting in Atlanta, Ga., on Monday, December 11, 2023. (Dustin Chambers / Bloomberg via Getty Images)

As HAL 9000, the true star of Stanley Kubrick's landmark film 2001: A Space Odyssey, died a silicon death by memory-module removal, the machine, reduced to its infant state (the moment it became operational), recited:

“Good afternoon, gentlemen. I am a HAL 9000 computer. I became operational at the H.A.L. plant in Urbana, Illinois on the 12th of January 1992. My instructor was Mr. Langley, and he taught me to sing a song…”

In HAL’s fictional biography, written by Arthur C. Clarke for both the film’s script and its novelization, HAL, a Heuristically programmed ALgorithmic computer, was theorized, engineered, and built at the University of Illinois’s Coordinated Science Laboratory, where the real Illinois Automatic Computer (ILLIAC) supercomputers were built from the 1950s until the 1970s. Embedded within the idea of HAL is an assumption: that the artificial intelligence (AI) research programs of the mid-to-late 20th century, centered on universities and driven by scientific inquiry (and, yes, military imperatives), would continue uninterrupted into the future, eventually producing thinking machines that would be our partners.

Ironically for a work of the imagination, it turns out that what was unimaginable in the late 1960s for the makers of 2001 was the eventual near-complete control of the field of AI by a small group of North American corporations—the Silicon Valley triumvirate of Amazon, Microsoft, and Google—whose only goal (hyped claims and declarations of serving humanity aside) is profit. These companies claim to be producing HAL-esque machines (which, they suggest, exhibit signs of AGI: artificial general intelligence) but are actually producing narrow systems that enable the extraction of profit for these digital landlords while allowing them to maintain control over access to a technology they dominate and tirelessly work to insert into every aspect of life.

On December 5 of this year, MIT Technology Review published an article titled “Make no mistake—AI is owned by Big Tech,” written by Amba Kak, Sarah Myers West, and Meredith Whittaker. The article, focused on the political economy and power relations of the AI industry, begins with this observation:

Put simply, in the context of the current paradigm of building larger- and larger-scale AI systems, there is no AI without Big Tech. With vanishingly few exceptions, every startup, new entrant, and even AI research lab is dependent on these firms. All rely on the computing infrastructure of Microsoft, Amazon, and Google to train their systems, and on those same firms’ vast consumer market reach to deploy and sell their AI products.


There is no AI without Big Tech. Before the era of Silicon Valley dominance, AI research programs were largely funded by a combination of government agencies such as DARPA and universities, and were driven, at least at the level of researchers, by scientific inquiry (it was far from a utopia; Cold War imperatives were always a significant factor). In that era, the financing that built researchers’ systems and the direction of research were subject to public scrutiny, at least in principle if not always in practice.

Today, the most celebrated and hyped methods, such as resource-hungry large language models (LLMs)—ChatGPT, Google’s recently released Gemini, and Amazon’s Q—are the product of a concentration of capital and computational resources put into the service of the profit and market objectives of private entities such as Microsoft (the primary source of funding for OpenAI), entirely beyond the reach of public scrutiny.

The Greek economist Yanis Varoufakis uses the term “technofeudalism” to describe what he sees as the tech industry’s post-capitalist nature (closer in character to feudal lords, only this time with data centers rather than walled castles or robber barons). I have problems with his argument, but I will grant Varoufakis one key point: the industry’s wealth and power are indeed built almost entirely on a rentier model that places the largest firms between us and the things we need. Rather than controlling land (although land, too, is part of the story: data centers require lots of it), the industry controls access to our entertainments, our memories, and our means of communication.

To this list we can add the collection of algorithmic techniques called AI, promoted as essential and inevitable, owned and commanded by the cloud giants who have crowded out earlier research efforts with programs requiring staggering amounts of data, computational power, and other resources. As 20th-century Marxists were fond of saying, it is no accident that the very methods controlled at scale by the tech giants are the ones we are told we can’t live without (how many times have you been told that ChatGPT is the future, as inevitable as death and taxes?).


Continuing their analysis, the authors of the MIT article describe the power relationships—seldom discussed in most breathlessly adoring tech media accounts—that shape how the AI industry actually works:

Microsoft now has a seat on OpenAI’s board, albeit a nonvoting one. But the true leverage that Big Tech holds in the AI landscape is the combination of its computing power, data, and vast market reach. In order to pursue its bigger-is-better approach to AI development, OpenAI made a deal. It exclusively licenses its GPT-4 system and all other OpenAI models to Microsoft in exchange for access to Microsoft’s computing infrastructure.

For companies hoping to build base models, there is little alternative to working with either Microsoft, Google, or Amazon. And those at the center of AI are well aware of this…

A visit to the Microsoft website for what it calls its Azure OpenAI Service (the implementation of OpenAI’s platform via Microsoft’s Azure cloud computing service) shows the truth of the statement, “There is little alternative to working with either Microsoft, Google, or Amazon.” Computing hardware for AI research costs oceans of money (Microsoft’s $10 billion investment in OpenAI is an example) and demands constant maintenance—things smaller firms can scarcely afford. By offering a means through which start-ups and, really, all but the deepest-pocketed organizations can get access to what are considered cutting-edge methods, Microsoft and its fellow travelers have become the center of the AI ecosystem. The AI in your school, hospital, or police force (the list goes on) can, like roads leading to Rome, be traced back to Microsoft et al.

In the fictional world of HAL 9000, thinking machines—built at universities, watched over by scientists and engineers, disconnected from profit incentives—emerged onto the world stage, becoming a part of life, even accompanying us to the stars. In our world, now 22 years past the 2001 imagined in the film, a small and unregulated group of corporations steers the direction of research, owns the computers used for that research, and sells the results as products the world can’t do without. These products—image generators like DALL-E, text calculators like ChatGPT, and a host of other derivative systems—are being pushed into the world not as partners, as with the fabled HAL, but as profit vectors.

Power, like life itself, is not eternal. The power of the tech industry, sustained by the purposeful neglect of unconcerned or ill-informed governments and by modern laissez-faire policies, is not beyond challenge. There are groups, such as the Distributed AI Research Institute, and even legislation, like the flawed EU AI Act, that offer glimpses of a different approach.

To borrow from the linguistics professor Emily Bender, we must “resist the urge to be impressed” and focus our thoughts and efforts instead on ensuring that the tech industry and the AI systems it sells are firmly brought under democratic control.

The alternative is a chaotic dystopia in which we’re all at the mercy of the profit-driven whims of a few companies. This isn’t a future anyone deserves. Not even Elon Musk’s (dwindling) army of reality-challenged fans.


Dwayne Monroe

Dwayne Monroe is a cloud architect, Marxist tech analyst, and Internet polemicist based in Amsterdam. He is currently writing a book, Attack Mannequins, exploring the use of AI as propaganda.
