How Sam Altman Ran Afoul of the Keepers of the AI Faith

Altman’s departure from OpenAI was less a managerial dispute than a religious schism. 

You may not have heard of OpenAI and its nerd-celebrity CEO Sam Altman before this weekend, but OpenAI has probably heard of you. As the creator of the wildly popular ChatGPT chatbot, OpenAI has been training its software on unfathomable amounts of Internet data that happen to include a great deal of “publicly available information that is freely and openly available on the internet”—your tweets, perhaps, or even your personal information. OpenAI, which is technically a start-up but is locked into a complicated ownership structure that mixes some profit-making capacity with nonprofit supervision, has soared to an $80 billion valuation. It’s been helped along by $13 billion in investment and computing resources from Microsoft, its main patron.

In the last year, Altman has become the foremost figure in tech’s AI boom, and on Friday, OpenAI’s board suddenly and summarily fired him. Now, with rank-and-file staff eyeing the exits, the future of the world’s hottest start-up may be in jeopardy.

“Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities,” OpenAI announced on its company blog. “The board no longer has confidence in his ability to continue leading OpenAI.”

Crucially, OpenAI didn’t say what Altman had actually done. What wasn’t he “candid” about? Had he lied to his own board? No one seemed to know, so chaos, speculation, and gossip took over.

Altman’s firing was inexplicable for an industry that had quickly lionized him. Altman was formerly the head of Y Combinator, the premier Silicon Valley start-up incubator, which made him one of the best-connected people in tech and a favorite of the venture-capitalist class. Immediately upon his firing, there was speculation that he would found a new company by the end of the weekend. Indeed, hundreds of OpenAI employees pledged to follow him wherever he went, as did some of his financial backers. Altman reportedly tried to engineer a countercoup that would see him reinstalled as the company’s CEO and some of his board enemies vanquished. It didn’t work.

But on Sunday, as the clock inched toward midnight, The Information reported that Altman had secured a cushy new job leading Microsoft’s AI research lab—which, by extension, includes some authority over Microsoft’s relationship with his former employer. Hundreds of OpenAI employees are still expected to follow Altman to Microsoft, which has practically unlimited money to pour into an industry that devours hardware, data, talent, and cash in astonishing quantities. And Altman has the full support of Microsoft CEO Satya Nadella.

Amid all this frantic reshuffling, OpenAI’s board stood behind its firing of Altman, writing in a statement: “Put simply, Sam’s behavior and lack of transparency in his interactions with the board undermined the board’s ability to effectively supervise the company in the manner it was mandated to do.”

So what did Altman do? The follow-up statement offered no clarity; it only suggested that he had undermined the board’s ability to carry out its responsibilities, which center on protecting the company’s nonprofit component. The allusion to possible deception seems significant. In recent months, Altman was reportedly traveling in the Middle East, hoping to convince sovereign wealth funds to invest in a new AI chip start-up—an immensely ambitious project that would require billions in capital and years of runway, among other challenges.

Many tech observers have suggested that a quasi-religious schism is at play. Altman is a move-fast-and-break-things type of operator: someone who believes in pushing artificial intelligence research quickly and boldly, come what may. Like other AI executives, he has said that the technology could one day lead to a global apocalypse but that, despite that risk, it remains too important not to pursue. This general attitude is widespread in Silicon Valley. Rather than being cowed by risk, many tech boosters display an almost religious faith in the necessity of pursuing Artificial General Intelligence, or AGI, a computer intelligence that would come to rival that of human beings.

Altman’s rivals at OpenAI seem to belong to the more conservative camp, believing that by trying to create God in a box they may be going too far. Perhaps Altman’s heedless embrace of chatbots-for-all—he is AI’s most effective salesman—ran up against the pragmatism of more cautious researchers. That all of this betrays a towering egotism as well as a supreme faith in the power of technology usually goes unsaid. In Silicon Valley, AGI is a prophetic inevitability, a necessity. The only real question is how to manage the potentially seismic threat of massive planetary disruption that comes with the wholesale adoption of AI. Until Friday, some thought that OpenAI was the industry leader best positioned to do just that.

The AI faith has many popes—an almost exclusively white male cohort of thirtysomething executives and programmers who genuinely believe they are working on the most important thing in the world. Right now, they command the attention of the tech industry, its financial backers, politicians, and the millions of people who have taken up hallucinating chatbots as productivity tools or simply idle entertainments. Tens of billions of dollars, vast amounts of chip-making production capacity, and some important policymakers are exclusively focused on elevating the religion of AI, embedding it throughout our economy, educational system, and military operations.

Astute analysts and scholars have pointed out that AI’s real, immediate threat is not that it surpasses human intelligence. Rather, it’s that these systems replace humans or become our overseers, a flawed technology dictating not only the terms of our labor but also our patterns of material consumption, and increasingly our politics. This dark eventuality is already emerging—in call centers, in algorithms that determine prisoners’ parole, and in chatbots flooding social networks with bad information. The utopian potential of AI remains hazy and mostly undefined—something in the distance that only seers like Altman can lead us to. The industry’s high priests remain utterly sure of their own prophecies, so when an industry eminence is challenged by his own board, the response isn’t to ask whether there might be a problem in the brave new vision of a social and digital world imbued with AI. No, as in past challenges to the one true faith, the mandate for believers is to double down on the messianic conviction that promises their universal deliverance. The one sure thing to emerge after Sam Altman’s weekend of high-managerial chaos is that he will continue to preach his gospel from the lucrative empyrean of Microsoft.
