How Sam Altman Ran Afoul of the Keepers of the AI Faith

Altman’s departure from OpenAI was less a managerial dispute than a religious schism. 

You may not have heard of OpenAI and its nerd-celebrity CEO Sam Altman before this weekend, but OpenAI has probably heard of you. As the creator of the wildly popular ChatGPT chatbot, OpenAI has been training its software on unfathomable amounts of Internet data that happen to include a great deal of “publicly available information that is freely and openly available on the internet”—your tweets, perhaps, or even your personal information. OpenAI, which is technically a start-up but is locked into a complicated ownership structure that mixes some profit-making capacity with nonprofit supervision, has soared to an $80 billion valuation. It’s been helped along by $13 billion in investment and computing resources from Microsoft, its main patron.

In the last year, Altman has become the foremost figure in tech’s AI boom, and on Friday, OpenAI’s board suddenly and summarily fired him. Now, with rank-and-file staff eyeing the exits, the future of the world’s hottest start-up may be in jeopardy.

“Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities,” OpenAI announced on its company blog. “The board no longer has confidence in his ability to continue leading OpenAI.”

Crucially, OpenAI didn’t say what Altman had actually done. What wasn’t he “candid” about? Had he lied to his own board? No one seemed to know, so chaos, speculation, and gossip took over.

Altman’s firing was inexplicable to an industry that had quickly lionized him. Altman was formerly the head of Y Combinator, the premier Silicon Valley start-up incubator, which made him one of the best-connected people in tech and a favorite of the venture-capitalist class. Immediately upon his firing, there was speculation that he would found a new company by the end of the weekend. Indeed, hundreds of OpenAI employees pledged to follow him wherever he went, as did some of his financial backers. Altman reportedly tried to engineer a countercoup that would see him reinstalled as the company’s CEO and some of his board enemies vanquished. It didn’t work.

But on Sunday, as the clock inched toward midnight, The Information reported that Altman had secured a cushy new job leading Microsoft’s AI research lab—which, by extension, includes some authority over Microsoft’s relationship with his former employer. Hundreds of OpenAI employees are still expected to follow Altman to Microsoft, which has practically unlimited money to pour into an industry that devours hardware, data, talent, and cash in astonishing quantities. And Altman has the full support of Microsoft CEO Satya Nadella.

Amid all this frantic reshuffling, OpenAI’s board stood behind its firing of Altman, writing in a statement: “Put simply, Sam’s behavior and lack of transparency in his interactions with the board undermined the board’s ability to effectively supervise the company in the manner it was mandated to do.”

So what did Altman do? The follow-up statement offered no clarity; it only suggested that his conduct had somehow undermined the board’s responsibilities, which center on protecting the company’s nonprofit mission. The allusion to possible deception seems significant. In recent months, Altman was reportedly traveling in the Middle East, hoping to convince sovereign wealth funds to invest in a new AI chip start-up—an immensely ambitious project that would require billions in capital and years of runway, among other challenges.

Many tech observers have suggested that a quasi-religious schism is at play. Altman is a move-fast-and-break-things type of operator: someone who believes in pushing artificial intelligence research quickly and boldly, come what may. Like other AI executives, he has said that the technology could one day lead to a global apocalypse, but that, despite the risk, it is too important not to pursue. This general attitude is widespread in Silicon Valley. Rather than being cowed by risk, many tech boosters display an almost religious faith in the necessity of pursuing Artificial General Intelligence, or AGI, a computer intelligence that would come to rival that of human beings.

Altman’s rivals at OpenAI seem to belong to the more conservative camp, believing that by trying to create God in a box they may be going too far. Perhaps Altman’s heedless embrace of chatbots-for-all—he is AI’s most effective salesman—ran up against the pragmatism of more cautious researchers. That all of this betrays a towering egotism as well as a supreme faith in the power of technology usually goes unsaid. In Silicon Valley, AGI is a prophetic inevitability, a necessity. The only real question is how to manage the potentially seismic threat of massive planetary disruption that comes with the wholesale adoption of AI. Until Friday, some thought that OpenAI was the industry leader best positioned to do just that.

The AI faith has many popes—an almost exclusively white male cohort of thirtysomething executives and programmers who genuinely believe they are working on the most important thing in the world. Right now, they command the attention of the tech industry, its financial backers, politicians, and the millions of people who have taken up hallucinating chatbots as productivity tools or simply idle entertainments. Tens of billions of dollars, vast amounts of chip-making production capacity, and some important policymakers are exclusively focused on elevating the religion of AI, embedding it throughout our economy, educational system, and military operations.

Astute analysts and scholars have pointed out that AI’s real, immediate threat is not that it surpasses human intelligence. Rather, it’s that these systems replace humans or become our overseers, a flawed technology dictating not only the terms of our labor but also our patterns of material consumption, and increasingly our politics. This dark eventuality is already emerging—in call centers, in algorithms that determine prisoners’ parole, and in chatbots flooding social networks with bad information. The utopian potential of AI remains hazy and mostly undefined—something in the distance that only seers like Altman can lead us to. The industry’s high priests remain utterly sure of their own prophecies, so when an industry eminence is challenged by his own board, the response isn’t to ask whether there might be a problem in the brave new vision of a social and digital world imbued with AI. No, as in past challenges to the one true faith, the mandate for believers is to double down on the messianic conviction that promises their universal deliverance. The one sure thing to emerge after Sam Altman’s weekend of high-managerial chaos is that he will continue to preach his gospel from the lucrative empyrean of Microsoft.
