Hollywood’s Robot Overlords

The tech-driven takeover of the film industry aims at squeezing maximum profit out of art, draining it of all its humanity in the process.

Improbable as it may seem, Hollywood, Calif., is now arguably the epicenter of labor strife in the United States. For more than two months, the screenwriters have been out on strike. Now, Hollywood’s actors have joined them. It is the first time in 63 years that writers and actors have struck simultaneously. Roughly 160,000 actors belong to SAG-AFTRA, and 11,500 writers are members of the Writers Guild of America. That’s an awful lot of people flexing their economic muscle simultaneously.

Last week, luminaries such as George Clooney, Matt Damon, and Susan Sarandon could be found walking the picket lines. And directors such as Christopher Nolan, who is currently basking in the acclaim for his new movie, Oppenheimer, have spoken of the movie industry being at an inflection point, and of the need for the studios to negotiate new contracts with actors and writers that reflect the industry's changed realities in an era of Internet streaming and artificial intelligence.

Put simply, actors and writers are being royally ripped off by a compensation system that chronically underpays the residuals they are owed when viewers stream their work. Actors face the additional risk that AI-generated likenesses will be used either in lieu of, or to supplement, the real thing.

This is an issue that the creative classes are, increasingly, having to confront. In recent months, AI has been used to create soundalike songs by artists such as Drake. AI software advertises the ability to create Van Gogh–like art. Researchers have even worked with artificial intelligence to “finish” Beethoven’s incomplete tenth symphony. In Brazil, Elis Regina, a renowned singer who died of a suspected drug overdose in 1982, was “resurrected” by AI so that she could perform a duet with her now-adult daughter in a car commercial for Volkswagen.

All of this work is derivative. It relies on feeding vast amounts of data into computer systems that learn to respond in kind, up to and including bringing avatars of the dead onto television screens. That’s not the “creative spark,” or the unexplainable, irreducible inspiration that has, since time immemorial, fueled human accomplishments in the arts. It’s a probability game, made possible by rapid computational capabilities. When AI writes words or music, paints pictures, or creates facsimiles of real actors who populate scenes in a film, it is, in essence, pillaging the intellectual commons created by generations of thinkers and artists. It is “learning,” not to become a creative master in its own right but to produce a realistic simulacrum that can be used to turn a quick buck. It’s not terribly different from the work of a skilled forger.

That’s why growing numbers of these thinkers and writers are finally rising up against Meta, Google, OpenAI, and other companies. In some cases, they are suing those companies for copyright infringement or for the unauthorized, and uncompensated, use of intellectual material to create AI systems that can generate quick—and large—profits. In May of this year, an open letter from a coalition of musicians, writers, and artists described the vacuum cleaner–like action of large language models, which hoover up anything and everything in their path and regurgitate it, as the “greatest art heist in history.” They charged the “respectable-seeming corporate entities backed by Silicon Valley venture capital” with what amounts to “daylight robbery.”

This is a quintessential California story: creative classes ranged against each other, artists versus techno-utopians, writers and actors versus programmers and creators of neural networks.

The endless need to reinvent and to break down norms, no matter how much else gets broken in the process, has defined Silicon Valley for more than half a century. This dynamic is now playing out at warp speed in the AI arms race between tech behemoths. Astoundingly, a significant number of the top scientists and engineers at these companies are plunging ahead with the development of ever-more-powerful artificial intelligence even as they acknowledge a non-trivial risk that these innovations could inflict catastrophic damage on human civilization. That’s not just shortsighted; it’s epically unethical.

When my mother was a young student at UC Berkeley in the 1960s, there was a physics professor there by the name of Edward Teller. Teller was one of the smartest men on the planet. He was also the brains behind America’s hydrogen bomb—a fusion-based explosive vastly more powerful than the fission-based atomic weapons that obliterated Hiroshima and Nagasaki and led Oppenheimer to declare, “Now I am become death, the destroyer of worlds.” Oppenheimer had refused to work on the hydrogen bomb, viewing it as all too likely to result in the destruction of humanity. Teller, by contrast, viewed it largely as an intellectual project: He knew that nuclear theory pointed to the possibility of such a weapon, and it was, for him, an irresistible challenge to see if he could conjure it up in real life.

By the time my mother was at Berkeley, Teller, who had turned on Oppenheimer and urged the government to withdraw his security clearance because of his opposition to the H-bomb, was vilified by many for the moral shortsightedness of his actions: he had put his genius to work in pursuit of inventions with the potential to deliver death on a virtually unimaginable scale. I wonder if, years from now, we will look askance in the same way at the geniuses of Silicon Valley who plunged ahead with AI technology that they themselves realized had the potential to sabotage human culture, to destroy the job security of millions of workers, and, quite possibly, to ultimately undermine the web of life.
