The Future of War Is AI

AI, whatever its positives, looks like anything but what the world needs right now to save us from a hell on earth.

EDITOR’S NOTE: This article originally appeared at TomDispatch.com.

After almost 79 years on this beleaguered planet, let me say one thing: This can’t end well. Really, it can’t. And no, I’m not talking about the most obvious issues ranging from the war in Ukraine to the climate disaster. What I have in mind is that latest, greatest human invention: artificial intelligence.

It doesn’t seem that complicated to me. As a once-upon-a-time historian, I’ve long thought about what, in these centuries, unartificial and—all too often—unartful intelligence has “accomplished” (and yes, I’d prefer to put that in quotation marks). But the minute I try to imagine what that seemingly ultimate creation AI, already a living abbreviation of itself, might do, it makes me shiver. Brrr…

Let me start with honesty, which isn’t an artificial feeling at all. What I know about AI you could put in a trash bag and throw out with the garbage. Yes, I’ve recently read whatever I could in the media about it and friends of mine have already fiddled with it. TomDispatch regular William Astore, for instance, got ChatGPT to write a perfectly passable “critical essay” on the military-industrial complex for his Bracing Views newsletter—and that, I must admit, was kind of amazing.

Still, it’s not for me. Never me. I hate to say never because we humans truly don’t know what we’ll do in the future. Still, consider it my best guess that I won’t have anything actively to do with AI. (Although my admittedly less than artificially intelligent spellcheck system promptly changed “chatbox” to “hatbox” when I was e-mailing Astore to ask him for the URL to that piece of his.)

But let’s stop here a minute. Before we even get to AI, let’s think a little about LTAI (Less Than Artificial Intelligence, just in case you don’t know the acronym) on this planet. Who could deny that it’s had some remarkable successes? It created the Mona Lisa, The Starry Night, and Diego and I. Need I say more? It’s figured out how to move us around this world in style and even into outer space. It’s built vast cities and great monuments, while creating cuisines beyond compare. I could, of course, go on. Who couldn’t? In certain ways, the creations of human intelligence should take anyone’s breath away. Sometimes, they even seem to give “miracle” a genuine meaning.

And yet, from the dawn of time, that same LTAI went in far grimmer directions, too. It invented weaponry of every kind, from the spear and the bow and arrow to artillery and jet fighter planes. It created the AR-15 semiautomatic rifle, now largely responsible (along with so many disturbed individual LTAIs) for our seemingly never-ending mass killings, a singular phenomenon in this “peacetime” country of ours.

And we’re talking, of course, about the same Less Than Artificial Intelligence that created the Holocaust, Joseph Stalin’s Russian gulag, segregation and lynch mobs in the United States, and so many other monstrosities of (in)human history. Above all, we’re talking about the LTAI that turned much of our history into a tale of war and slaughter beyond compare, something that, no matter how “advanced” we became, has never—as the brutal, deeply destructive conflict in Ukraine suggests—shown the slightest sign of cessation. Although I haven’t seen figures on the subject, I suspect that there has hardly been a moment in our history when, somewhere on this planet (and often that somewhere would have to be pluralized), we humans weren’t killing each other in significant numbers.

And keep in mind that in none of the above have I even mentioned the horrors of societies regularly divided between and organized around the staggeringly wealthy and the all too poor. But enough, right? You get the idea.

Oops, I left one thing out in judging the creatures that have now created AI. In the last century or two, the “intelligence” that did all of the above also managed to come up with two different ways of potentially destroying this planet and more or less everything living on it. The first of them it created largely unknowingly. After all, the massive, never-ending burning of fossil fuels that began with the 19th-century industrialization of much of the planet was what led to an increasingly climate-changed Earth. Though we’ve now known what we were doing for decades (the scientists of one of the giant fossil-fuel companies first grasped what was happening in the 1970s), that hasn’t stopped us. Not by a long shot. Not yet anyway.

Over the decades to come, if not taken in hand, the climate emergency could devastate this planet that houses humanity and so many other creatures. It’s a potentially world-ending phenomenon (at least for a habitable planet as we’ve known it). And yet, at this very moment, the two greatest greenhouse gas emitters, the United States and China (that country now being in the lead, but the US remaining historically number one), have proven incapable of developing a cooperative relationship to save us from an all-too-literal hell on Earth. Instead, they’ve continued to arm themselves to the teeth and face off in a threatening fashion while their leaders are now not exchanging a word, let alone consulting on the overheating of the planet.

The second path to hell created by humanity was, of course, nuclear weaponry, used only twice to devastating effect in August 1945 on the Japanese cities of Hiroshima and Nagasaki. Still, even relatively small numbers of weapons from the vast nuclear arsenals now housed on Planet Earth would be capable of creating a nuclear winter that could potentially wipe out much of humanity.

And mind you, knowing that, LTAI beings continue to create ever larger stockpiles of just such weaponry as ever more countries—the latest being North Korea—come to possess them. Under the circumstances and given the threat that the Ukraine War could go nuclear, it’s hard not to think that it might just be a matter of time. In the decades to come, the government of my own country is, not atypically, planning to put another $2 trillion into ever more advanced forms of such weaponry and ways of delivering them.

Entering the AI Era

Given such a history, you’d be forgiven for imagining that it might be a glorious thing for artificial intelligence to begin taking over from the intelligence responsible for so many dangers, some of them of the ultimate variety. And I have no doubt that, like its ancestor (us), AI will indeed prove anything but one-sided. It will undoubtedly produce wonders in forms that may as yet be unimaginable.

Still, let’s not forget that AI was created by those of us with LTAI. If now left to its own devices (with, of course, a helping hand from the powers that be), it seems reasonable to assume that it will, in some way, essentially repeat the human experience. In fact, consider that a guarantee of sorts. That means it will create beauty and wonder and—yes!—horror beyond compare (and perhaps even more efficiently so). Lest you doubt that, just consider which part of humanity already seems the most intent on pushing artificial intelligence to its limits.

Yes, across the planet, departments of “defense” are pouring money into AI research and development, especially the creation of unmanned autonomous vehicles (think: killer robots) and weapons systems of various kinds, as Michael Klare pointed out recently at TomDispatch when it comes to the Pentagon. In fact, it shouldn’t shock you to know that five years ago (yes, five whole years!), the Pentagon was significantly ahead of the game in creating a Joint Artificial Intelligence Center to, as The New York Times put it, “explore the use of artificial intelligence in combat.” There, it might, in the end—and “end” is certainly an operative word here—speed up battlefield action in such a way that we could truly be entering unknown territory. We could, in fact, be entering a realm in which human intelligence in wartime decision-making becomes, at best, a sideline activity.

Only recently, AI creators, tech leaders, and key potential users, more than 1,000 of them, including Apple co-founder Steve Wozniak and billionaire Elon Musk, had grown anxious enough about what such a thing—such a brain, you might say—let loose on this planet might do that they called for a six-month moratorium on its development. They feared “profound risks to society and humanity” from AI and wondered whether we should even be developing “nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us.”

The Pentagon, however, instantly responded to that call this way, as David Sanger reported in The New York Times: “Pentagon officials, speaking at technology forums, said they thought the idea of a six-month pause in developing the next generations of ChatGPT and similar software was a bad idea: The Chinese won’t wait, and neither will the Russians.” So, full-speed ahead and skip any international attempts to slow down or control the development of the most devastating aspects of AI!

And I haven’t even bothered to mention how, in a world already seemingly filled to the brim with mis- and disinformation and wild conspiracy theories, AI is likely to be used to create yet more of the same of every imaginable sort, a staggering variety of “hallucinations,” not to speak of churning out everything from remarkable new versions of art to student test papers. I mean, do I really need to mention anything more than those recent all-too-realistic-looking “photos of Donald Trump being aggressively arrested by the NYPD and Pope Francis sporting a luxurious Balenciaga puffy coat circulating widely online”?

I doubt it. After all, image-based AI technology, including striking fake art, is on the rise in a significant fashion and, soon enough, you may not be able to detect whether the images you see are “real” or “fake.” The only way you’ll know, as Meghan Bartels reports in Scientific American, could be thanks to AI systems trained to detect—yes!—artificial images. In the process, of course, all of us will, in some fashion, be left out of the picture.

On the Future, Artificially Speaking

And of course, that’s almost the good news when, with our present all-too-Trumpian world in mind, you begin to think about how Artificial Intelligence might make political and social fools of us all. Given that I’m anything but one of the better-informed people when it comes to AI (though on Less Than Artificial Intelligence I would claim to know a fair amount more), I’m relieved not to be alone in my fears.

In fact, among those who have spoken out fearfully on the subject is the man known as “the godfather of AI,” Geoffrey Hinton, a pioneer in the field of artificial intelligence. He only recently quit his job at Google to express his fears about where we might indeed be heading, artificially speaking. As he told The New York Times recently, “The idea that this stuff could actually get smarter than people—a few people believed that, but most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”

Now, he fears not just the coming of killer robots beyond human control but, as he told Geoff Bennett of the PBS NewsHour, “the risk of super intelligent AI taking over control from people.… I think it’s an area in which we can actually have international collaboration, because the machines taking over is a threat for everybody. It’s a threat for the Chinese and for the Americans and for the Europeans, just like a global nuclear war was.”

And that, indeed, is a hopeful thought, just not one that fits our present world of hot war in Europe, cold war in the Pacific, and division globally.

I, of course, have no way of knowing whether Less Than Artificial Intelligence of the sort I’ve lived with all my life will indeed be sunk by the AI carrier fleet or whether, for that matter, humanity will leave AI in the dust by, in some fashion, devastating this planet all on our own. But I must admit that AI, whatever its positives, looks like anything but what the world needs right now to save us from a hell on earth. I hope for the best and fear the worst as I prepare to make my way into a future that I have no doubt is beyond my imagining.
