To Preserve Our Humanity, We Must Ban Killer Robots

Algorithms could create a perfect killing machine, stripped of the empathy and conscience that might hold a human back.

A dystopian nightmare, in which machines make life-and-death decisions on the battlefield or in policing scenarios, is not far away. It’s not Skynet or Cylons—at least, not yet—but the development of weapons with decreasing amounts of human control is already underway.

More than 380 partly autonomous weapon systems have been deployed or are being developed in at least 12 countries, including China, France, Israel, South Korea, Russia, the United Kingdom, and the United States. South Korea deploys mechanized sentries in the demilitarized zone, while Israel’s Iron Dome detects and destroys short-range rockets. US missile-defense systems like the Patriot and Aegis are semi-autonomous, and the US military has completed testing of an autonomous anti-submarine vessel, which is able to sink other submarines and ships without anyone on board. The United Kingdom is developing Taranis, a drone that can avoid radar detection and fly in autonomous mode. Russia has built a robot tank that can be fitted with a machine gun or grenade launcher, and has manufactured a fully automated gun that uses artificial neural networks to choose targets. China is developing weapon “swarms”—small drones that could be fitted with heat sensors and programmed to attack anything that gives off the heat of a human body.

If this trend continues unconstrained, humans will eventually be cut out of crucial decision-making. Some people in advanced militaries desire this. But many roboticists, scientists, tech workers, philosophers, ethicists, legal scholars, human-rights defenders, peace-and-disarmament activists, and governments of countries with less-advanced militaries have called for an international ban on the development of such weapons.

The risks of killer robots

Proponents of fully autonomous weapon systems argue that these weapons will keep human soldiers in the deploying force out of danger and that they will be more “precise.” They believe these weapons will make calculations and decisions more quickly than humans, and that those decisions—in targeting and in attack—will be more accurate. They also argue that the weapons will not have emotional responses to situations—they won’t go on a rampage out of revenge; they won’t rape.

But many tech workers, roboticists, and legal scholars believe that we will never be able to program robots to accurately and consistently discriminate between soldiers and civilians in times of conflict. “Although progress is likely in the development of sensory and processing capabilities, distinguishing an active combatant from a civilian or an injured or surrendering soldier requires more than such capabilities,” explained Bonnie Docherty of Harvard Law School and Human Rights Watch. “It also depends on the qualitative ability to gauge human intention, which involves interpreting the meaning of subtle clues, such as tone of voice, facial expressions, or body language, in a specific context.”

There are also widespread concerns about programming human bias into killer robots. The practice of “signature strikes” already uses identifiers such as “military-age male” to target and execute killings—and to justify them afterward. Imagine a machine programmed with prejudice on the basis of race, sex, gender identity, sexual orientation, socioeconomic status, or ability. Imagine its deployment not just in war but in policing situations.

This dehumanization of targets would be matched by dehumanization of attacks. Algorithms would create a perfect killing machine, stripped of the empathy, conscience, or emotion that might hold a human soldier back. Proponents of autonomous weapons have argued that this is exactly what would make them better than human soldiers. They say machines would do a better job of complying with the laws of war than humans do, because they would lack human emotions. But this also means they would not possess mercy or compassion. They would not hesitate or challenge a commanding officer’s deployment or instruction. They would simply do as they have been programmed to do—and if that includes massacring everyone in a village, they would do so without hesitation.

This delegation of violence also has implications for accountability and liability. Who is responsible if a robot kills civilians or destroys houses, schools, and marketplaces? Is it the military commander who ordered its deployment? The programmer who designed or installed the algorithms? The hardware or software developers? We can’t lock up a machine for committing war crimes—so who should pay the penalty?

Being killed by a gun, a drone, or a bomb may amount to the same end. But there is a particular moral repugnance to the idea of a killer robot. Peter Maurer, the president of the International Committee of the Red Cross, has written that “It’s the human soldier or fighter—not a machine—who understands the law and the consequences of violating it, and who is responsible for applying it.” The implication of having an amoral algorithm determine when to use force is that we’ll likely see more conflict and killing, not less.

Then there is the argument that autonomous weapons will save lives. As we have seen with armed drones, remote-controlled weapons have made war less “costly” to the user of the weapon. Operators safely ensconced in their electronic fighting stations thousands of miles away don’t face immediate retaliation for their acts of violence. While this is obviously attractive to advanced militaries, which don’t have to risk the lives of their soldiers, it arguably raises the cost of war for everyone else. It lowers the threshold for the use of force, especially in situations where the opposing side does not have equivalent systems to deploy in response. In the near future, autonomous weapon systems are not likely to result in an epic battle of robots, where machines fight machines. Instead, they would likely be unleashed upon populations that might not be able to detect their imminent attack and might have no equivalent means with which to fight back. Thus the features that might make autonomous weapons attractive to technologically advanced countries looking to preserve the lives of their soldiers will inevitably push the burden of risk and harm onto the rest of the world.

These features also fundamentally change the nature of war. The increasing automation of weapon systems helps to take war and conflict outside of the view of the deploying countries’ citizenry. If its own soldiers aren’t coming home in body bags, will the public pay attention to what its government does abroad? Does it care about the soldiers or the civilians being killed elsewhere? From what we have seen with the use of drones, it seems that it is easier for governments to sell narratives about terrorism and victory if their populations can’t see or feel the consequences themselves.

Ensuring meaningful human control

Avoiding this is what led to the initiation of discussions on killer robots at the UN in Geneva five years ago. The talks have been held within the context of a treaty focused on prohibiting or restricting the use of weapons that have “excessively injurious” or indiscriminate effects—colloquially known as the Convention on Certain Conventional Weapons (CCW).

Since 2014, the meetings have focused on building a common understanding about the meaning of human control and the risks of fully autonomous weapons. Most governments, along with the International Committee of the Red Cross (ICRC) and the Campaign to Stop Killer Robots, have reached the conclusion that humans must maintain control over programming, development, activation, and/or operational phases of a weapon system.

The Campaign, which currently comprises some 82 organizations from 35 countries, has consistently called for the negotiation of a treaty prohibiting the development and use of fully autonomous weapon systems. About 26 countries currently support a preemptive ban. The Non-Aligned Movement, the largest bloc of states operating in the UN, has called for a legally binding instrument stipulating prohibitions and regulations of such weapons. Austria, Brazil, and Chile support the negotiation of “a legally binding instrument to ensure meaningful human control over the critical functions” of weapon systems. A few others have expressed their interest in non–legally binding mechanisms, such as a political declaration proposed by France and Germany.

Whatever the differences in these proposed approaches, these states agree on one key thing: Fully autonomous weapon systems must never be developed or used. From the Campaign’s perspective, a legally binding ban is the best option. This has proven—through bans on biological, chemical, and nuclear weapons, as well as landmines and cluster munitions—to be the best way to stigmatize weapons socially and politically. A legal prohibition is also necessary to involve related industries in ensuring against the misuse of relevant materials and technologies. In 1995, the CCW preemptively banned blinding laser weapons, showing that the forum can take action before a weapon system is developed if there is the political will to do so.

Thousands of scientists and artificial-intelligence experts have endorsed the prohibition. In July 2018, they issued a pledge not to assist with the development or use of fully autonomous weapons. Four thousand Google employees recently signed a letter demanding their company cancel its Project Maven contract with the Pentagon, which was geared toward “improving” drone strikes through artificial intelligence. Twelve hundred academics announced their support for the tech workers. More than 160 faith leaders and more than 20 Nobel Peace Prize laureates back the ban. Several international and national public-opinion polls have found that a majority of people who responded opposed developing and using fully autonomous weapons.

But a tiny handful of states are opposed to legally binding or political responses to the threats posed by autonomous weapons. The United States has argued that governments and civil society must not “stigmatize new technologies” or set new international standards, but instead work to ensure the “responsible use of weapons.” Together with Australia, Israel, Russia, and South Korea, the United States spent the latest CCW meeting in August 2018 arguing that any concrete action is “premature,” and demanded that the CCW spend next year exploring potential “benefits” of autonomous weapon systems. Unfortunately, the CCW operates on the basis of consensus, which is interpreted to require unanimity. This means that these five countries were able to block any moves to stop the development of these weapons.

This manipulation of consensus-based forums is rife in the UN system. It has led to deadlock on many disarmament and arms-control issues over the past 20 years. Right now, the recommendation in the CCW is to simply extend discussions for another 10 days in 2019. This hardly meets the urgency of the problem or the pace of technological development.

Governments need to decide soon what they want to do. Will the majority allow a handful of countries to careen us into a dystopian nightmare? Or will the majority of governments work together to develop new laws now, even without the support of the handful of states that want to build killer robots?

At the end of the day, the ask is simple: Weapons must be under human control. We already experience far too much violence among human beings. How can we risk further automating this violence? Fighting to retain human control over violence is not just about preventing mechanized death and destruction; it’s also about calling ourselves to account for the violence we already perpetuate. Maybe this can be a wake-up call for us all—one that we would do well to heed now, before it’s too late.
