When AI and Robotics Combine

The coming of “killer robots.”


What happens when you marry generative artificial intelligence (AI) software like ChatGPT or Bard with advanced robotic hardware? According to engineers at Google, you wind up with a “smart” robot capable of performing a wide range of useful tasks without human supervision, such as cleaning a home or distributing mail and packages. “This really opens up using robots in environments where people are, in office environments, in home environments, in all the places where there are a lot of physical tasks that need to be done,” said Vincent Vanhoucke, Google DeepMind’s head of robotics. But there is another potential use for such devices that Google has not discussed openly: as weapons of war.

Until now, most robotic devices have been too “dumb” to perform anything beyond limited, preprogrammed functions. Conventional robots of this sort are now widely used on auto assembly lines and in similar operations. But Google and other Silicon Valley firms see a promising commercial future for AI-enabled robots capable of performing a much wider range of activities, including those requiring some degree of independent, or “autonomous,” action. In July, Google unveiled Robotic Transformer 2 (RT-2), an AI-enabled device capable of sensing its surroundings and manipulating objects based on its own interpretation of operator instructions, such as “pick up the bag about to fall off the table.” More advanced versions of RT-2, it is widely assumed, will soon be replacing humans in the medical, logistics, and service industries.

At the same time, these technologies are being incorporated by the US military—and those of the other major powers—into devices intended for purely military use. Drawing on advances in the commercial tech industry, military organizations are combining AI software with drone ships, planes, and tanks to create what some call “lethal autonomous weapons systems” and others “killer robots”—that is, weapons capable of identifying, tracking, and striking enemy targets with minimal or nonexistent human oversight. In military lingo, these can include robotic combat vehicles (RCVs), unmanned surface vessels (USVs), and unmanned aerial vehicles (UAVs).

Experimental versions of all these weapons types have been used with notable effect by both sides in the Ukraine conflict. On August 4, for example, a Ukrainian USV, reported to be a Magura-V5, was used to inflict significant damage on a Russian amphibious landing ship, the Olenegorsky Gornyak, near the Russian port of Novorossiysk on the Black Sea. Ukrainian UAVs have also been used in several recent attacks on office buildings in central Moscow, some said to house key government ministries. Russia, for its part, has employed its Orlan-10 UAV for attacks on Ukrainian combat forces and the Iranian-supplied Shahed-131 “suicide” explosive drone for attacks on Ukrainian cities. (Such devices are called “suicide” or “kamikaze” drones because they are designed to crash into their intended target and detonate an attached explosive charge.) The United States, while not a direct party to the Ukraine conflict, has supplied a number of experimental UAVs to Ukrainian forces, including the AeroVironment Switchblade suicide drone and the Aevex Aerospace Phoenix Ghost, a similar device.

Most of these munitions are under some form of human control while in flight or travel along preprogrammed trajectories. Many, however, possess some degree of autonomy, for example in selecting targets to attack. As the war has progressed, weapons designers in Ukraine, Russia, the US, and elsewhere have made continuous improvements to these devices, steadily increasing their use of AI in seeking, identifying, and choosing targets. For many military analysts, these developments carry the most significant long-term implications of the war in Ukraine. Increasingly capable and “intelligent” drones, it is believed, will transform the battlefield, replacing human-crewed ships, planes, and tanks in a wide variety of combat operations.

“This war is a war of drones, they are the super weapon here,” Anton Gerashchenko, an adviser to Ukraine’s minister of internal affairs, told Newsweek in February. “We will win faster and with fewer losses if we have tens of thousands, hundreds of thousands of reconnaissance and combat drones.”

With all this in mind, the US Department of Defense is pouring billions of dollars into the development and procurement of advanced autonomous and semiautonomous weapons systems. In its budget submission for fiscal year (FY) 2024, for example, the Pentagon requested $548 million for procurement of the MQ-9 Reaper surveillance/strike UAV, $824 million for the MQ-4C Triton reconnaissance UAV, and $969 million for the MQ-25 Stingray carrier-based UAV. Additional millions were sought for the development of RCVs and USVs, and a staggering $1.8 billion was requested for research on the military use of artificial intelligence.

Addressing the Risks

These US initiatives, and similar undertakings by China, Russia, and other major powers, have generated considerable alarm among diplomats, computer scientists, human rights activists, and others who fear that the deployment of autonomous weapons systems will result in unintended human disasters. Advanced AI systems like ChatGPT and Bard have been shown to malfunction in egregious ways, most notably in the production of fabricated statements and images dubbed “hallucinations” by experts. Many top computer scientists fear that the unregulated release of these systems into the wider economy could result in catastrophic failures, with great harm to the human population. Their use in weapons systems has aroused even greater concern, prompting some analysts to warn of uncontrolled robot attacks on human populations and even the unintended initiation of nuclear war.

“The malicious use of AI systems for terrorist, criminal or state purposes could cause horrific levels of death and destruction, widespread trauma, and deep psychological damage on an unimaginable scale,” UN Secretary General António Guterres told a special session of the Security Council devoted to the issue on July 18. “The unforeseen consequences of some AI-enabled systems could create security risks by accident…. And the interaction between AI and nuclear weapons, biotechnology, neurotechnology, and robotics is deeply alarming.”

Opponents of autonomous weapons systems worry, in particular, that such devices, once deployed, will be empowered to kill human beings while lacking the capacity to abide by the provisions of international humanitarian law—which require, among other things, that parties to war be able to distinguish between combatants and civilians on the battlefield, and avoid harm to the latter as much as possible.

Indeed, Google’s experimentation with its RT-2 prototype provides ample reason for concern. While it has surprised observers with its ability to interpret human instructions—for example, by selecting a toy dinosaur when asked to “pick up the extinct animal”—it has also repeatedly made incorrect decisions, such as misidentifying a banana laid out before it. Google engineers insist that, with time and practice, the AI software governing RT-2 will become more adept at identifying objects, but many scientists worry that mistakes will continue to occur.

On the battlefield, software errors of this sort could result in unintended human slaughter. Insisting that robotic weapons can never be made intelligent enough to distinguish between combatants and civilians in the heat of battle—especially in urban insurgencies, where combatants are often dispersed among civilians—many governments and nongovernmental organizations (NGOs) have called for a legally binding ban on such devices. The International Committee of the Red Cross (ICRC), for example, recommends “that states adopt new, international legally binding rules to prohibit unpredictable autonomous weapons and those designed or used to apply force against persons, and to place strict restrictions on all others.”

The ICRC and other NGOs, such as the Campaign to Stop Killer Robots, have been working with a growing contingent of concerned governments to adopt such a ban under the aegis of the Convention on Certain Conventional Weapons (CCW), a 1980 UN treaty aimed at preventing the use of weapons that “may be deemed to be excessively injurious or to have indiscriminate effects.” (A total of 126 nations are parties to the CCW, including China, Russia, and the US.) Under the treaty, these states parties can adopt additional “protocols” banning or restricting particular weapons. Protocols prohibiting blinding lasers and restricting incendiary weapons have already been adopted in this fashion, and now a coalition of states and NGOs is seeking adoption of a ban on lethal autonomous weapons.

The number of states parties advocating such a ban has been growing rapidly in recent years. On February 14, representatives of 33 states from Latin America and the Caribbean meeting in Belén, Costa Rica, issued a joint communiqué calling for “the urgent negotiation of an international legally binding instrument, with prohibitions and regulations with regard to autonomy in weapons systems, in order to ensure compliance with International Law.”

However, the US and several other states with advanced autonomous weapons programs oppose the adoption of a legally binding prohibition, and call instead for the continued development and deployment of such systems so long as they comply with various voluntary “principles.”

The US position was spelled out by the Department of State in a “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy,” released on February 16 at a US-backed gathering in The Hague. “Military use of AI can and should be ethical, responsible, and enhance international security,” the declaration states. “States should take appropriate measures to ensure the responsible development, deployment, and use of their military AI capabilities, including those enabling autonomous systems.”

Many of the principles espoused in the February 16 document, such as its call upon states to “ensure that deliberate steps are taken to minimize unintended bias in military AI capabilities” and to “maintain human control and involvement for all actions critical to informing and executing sovereign decisions concerning nuclear weapons employment,” are worthy of widespread emulation. However, no compliance or verification measures are incorporated into the US declaration, so any government can profess adherence to its principles without actually abiding by them.

In this sense, the declaration on military use of AI is akin to the promises made in July to President Biden by top AI corporate officials, including Google President Kent Walker, to abide by voluntary controls on the development of advanced AI software. While admirable in theory, such voluntary commitments offer no assurance that these measures will be carried out in full or that miscreants will be held to account. This is as true of giant tech companies like Google, which are notoriously secretive about their AI research efforts, as it is of the US Department of Defense. Indeed, there is no way for outsiders to determine whether the UAVs, USVs, and RCVs identified in the FY 2024 budget are being developed in accordance with the Pentagon’s own principles on ethical AI use, upon which the State Department declaration is based.

Will RT-2 or its successors ever be garbed in camouflage and equipped with an assault rifle? At this point, the US military and those of its major rivals appear to be focusing their research and procurement efforts on autonomous tanks, planes, and ships—not individual combat soldiers. But the appeal of substituting robotic warriors for (more costly and vulnerable) human ones is bound to grow as the speed and lethality of major combat intensify. A similar calculation, one suspects, will arise among the top leadership of urban police forces around the world. It is essential, therefore, that the voluntary guardrails on AI development proposed by US tech companies and the Department of State be made legally binding, and that enforcement measures be devised to ensure compliance with them.
