A Brief History of Drones

With the invention of drones, we crossed into a new frontier: killing that’s risk-free, remote, and detached from human cues.

A US Reaper drone flies over the moon above Kandahar Air Field, southern Afghanistan, January 31, 2010. (AP Photo/Kirsty Wigglesworth, File)

Portions of this article are adapted from The Violence All Around, forthcoming from Harvard University Press.

It was ten years ago this month, on February 4, 2002, that the CIA first used an unmanned Predator drone in a targeted killing. The strike was in Paktia province in Afghanistan, near the city of Khost. The intended target was Osama bin Laden, or at least someone in the CIA thought so. Donald Rumsfeld later explained, using the passive voice of government: “A decision was made to fire the Hellfire missile. It was fired.” The incident occurred during a brief period when the military, which assisted the CIA’s drone program by providing active-duty personnel as operators, still acknowledged the program’s existence. Within days of the strike, journalists on the ground were collecting accounts from local Afghans that the dead men were civilians gathering scrap metal. The Pentagon media pool began asking questions, and so the long decade of the drone began.

The CIA had been flying unarmed drones over Afghanistan since 2000. It began to fly armed drones after the September 11 attacks. Some were used during the air war against the Taliban in late 2001. But by February 2002 the CIA hadn’t yet used a drone for a strike outside military support. The February 2002 attack was a pure CIA kill operation, undertaken separately from any ongoing military operation. The drone operators were reported to have come across three people at a former mujahedeen base called Zhawar Kili—officials would never claim they were armed—including a “tall man” toward whom the other men were “acting with reverence.” (On one previous occasion, a year before the September 11 attacks, CIA observers thought they’d seen bin Laden: a tall man in long robes near Tarnak Farm, bin Laden’s erstwhile home near Kandahar. That sighting by an unarmed drone led to the first arguments between the White House and the CIA about arming drones with missiles, a debate that simmered until it was ended by the September 11 attacks.)

After the February 2002 strike, military officials quickly acknowledged that the “tall man” was not bin Laden. But they insisted the targets were “legitimate,” although they struggled to explain why, using vague and even coy language to cover up what appeared to be uncertainty. Pentagon spokeswoman Victoria Clarke said, “We’re convinced that it was an appropriate target.” But she added, “We do not know yet exactly who it was.” Gen. Tommy Franks told ABC News that he expected the identities of the three to prove “interesting.”

Pentagon spokesman John Stufflebeem said the government was in the “comfort zone” of determining that the targets were “not innocent,” noting there were “no initial indications that these were innocent locals,” a curious phrase reflecting a presumption of guilt. “Indicators were there that there was something untoward that we needed to make go away…. Initial indications would seem to say that these are not peasant people up there farming.” Rumsfeld later chimed in, offering his signature pseudo-philosophical analysis of the allegations that the dead were civilians: “We’ll just have to find out. There’s not much more anyone could add, except that there’s that one version, and there’s the other version.”

The government’s evasion was helped by the fact that Zhawar Kili, the site of the strike, was an infamous mujahedeen complex built with CIA and Saudi support by Jalaluddin Haqqani, the mujahedeen commander allied with the Taliban, then and now. In the 1980s CIA officers and journalists used to visit the base. It was the site of two major battles against Soviet forces in the mid-’80s. President Bill Clinton ordered a Tomahawk cruise missile strike on the area in 1998 after the bombings of the two US embassies in Africa, and the US military pummeled it with airstrikes beginning in late 2001. For a time the military thought that bin Laden and his Al Qaeda forces might have fled to Zhawar Kili after the battle of Tora Bora (a puzzling hypothesis, because the area had already been hit by withering fire and was more exposed than Tora Bora). In January 2002 the military sent several search and demolition units there to gather leftover material with potential intelligence value and to blow up the caves.

By February 2002 the place had been deserted by militants for months. Several journalists headed to Zhawar Kili after the strike and spoke with local leaders and the families of the dead, who confirmed the identities of the men killed: Daraz Khan, the tall man, about 31, from the village of Lalazha, and two others, Jehangir Khan, about 28, and Mir Ahmed, about 30, from the village of Patalan. The New York Times’s John Burns was among those who spoke with the families, saw the men’s graves and confirmed their extreme poverty. The men had climbed to the mountainous area to forage for leftover metal from the US airstrikes, bits of shrapnel and bomb tail fins—scavengers could fetch about 50 cents per camel load. Although Daraz Khan was admittedly tall by Afghan standards—5 feet 11 inches—he was six inches shorter than bin Laden.

Reading about the strike later, I felt a slight connection with Daraz Khan. I am also 5 feet 11, and at around the same period I spent time foraging for bomb fragments in remote locations in Afghanistan. As a researcher for Human Rights Watch, working on an assessment of the US air war in the winter and spring of 2002, I had visited locations like Zhawar Kili. With colleagues I had climbed into craters, poked at the twisted tail fins of bombs, and interviewed witnesses and families of the dead. And I was the tallest among my colleagues. Perhaps I could have been mistaken for bin Laden too.

* * *

Air warfare has been with us for a hundred years, since the Italian invasion of Libya in 1911, and the development of drones was in the works from the start. The reason is simple: even with all the advantages offered by air power, humans still needed to strap themselves into the devices and fly them. There were limits to the risks that could be taken. Whatever an airplane was used for, it ultimately had to return to base with its pilot. Not surprisingly, from the start of the development of airplanes for use in war, engineers labored to circumvent this limitation.

During World War I, the Navy hired Elmer Ambrose Sperry, the inventor of the gyroscope, to develop a fleet of “air torpedoes,” unmanned Curtiss biplanes designed to be launched by catapult and fly over enemy positions. A secret program was run out of a small airfield in central Long Island, New York. A New York Times report from 1926, when the secret was revealed, said that the planes were “automatically guided with a high degree of precision” and after a predetermined distance were supposed to suddenly turn and fly vertically downward, carrying enough TNT to “blow a small town inside out.” The program ran out of steam when the war ended in 1918. In reality, according to a Navy history, the planes rarely worked: they typically crashed after takeoff or flew off over the ocean, never to be seen again.

In World War II a different approach was taken: the Navy launched a new program, called Operation Anvil, to target deep German bunkers using refitted B-24 bombers filled to double capacity with explosives and guided by remote control to crash into selected targets in Germany and Nazi-controlled France. Remote control technology was still limited—involving crude radio-controlled devices linked to motors—so actual pilots were used for takeoff: they were supposed to guide the plane to cruising altitude and then parachute to safety over England, after which a “mothership” would guide the plane to its target. In practice, the program was a disaster. Many planes crashed, or worse. John F. Kennedy’s older brother, Joseph, was one of the program’s first pilots: he was killed in August 1944 when a drone-to-be he was piloting exploded prematurely over Suffolk, England.

And here lies a small irony of history. The target of that particular Kennedy mission was a Nazi site where scientists were working on technology in the same vein, the remote delivery of explosives: the world’s first military rocket program. Indeed, German engineers had switched to rocketry, given the difficulties of building full-scale pilotless airplanes. They worked extensively on rockets during the war, and after the war the US and Soviet governments carried on their work. (In the late 1940s and ’50s, hundreds of former German rocket engineers and other Nazi scientists were brought to the United States and granted citizenship in exchange for their help on rocket engineering efforts—some despite clear ties to Holocaust-related atrocities. Stanley Kubrick’s character Dr. Strangelove was a caricature of an expatriate Nazi scientist.)

The development of drones stagnated for decades because there was little need for them, thanks to advances in rocketry. By the late 1950s, the US military had developed, in addition to many rockets, a slew of slower but more guidable “cruise missiles”—which, in their own way, were like little airplanes. Cruise missiles maintain airplane-like “lift” on stubby little wings, unlike ballistic missiles, which move through a long arc of flight comprising a launch and rise followed by a guided fall.

Cruise missiles were, in a sense, proto-drones, miniature versions of what the military had attempted as far back as 1917. They could be dispatched and guided in flight; some had cameras; some could even change targets midflight. But cruise missiles could not linger over a battlefield in the manner of a holding pattern, nor could they return to base. And their weapons delivery was blunt and inflexible: the weapon was the missile itself, with its single warhead. So in the 1960s and ’70s, Air Force engineers continued to tinker with unmanned aircraft—in particular for use in surveillance flights, which don’t involve complex flight maneuvers and require less sophisticated piloting. Only with major improvements in computing and electronic control systems in the 1980s and ’90s were modern-day drones made possible. And it wasn’t until the late ’90s that the Air Force began working on the technical aspects of arming unmanned aircraft with missiles.

The CIA, which had been using drones for surveillance, became involved with the military effort to arm them after September 11. Although the agency had been authorized to support military operations even before the attacks, the legal parameters governing its involvement in military or paramilitary operations were murky, then as now. There were questions about who was allowed to “pull the trigger” and in what settings. Outright assassination had been banned by presidential executive order in the wake of the CIA scandals of the Nixon era, and the laws of armed conflict contained complicated provisions on the circumstances in which civilian personnel—CIA officers not in uniform—could use lethal force.

So government attorneys worried back in 2001. Ten years later, the CIA works side by side with the military, launching kinetic strikes from Pakistan to Somalia. Few concerns are raised anymore, except by a handful of academics and activists who worry that the CIA is less accountable than the military for its targeting (and, as we saw at Zhawar Kili, for its mistakes). Still, many people seem leery of drones in the abstract—whether they are used in armed conflict or in targeted killings.

* * *

What, in the final analysis, is troubling about the CIA’s use of drones? Drones are only one weapon system among many, and the CIA’s role, while disturbing, is not the primary cause for alarm. Certainly the legal identity of drone operators, CIA or military, matters little to the victims of a Hellfire strike. So what is it about the drone, really, that draws the attention of victims, insurgent propagandists, lawyers and journalists, more than other forms of kinetic violent force? Why do drones interest us, fascinate us or disturb us?

Perhaps one clue lies in the language. The weapons’ names suggest ruthless and inhumane characteristics. The first drone aircraft deployed by the CIA and the Air Force after 2001 was the Predator, a rather coarse name even for a weapons system, suggesting that the enemy was not human but merely prey, and that military operations were not combat subject to the laws of war but a hunt. (Some of the computer software used by the military and the CIA to calculate expected civilian casualties during airstrikes is known in government circles as Bug Splat.) The Predator’s manufacturer, General Atomics, later developed the larger Reaper, a moniker implying that the United States was fate itself, cutting down enemies who were destined to die. That the drones’ payloads were called Hellfire missiles, invoking the punishment of the afterlife, added to the sense of righteousness.

But the real issue is the context in which drones kill. The curious characteristic of drones—and the names reinforce this—is that they are used primarily to target individual humans, not places or military forces as such. Yet they simultaneously obscure the human role in perpetrating the violence. Unlike a missile strike, in which a physical or geographic target is chosen beforehand, drones linger, looking precisely for a target—a human target. And yet, at the same time, the perpetrator of the violence is not physically present. Observers are drawn to thinking that it is the Predator, or its Hellfire missiles, that kills Anwar al-Awlaki, not the CIA officers who order the weapons’ engagement. On the one hand, we have the most intimate form of violence—the targeted killing of a specific person, which in some contexts is called assassination—and on the other, the least intimate of weapons.

This distance between the targets and the CIA officers at Langley is the defining characteristic of drones. They are the zenith of a technological quest that runs back to the invention of slings and arrows thousands of years ago, the effort of the earliest perpetrators of violence to put distance between themselves and their victims. That process, which brought catapults and later artillery, reached its first peak with the development of intercontinental nuclear missiles; but those are weapons of limited tactical utility, and they have never been used. Drones allow all the alienation of long-range missiles but with far more flexibility and capacity for everyday use. The net result is everyday violence with all the distance and alienation of ICBMs. This is disturbing, perhaps, because alienation itself is disturbing.

The work of animal behaviorists like Konrad Lorenz sheds some light on why. Lorenz—a onetime member of the Nazi party who later renounced his politics and won the Nobel Prize in the 1970s—spent much of his life studying violence in animals. His book On Aggression posited a theory whereby many animals, male and female, have a natural “drive” to be aggressive against opponents, including members of their own species.

The aggression drive, Lorenz posited, was often limited within species by a “submission” phenomenon, whereby potential victims turn off the aggressive drive in others by displaying signs of submission. In this way, most animal violence is checked before it occurs. Lorenz suggested that in humans, the submission safety valve was blunted by the technological creation of weapons, which emotionally “distanced” the killer from his victim. When a spear or sling is used to kill, victims lose the opportunity to engage in submission and trigger the aggression “off switch.” The drone represents an extreme extension of that process. Drones crossed into a new frontier in military affairs: an area of entirely risk-free, remote and even potentially automated killing detached from human behavioral cues.

Military research seems to back this up. Lt. Col. Dave Grossman, a psychologist and former professor at West Point, has written extensively on the natural human aversion to killing. His 1995 book On Killing collects accounts from his research and from military history demonstrating soldiers’ revulsion at killing—in particular, killing at close range. He tells the story of a Green Beret in Vietnam describing the killing of a young Vietnamese soldier: “I just opened up, fired the whole twenty rounds right at the kid, and he just laid there. I dropped my weapon and cried.” The most telling accounts concern the “close” kills of hand-to-hand combat. Grossman quotes a Special Forces sergeant from the Vietnam War describing one such kill: “‘When you get up close and personal,’ he drawled with a cud of chewing tobacco in his cheek, ‘where you can hear ’em scream and see ’em die,’ and here he spit tobacco for emphasis, ‘it’s a bitch.’”

Obviously the primary advantage of the drone is that it insulates its operators from risk. Yet one can’t help wondering whether aversion to the unpleasantness of violence is another factor making drones popular with the military and CIA. Drones make the nasty business of killing a little easier. Or do they?

There are reports of military drone operators suffering from post-traumatic stress disorder, and studies showing that those who conduct strikes or watch videos of strikes suffer from “operational stress,” which officials believe is the result of operators’ long hours and extended viewing of video feeds showing the results of military operations after they have occurred—i.e., dead bodies. Still, these reports pale in comparison with those of PTSD among combat veterans. And there is no public information about stress among those ordering the strikes—the CIA strike operators or the decision-makers at Langley.

A little-noticed 2011 British Defense Ministry study of unmanned drones discusses some of these points, ranging from concerns about drone operators’ potential alienation from violence to the propaganda opportunities drones offer enemies (the study notes that their use “enables the insurgent to cast himself in the role of underdog and the West as a cowardly bully—that is unwilling to risk his own troops, but is happy to kill remotely”). The paper also discusses concerns raised by military analyst Peter Singer, who has written on “robot warfare” and the risk that drones might acquire the capacity to engage enemies autonomously. The report envisions a scenario in which a drone fires on a target “based solely on its own sensors, or shared information, and without recourse to higher, human authority.”

The authors note that in warfare, the risks of the battlefield and the horror that comes from carrying out violence can act as controls on brutality. Citing the oft-quoted adage of Gen. Robert E. Lee, reportedly uttered after the battle of Fredericksburg, “It is well that war is so terrible, otherwise we would grow too fond of it,” the authors then ask:

If we remove the risk of loss from the decision-makers’ calculations when considering crisis management options, do we make the use of armed force more attractive? Will decision-makers resort to war as a policy option far sooner than previously?

The issue is not that armed drones are more terrible or deadly than other weapons systems. On the contrary, the violence of drones today is more selective than many forms of military violence, and human rights groups recognize that drones, in comparison with less precise weapons, have the potential to minimize civilian casualties during legitimate military strikes.

Nor is the issue the remote delivery of weapons: alienation from the effects of violence reached a high-water mark in World War I. What makes drones disturbing is an unusual combination of characteristics: the distance between killer and killed, the asymmetry, the prospect of automation and, most of all, the minimization of pilot risk and political risk. It is the merging of these characteristics that draws the attention of journalists, military analysts, human rights researchers and Al Qaeda propagandists, suggesting something disturbing about what human violence may become. The unique technology allows the mundane and regular violence of military force to be separated further from human emotion. Drones foreshadow the idea that brutality could become detached from humanity—and yield violence that is, as it were, unconscious.

In this sense, drones foretell a future that is very dark indeed.
