That corporations are people in the eyes of the law has been a great subject of liberal indignation. This legal framework appears to have sanctified corporate money as speech and to be indicative of an era when democracy has been swallowed whole by the “free market” and those who control it. But the foundation for corporate personhood was laid in Dartmouth College v. Woodward (1819), well before the Civil War, for practical reasons: because a corporation is a collective of people doing business together, its constituent persons should not be deprived of their constitutional rights when they act as such. This decision facilitated the stabilization and the expansion of the early American economy; it allowed corporations to sue and to be sued, provided a unitary entity for taxation and regulation, and made possible intricate transactions that would otherwise have involved a multitude of shareholders. Only by assigning legal personhood to corporations could judges make reasonable decisions about contracts.
Judges work by analogy all the time. When they seek to answer a new question using the body of existing law, they often search that corpus for analogies that would allow a similar treatment to be extended to the case at hand. For instance, when trying to figure out how Google’s self-driving car should be regulated for insurance and liability purposes, should it be treated like a pet or a child or something else? It is, after all, partially autonomous and partially under the control of its owner. The most baffling regulatory frontier now confronting legislators and judges is technology. Here, the courts are making up the law as they go along, grasping for pre–Web 2.0 analogies that will allow them to adjudicate sophisticated new threats to privacy. So I’d like to propose one that hasn’t been tried, one that could revolutionize the way companies like Google and Facebook approach privacy: treat programs as people too.
Imagine the following situation: your credit card provider uses a risk assessment program that monitors your financial activity. Using the information it gathers, and relying on a secret, proprietary algorithm, it notices that your purchases follow a “high-risk pattern.” The assessment program, acting on its own, cuts off the use of your credit card. It is courteous enough to e-mail you a warning. Thereafter, you find that actions that were possible yesterday, like making electronic purchases, no longer are. No humans at the credit card company were involved in this decision; its representative program acted autonomously on the basis of pre-set risk thresholds.
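The scenario above can be sketched in a few lines of code. Everything here, the scoring rule, the threshold, the account fields, is a hypothetical stand-in for whatever a real issuer’s proprietary algorithm actually does; the point is only that once the threshold is set, no human is in the loop:

```python
# Hypothetical sketch of an autonomous risk-assessment agent.
# The scoring heuristic and threshold are invented for illustration;
# real issuers' algorithms are proprietary and far more complex.

RISK_THRESHOLD = 0.8  # pre-set by the company, not reviewed per-case by any human


def risk_score(purchases):
    """Toy heuristic: flag a burst of large online purchases."""
    flagged = [p for p in purchases if p["amount"] > 500 and p["online"]]
    if not purchases:
        return 0.0
    return min(1.0, len(flagged) / len(purchases) + (0.5 if flagged else 0.0))


def monitor(account, purchases):
    """The program acts on its own once the score crosses the threshold."""
    score = risk_score(purchases)
    if score >= RISK_THRESHOLD:
        account["card_active"] = False  # autonomous action, no human sign-off
        return f"Warning e-mailed to {account['email']}: card suspended (score {score:.2f})"
    return "No action taken"
```

Feeding it one large online purchase suspends the card and returns the warning message; a small in-person purchase leaves the account untouched.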
We interact with the world through programs all the time. Very often, the programs we use interact with a network of unseen programs so that their ends, and ours, may be realized. We are also acted on by programs. These entities are not just agents in the sense of being able to take actions; they are also agents in the representative sense, taking autonomous actions on our behalf.
Now consider the following: in 1995, following the detection of hacker attacks routed through Harvard University’s e-mail system, federal prosecutors—looking for network packets meeting specific criteria—obtained a wiretap order to inspect all e-mails on Harvard’s intranet. In defending this reading of millions of personal messages, the US attorney claimed there was no violation of Harvard users’ privacy, because the scanning had been carried out by programs. A few years later, responding to the question of whether it was reading users’ e-mail in Gmail, Google argued there was no breach of privacy because humans were not reading its users’ mail. And in 2013, the National Security Agency reassured US citizens that their privacy had not been violated, because programs—mere instantiations of abstract algorithms—had carried out its wholesale snooping and eavesdropping.
There is a pattern visible in these defenses of invasive surveillance and monitoring: if you want to violate Internet users’ privacy, get programs—artificial, not human, agents—to do your dirty work, and then wash your hands of them by using the Google defense: there is no invasion of privacy because no humans accessed your personal details. Thus, the raw, untutored intuition that “programs don’t know what your e-mail is about” allows corporate and governmental actors to put up an automation screen between themselves and culpability for the privacy violations in which they routinely engage.