There are a few recent reports that are worth considering in our hackable brave new world. The first is from Georgetown Law’s Center on Privacy & Technology, titled “The Perpetual Line-Up: Unregulated Police Face Recognition in America.” It documents the growth of immense law-enforcement data banks—accumulated by the FBI and local, state, and federal police agencies—housing digital images of more than 117 million American adults. These pictures are drawn from mug shots, driver’s licenses, passport photos, and the Internet. While facial-recognition technology is a genuinely useful tool in solving crime, the report highlights potential infringements on privacy, civil rights, and civil liberties, as well as a lack of transparency and public accountability. Indeed, unlike voice recordings, the collection of which is governed by the Wiretap Act, the gathering of visual images is pretty much unregulated.
This lack of regulation—and public ignorance of such systems’ existence—means that police departments can use facial recognition to identify and track law-abiding citizens as well as criminal suspects. Many departments run “continuous, real-time scans of people walking by a surveillance camera”—without warrant, reasonable suspicion, or any other limitation. The report found that of 52 agencies polled, only one prohibits officers from “using face recognition to track individuals engaging in political, religious, or other protected free speech.” In addition, facial-recognition technologies are manufactured by private companies using proprietary algorithms generally classified as intellectual property or trade secrets. Hence, few measures exist for ensuring accuracy through public oversight, regular maintenance, or published operating standards.
Moreover, the potential for error—particularly racially based error—seems built into the machine. The Seattle Police Department even claims that its system “does not see race.” The Pennsylvania Justice Network’s system, on the other hand, comes with a manual that lists only these user options: “Generic Male, Generic Female, Asian Male, Asian Female, Caucasian Male, Caucasian Female or Middle Eastern Male.” Another study suggests that accuracy may drop significantly on darker-skinned faces because of badly calibrated color contrast.
If that is not concern enough, consider the hidden but quite lucrative market in mining cell-phone customers’ data: It was recently revealed, for example, that AT&T has been secretly selling information like call time, duration, and location to state and local police departments since at least 2008. It doesn’t take much to imagine how other data from one’s credit cards, Siri, Facebook profile, reading habits, political preferences, entertainment choices, and residential-security cameras might be compiled to create profiles that define citizens as effectively as a new-age caste system.
If even that kind of surreptitious tracking does not trouble many Americans, it may take on a more ominous cast when understood as part of a broader global phenomenon. The Chinese government, for example, has been building a comprehensive data bank that would rank all “natural persons, legal persons and other organizations” by adding up “social credits” accumulated in economic and social activities. China hopes to have a population-wide system of measurement up and running by the year 2020, giving “complete rein to mechanisms to encourage keeping trust and punish breaking trust.”