
Minority Report

Michelle Chen

October 1, 2014

Sometimes it feels like we’re living in an era in which information has finally become “free”—unlimited media access, twenty-four-hour wellness tracking, endless dating possibilities. But there’s nothing inherently progressive about Big Data. A new report shows that when Big Data creeps into our workplaces and our financial lives, it may simply create new ways of reinforcing old racial and economic injustices.

The report, “Civil Rights, Big Data, and Our Algorithmic Future,” by the think tank Robinson + Yu, notes that technological advances, the declining cost of data storage, and the intensified surveillance climate of post-9/11 America have spurred massive data collection. This accumulation of private information by corporations and government has created troubling new issues in the areas of labor rights, privacy, and ethics.

Consider the influence of Big Data on hiring practices. Hiring algorithms are often seen as an “objective,” meritocratic assessment, free of irrational prejudice or bias. But the report warns that because “[d]igital indicators of race, religion, or sexual preference can easily be observed or inferred online,” the mining of social-media and Google-search data can reinforce systemic discrimination. The result may be a perpetuation of an unjust status quo: disproportionately white, upper-class, elite-educated, and culturally homogeneous. Sloppy résumé scans end up excluding people based on superficial criteria—where they live, for example, a metric bound to reflect already-existing housing discrimination. Big Data manipulation allows these subtle individual slights to be expanded to new orders of magnitude with monstrous efficiency.

Because these algorithms reflect existing social patterns, researcher David Robinson tells The Nation, “any time someone is the victim of old-fashioned human discrimination, that discrimination is likely to be reflected in some of the data points that these new algorithms measure. Culturally speaking, there is a real tendency to defer to decisions that come from computers—which means if we’re not careful, it is reasonable to expect that computers will sanitize biased inputs into neutral-seeming outputs.”
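To make that mechanism concrete, consider a minimal Python sketch of how a hiring model can launder bias through a proxy variable. Everything in it is hypothetical: the ZIP codes, the group labels, the hiring penalty, and the scoring rule are invented solely to illustrate the pattern Robinson describes, not drawn from the report.

    # Hypothetical sketch of "proxy discrimination": the model below
    # never sees an applicant's group, yet its scores diverge by group
    # because ZIP code stands in for it. All numbers are invented.

    import random

    random.seed(42)

    def make_applicant():
        """Residential segregation: group strongly predicts ZIP code."""
        group = random.choice(["A", "B"])
        if group == "A":
            zip_code = "10001" if random.random() < 0.9 else "10002"
        else:
            zip_code = "10002" if random.random() < 0.9 else "10001"
        skill = random.random()  # true qualification, independent of group
        return group, zip_code, skill

    # Historical data: biased human decisions. Group B applicants were
    # hired less often than group A applicants at the SAME skill level.
    history = []
    for _ in range(10_000):
        group, zip_code, skill = make_applicant()
        penalty = 0.3 if group == "B" else 0.0
        history.append((zip_code, random.random() < max(0.0, skill - penalty)))

    # A "neutral" model: score each applicant by the historical hire
    # rate of their ZIP code. Group is never an input, but it leaks in.
    zip_rate = {}
    for z in ("10001", "10002"):
        outcomes = [hired for zc, hired in history if zc == z]
        zip_rate[z] = sum(outcomes) / len(outcomes)

    # Audit: score a fresh applicant pool and compare averages by group.
    scores = {"A": [], "B": []}
    for _ in range(10_000):
        group, zip_code, _ = make_applicant()
        scores[group].append(zip_rate[zip_code])

    for g in ("A", "B"):
        mean = sum(scores[g]) / len(scores[g])
        print(f"group {g}: mean model score = {mean:.3f}")

Run as-is, the audit prints a markedly lower average score for group B, even though the model's only inputs are a ZIP code and past outcomes; the historical penalty survives intact behind a neutral-seeming feature.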

Michelle Chen is a contributing writer for The Nation.

