Sometimes it feels like we live in an era when information is finally becoming “free”—unlimited media access, the “quantified self” of twenty-four-hour wellness tracking, endless dating possibilities. But there’s nothing inherently progressive about Big Data, and all that data collection can actually bolster inequality and undermine personal privacy. A new report reveals that when Big Data creeps into our work and financial lives, civil rights may get severely squeezed.
The report, “Civil Rights, Big Data, and Our Algorithmic Future” by the think tank Robinson + Yu, confronts the challenges of an information landscape where knowledge can be oppressively overwhelming. While it’s true that Big Data—the amassing of huge amounts of statistical information on social and economic trends and human behavior—can be empowering for some, it’s often wielded as a tool of control. Even as we busily track our daily carb intake, every data packet we submit and every image we toss into the cloud is hoarded and parsed by powerful institutions that manage our everyday lives.
Data collection has accelerated due to technological advancements, the declining cost of data mining and storage, and the intensified surveillance climate of post-9/11 America. But while the Algorithmic Future seems inevitable and exciting, the authors say, it creates gray areas in labor rights, privacy, and ethics.
In financial services, the report cites trends of “redlining” in home loans, which steers people in an “undesirable” consumer group—often blacks and Latinos—toward poorer neighborhoods and worse loan terms. During the subprime lending crisis, numerous scandals emerged in communities where masses of people were unfairly saddled with heavy debt and bad credit.
Data crunching can also aggravate employer bias. Hiring algorithms are often seen as an “objective,” meritocratic assessment—free of irrational emotion or biases. But applied on a mass scale, Big Data can reinforce and mask prejudice.
On an individual level, the report warns that because “[d]igital indicators of race, religion, or sexual preference can easily be observed or inferred online,” the mining of social media and Google-search data can reinforce systemic discrimination. The result may be a perpetuation of the status quo, with disproportionately white, upper-class, elite-educated, and culturally homogeneous hires filling the company ranks. That demographic may be profitable, but it comes at the expense of social equity.
When the HR department can quickly parse thousands of résumés and rule out countless candidates without a second glance, the method is “objective” only if you assume the algorithm will select candidates according to standards that are reasonable and fair. But it’s more likely that sloppy scans end up excluding people based on superficial criteria—where people live, an unexplained gap in their résumé, or perhaps a foreign-sounding last name that suggests a non-white immigrant background. Big Data manipulation allows these subtle individual slights to be expanded to new orders of magnitude with monstrous efficiency. And the worst possibility is that data mining “works” perfectly—in a bad way, by reproducing and reinforcing social inequalities.
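To see how such a screen behaves in practice, consider a minimal, hypothetical sketch in Python. Nothing here comes from the report itself: the ZIP codes, penalties, and cutoff are invented for illustration. But the pattern, penalizing candidates for where they live or for an unexplained résumé gap, is exactly the one described above.

```python
# Hypothetical résumé screen built on superficial proxies.
# All values below are invented for illustration.

# ZIP codes the (hypothetical) model has learned to penalize. In practice
# these often correlate with race and class, so an "objective" filter can
# quietly reproduce redlining-style discrimination.
PENALIZED_ZIPS = {"60621", "10474"}

def score_resume(resume: dict) -> int:
    """Score a candidate on superficial criteria only."""
    score = 100
    if resume["zip_code"] in PENALIZED_ZIPS:
        score -= 40  # penalized for where the candidate lives
    if resume["employment_gap_months"] > 6:
        score -= 30  # penalized for an unexplained gap
    return score

def screen(resumes: list, cutoff: int = 75) -> list:
    """Keep only candidates at or above the cutoff -- no second glance."""
    return [r["name"] for r in resumes if score_resume(r) >= cutoff]

candidates = [
    {"name": "A", "zip_code": "60614", "employment_gap_months": 0},
    {"name": "B", "zip_code": "60621", "employment_gap_months": 0},
    {"name": "C", "zip_code": "60614", "employment_gap_months": 12},
]
print(screen(candidates))  # → ['A']: B and C are cut purely on proxies
```

No single rule here mentions race or class, yet the filter systematically excludes candidates from certain neighborhoods and work histories, which is precisely how bias hides behind a veneer of objectivity.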