Is ‘Big Data’ Actually Reinforcing Social Inequalities?

An increasingly technologized world makes life easier… for some people, anyway.

Sometimes it feels like we live in an era when information is finally becoming “free”—unlimited media access, the “quantified self” of twenty-four-hour wellness tracking, endless dating possibilities. But there’s nothing inherently progressive about Big Data, and all that data collection can actually bolster inequality and undermine personal privacy. A new report reveals that when Big Data creeps into our work and financial lives, civil rights may get severely squeezed.

The report, “Civil Rights, Big Data, and Our Algorithmic Future” by the think tank Robinson + Yu, confronts the challenges of an information landscape where knowledge can be oppressively overwhelming. While it’s true that Big Data—the amassing of huge amounts of statistical information on social and economic trends and human behavior—can be empowering for some, it’s often wielded as a tool of control. While we’re busy tracking our daily carb intake, every data packet we submit, each image we toss into the cloud, is hoarded and parsed by powerful institutions that manage our everyday lives.

Data collection has accelerated due to technological advancements, the declining cost of data mining and storage, and the intensified surveillance climate of post-9/11 America. But while the Algorithmic Future seems inevitable and exciting, say the authors, it creates gray areas in labor rights, privacy and ethics.

In financial services, the report cites trends of “redlining” in home loans, which steers people of an “undesirable” consumer group, often blacks and Latinos, toward poorer neighborhoods and worse loan terms. In the subprime lending crisis, numerous scandals emerged in communities where masses of people were unfairly saddled with heavy debt and bad credit.

Data crunching can also aggravate employer bias. Hiring algorithms are often seen as an “objective,” meritocratic assessment—free of irrational emotion or biases. But applied on a mass scale, Big Data can reinforce and mask prejudice.

On an individual level, the report warns that because “[d]igital indicators of race, religion, or sexual preference can easily be observed or inferred online,” the mining of social media and Google-search data can reinforce systemic discrimination. The result may be a perpetuation of the status quo, with disproportionately white, upper-class, elite-educated and culturally homogeneous hires filling the company ranks. That demographic may be profitable, but the profit comes at the expense of social equity.

When the HR department can quickly parse thousands of résumés and rule out countless candidates without a second glance, the method is “objective” only if you assume the algorithm will select candidates according to standards that are reasonable and fair. But it’s more likely that sloppy scans end up excluding people based on superficial criteria—where people live, an unexplained gap in their résumé, or perhaps a foreign-sounding last name that suggests a non-white immigrant background. Big Data manipulation allows these subtle individual slights to be expanded to new orders of magnitude with monstrous efficiency. And the worst possibility is that data mining “works” perfectly—in a bad way, by reproducing and reinforcing social inequalities.
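
To make that mechanism concrete, here is a minimal, purely hypothetical sketch of such an automated first pass. The rules, ZIP codes, and applicants are invented for illustration and are not drawn from the Robinson + Yu report; they simply show how “neutral” filters can act as proxies for race, class, and immigration status.

```python
# Hypothetical sketch only: a naive resume screen built on "neutral" rules
# that quietly act as proxies for protected traits. All values are invented.
from dataclasses import dataclass

@dataclass
class Applicant:
    name: str
    zip_code: str
    months_since_last_job: int

# ZIP codes flagged as "high turnover risk"; in practice such lists
# often track segregated, lower-income neighborhoods.
FLAGGED_ZIP_CODES = {"60621", "48205"}  # illustrative values only

def passes_screen(applicant: Applicant) -> bool:
    """Return True if the applicant survives the automated first pass."""
    if applicant.zip_code in FLAGGED_ZIP_CODES:
        return False  # penalizes where someone lives
    if applicant.months_since_last_job > 6:
        return False  # penalizes an unexplained resume gap
    return True

applicants = [
    Applicant("A. Rivera", "60621", 2),
    Applicant("B. Smith", "10003", 1),
]
shortlist = [a.name for a in applicants if passes_screen(a)]
print(shortlist)  # only "B. Smith" survives, for reasons unrelated to merit
```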

In the financial sector and the workplace, researcher David Robinson tells The Nation via e-mail, although a Big Data system may not operate with “the same kind of visceral, irrational hostility that humans sometime show toward disadvantaged groups…many of the statistics that can be fed into these new computer systems are themselves records of racism or other bias.” Since the algorithm reflects social patterns, “any time someone is the victim of old-fashioned human discrimination, that discrimination is likely to be reflected in some of the data points that these new algorithms measure. Culturally speaking, there is a real tendency to defer to decisions that come from computers—which means if we’re not careful, it is reasonable to expect that computers will sanitize biased inputs into neutral-seeming outputs.”
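
Robinson’s point can be illustrated with a toy example. In the sketch below, which is hypothetical and not from the report, a “neutral” score is fit to historical hiring decisions that were themselves discriminatory; the output looks objective, but it simply restates the bias in its inputs.

```python
# Hypothetical sketch only: a score "trained" on biased historical hiring
# decisions reproduces that bias. Data and labels are invented.
from collections import defaultdict

# Historical records: (neighborhood, was_hired). The hiring decisions
# themselves reflect past human discrimination.
history = [
    ("north_side", True), ("north_side", True), ("north_side", False),
    ("south_side", False), ("south_side", False), ("south_side", True),
]

# "Training": estimate the historical hire rate per neighborhood.
counts = defaultdict(lambda: [0, 0])  # neighborhood -> [hired, total]
for neighborhood, hired in history:
    counts[neighborhood][0] += int(hired)
    counts[neighborhood][1] += 1

def predicted_score(neighborhood: str) -> float:
    """A seemingly objective score that is just the old bias, restated."""
    hired, total = counts[neighborhood]
    return hired / total

print(round(predicted_score("north_side"), 2))  # 0.67
print(round(predicted_score("south_side"), 2))  # 0.33
```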

Big Data also allows law enforcement and businesses to perform invasive background checks. The federal employment screening system E-Verify, for example, designed as an immigration enforcement tool, vets workers’ employment eligibility by cross-checking them against Social Security Administration records. Civil libertarians fear this mega-screening device pushes companies toward employment apartheid.

Even the government’s own audits, the report points out, cast doubt on the system’s accuracy. According to a recent federally funded analysis, “legal permanent residents…were nearly five times more likely than citizens to be issued an inaccurate TNC [Temporary Non-Confirmation status], even though they were employment authorized.… That figure is even worse for other noncitizens, which were twenty-seven times more likely to receive an inaccurate TNC.” The false screening results yield very real impacts on workers:

Employers have restricted work, delayed training, reduced pay, and taken other unlawful actions against workers who receive TNCs. Because of the uncertainty caused by TNCs, the National Immigration Law Center suggests that E-Verify “encourages employers to hire U.S. citizens exclusively, a practice that usually constitutes a violation of antidiscrimination law.”

Yet in a telling intersection of techno-bias and political bias, many law-and-order state politicians and anti-immigrant groups champion E-Verify as a weapon to root out undocumented workers. Workers’ rights advocates counter that the program erodes all workers’ rights by encouraging wholesale bias, such as passing over Latino workers in favor of “safer” native-born white workers. Civil rights laws mandate non-discrimination, but immigration policing locks bosses into a system that rewards racial profiling.

Even if you make it past the HR gauntlet, Big Data follows you to work every day: workplace surveillance has exploded in corporate offices, enveloping cubicle-bound workers in granular performance monitoring designed to maximize productivity.

Maybe it’s time that our digital ethics were upgraded to match our technologized economy. As Neil Richards and Jonathan King of Washington University in St. Louis write in their recent treatise on information-age ethics, Big Data can easily be cast as an instrument of repression: “If we fail to balance the human values that we care about, like privacy, confidentiality, transparency, identity and free choice with the compelling uses of big data, our Big Data Society risks abandoning these values for the sake of innovation and expediency.”

Digital rights activists are calling for new efforts to safeguard privacy and ensure transparency. That includes both new regulations, like confidentiality controls for financial records and sensitive background data, and new programs for managing information that are conscious of the rights of workers and consumers. They should be included as owners and participants, not just “end users,” in the Big Data society.

The Robinson + Yu study is not a call to Luddism. Rather, it sounds an alarm on the perils haunting the cathedral of technocracy, which lures us to sacrifice personal sovereignty for the salvation of a world where life can be impeccably controlled and “optimized.”

Not only are the poor and marginalized often left behind by the tech vanguard; they’re exploited by it when data unfairly limits employment opportunities or reinforces social barriers. That’s not the world of choice we’ve been promised.
