How Tech Workers Are Fighting Back Against Collusion With ICE and the Department of Defense

Microsoft and Google are just two of the largest companies that have been unmasked as contracting with harmful government agencies. Who will be next?

Last month, Gizmodo reported that Google had downgraded its unofficial motto, “Don’t be evil,” deleting several instances of the phrase from the code of conduct meant to guide its employees’ work. The tweak spoke to an uneasy ethical debate now echoing through the corridors of Silicon Valley as tech giants do big business with the military and law enforcement, handing the keys of digital innovation to the Pentagon and Homeland Security.

Just last week, Microsoft employees brought into sharp focus the overlap between Silicon Valley’s leading lights and the Trump administration’s cruelest abuses when they released an open letter calling on their company to cease work as a contractor for Immigration and Customs Enforcement. According to the workers, Microsoft had a $19.4 million contract to help ICE develop its surveillance operations with data-processing and artificial-intelligence technology.

Despite the company’s claims that it did not directly partner with ICE on enforcement operations, the rebelling workers objected to any link to the agency that is ripping apart immigrant families and imprisoning refugees: “As the people who build the technologies that Microsoft profits from, we refuse to be complicit. We are part of a growing movement, comprised of many across the industry who recognize the grave responsibility that those creating powerful technology have to ensure what they build is used for good, and not for harm.”

Google executives got a similar disruption earlier this year when employees learned of its partnership with the Pentagon on “Project Maven,” an effort to weaponize artificial intelligence. Google’s contract for the project, which involved “using machine learning to identify vehicles and other objects in drone footage,” according to Gizmodo, was folded into a broader set of corporate-federal “partnerships” aimed at enhancing agencies’ cloud-computing systems. The project was set to yield as much as $250 million (though Google initially claimed the contract would be just $9 million) and, more crucially, would come with high-level security clearances that would lay the groundwork for future collaboration on military projects.

It’s unclear when exactly Google began cooperating on the project, but when Google staffers discovered the lucrative backdoor contract, the company had a mutiny on its hands.

Outraged Google workers took their fight public with a petition that garnered about 4,000 signatures, demanding that Google halt military-related projects and issue comprehensive safeguards against the potential abuses of machine learning by nefarious state or corporate actors: “We cannot outsource the moral responsibility of our technologies to third parties…. This contract puts Google’s reputation at risk and stands in direct opposition to our core values.”

A separate public petition by the Tech Workers Coalition, an emergent labor-advocacy group in Silicon Valley, declared: “Many of us signing this petition are faced with ethical decisions in the design and development of technology on a daily basis. We cannot ignore the moral responsibility of our work.”

Within weeks, Google announced that it would not renew the Project Maven contract when it expires in 2019, and soon issued revised internal ethical guidelines that cite “internationally recognized” human-rights standards. But as Wired reported, the company later seemed to backtrack, couching its prohibition against Pentagon collusion in boilerplate language about “not developing AI for use in weapons” while remaining open to “work with governments and the military in many other areas.” Following the fallout over Microsoft’s ICE contract, Microsoft CEO Satya Nadella likewise tried to publicly distance his company from the agency by denouncing Trump’s immigration measures.

But Google and Microsoft are hardly outliers in partnering with creepy government security ventures. Amazon markets its facial-recognition software as a law-enforcement tool—raising alarms about potential racial biases and privacy encroachment in police surveillance technology. Hewlett Packard Enterprise and Motorola have also scored hefty contracts to enhance ICE’s surveillance and data-gathering systems. For all of Big Tech’s user-friendly branding as a lifestyle booster and hub for world-changing scientific breakthroughs, its Orwellian ties to military and security agencies highlight Silicon Valley’s potential to extend the government’s reach into our networked lives.

For the Google workers, their employer didn’t get a pass simply because it coded an algorithm instead of pulling a trigger: Ultimately, they argued, “the technology is being built for the military, and once it’s delivered it could easily be used to assist in these tasks.”

The Tech Workers Coalition continues to press for deeper reforms, including the establishment of an independent oversight system and a code of “binding ethical standards” for tech frontiers such as AI, and has demanded that the major platforms and service providers that dominate the public and private sectors, including IBM, Microsoft, and Amazon, “stay out of the business of war.”

But there remains no firm legal firewall between tech that serves communities and tech that controls and exploits the public sphere. Across both policing and military sectors, the national-security apparatus in the post-9/11 era is increasingly invested in “information security.” Big Data and machine learning are supercharging federal surveillance capacity. Criminal-justice institutions are piloting controversial “predictive policing” programs that seek to profile potential criminals through demographic mapping. And the same companies that are fueling FBI fishing operations are also mining our social-media feeds, with virtually no systematic regulation of the porous interface between the security state and our online lives. So who writes the rules for transparency, ethics, and human rights for a globalized, digital citizenry?

One starting point for an open discussion about ethics in technology is the Toronto Declaration, a kind of living constitution for tech ethics that was created at the digital-activist convention RightsCon in Toronto this year. The manifesto outlines guiding principles for machine learning and artificial-intelligence producers on civil liberties, equality in access, and public transparency, including a commitment to “promote, protect and respect human rights” and to develop binding government policies and civil-society measures to hold corporations accountable to international human-rights standards.

Although the Toronto Declaration focuses on preventing discrimination in consumer-facing technology, the same principles could be applied to corporate partnerships with government, particularly the agencies tasked with executing justice or war. According to Drew Mitnick, an advocate with Access Now, “human rights law requires transparency from both sides.” Citing Apple’s recent clash with the FBI over access to encrypted iPhone data, he added that both companies and the government currently have outsize leeway to control the murky interactions between government security and privacy protection, without real public oversight. “So within the application of AI or machine learning, certainly there’s an obligation on both companies and government to be transparent.”

Who enforces that obligation, though, is a question of what kind of social contract is established at the intersection of civil society, Big Tech, and the state. Part of that may come through legislation or global accords, but the Microsoft and Google workers’ campaigns illustrate how solutions might come from within the tech workforce itself, as it encodes a new ethical framework in the form of a labor contract.

A glimpse of what that worker-driven social contract might look like emerged in the wake of the Project Maven crisis. At Stanford University, a Silicon Valley feeder campus, technology and engineering students declared a preemptive strike against technological evil, pledging not to interview with Google unless and until the company set a concrete policy against collusion with the government on military-related projects. The group behind the pledge, the Stanford Solidarity Network, explained via e-mail that its move was a way of showing the power of workers, even those who have yet to enter the workforce:

“Our effort to withhold labor as leverage is in part based on the recognition that the development of AI requires highly trained workers who are not in high supply. Using that power to collectively shape the tech industry that we would like to participate in is at the heart of our student movement. We recognize the huge influence that these technologies have on the future of the world and we want to be able to determine what our role is in that future.”

As the line between digital democracy and technological exploitation continues to blur, it might be up to workers and consumers to draft their own Information Age Bill of Rights. Whether they’re coding the programs, feeding the machines as users, or deploying their platforms in the functions of government, the infinite uses and abuses of our data demand that tech workers fight to reclaim the digital commons they themselves built, and put it back in the hands of the people.
