Fixing Tech’s Ethics Problem Starts in the Classroom

In the age of big data, some universities are trying to train future technologists to consider the implications of tools before they’re used.

As artificial intelligence proliferates, so does concern about its use in areas ranging from criminal justice to hiring to insurance. Many people worry that tools built using “big data” will perpetuate or worsen past inequities, or threaten civil liberties. Over the past two years, for instance, Amazon has been aggressively marketing a facial-recognition tool called Rekognition to law-enforcement agencies. The tool, which can identify faces in real time from a database of tens of millions of faces, has raised troubling questions about bias: Researchers at the ACLU and MIT Media Lab, among others, have shown that it is significantly less accurate in identifying darker-skinned women. Equally troubling is the technology’s potential to erode privacy.

Privacy advocates, legislators, and even some tech companies themselves have called for greater regulation of tools like Rekognition. While regulation is certainly important, thinking through the ethical and legal implications of technology shouldn’t happen only after it is created and sold. Designing and implementing algorithms are far from merely technical matters, as projects like Rekognition show. To that end, there’s a growing effort at many universities to better prepare future designers and engineers to consider the urgent questions raised by their products, by incorporating ethical and policy questions into undergraduate computer-science classes.

“The profound consequences of technological innovation…demand that the people who are trained to become technologists have an ethical and social framework for thinking about the implications of the very technologies that they work on,” said Rob Reich, a political scientist and philosopher who is co-teaching a course called “Computers, Ethics, and Public Policy” at Stanford this year.

Coursework on the ethics of technology is not entirely new: It emerged in universities in the 1970s and ’80s, with engineers collaborating with philosophers and others to develop course materials. ABET, an organization that accredits engineering programs, has required for decades that programs provide students with “an understanding of professional and ethical responsibility.” But how the requirement is carried out varies widely.

Casey Fiesler, a faculty member in the Department of Information Science at the University of Colorado Boulder, said that a common model in engineering programs is a stand-alone ethics class, often taught toward the end of a program. But there’s increasingly a consensus among those teaching tech ethics that a better model is to discuss ethical issues alongside technical work. Evan Peck, a computer scientist at Bucknell University, writes that separating ethical from technical material means that students get practice “debating ethical dilemmas…but don’t get to practice formalizing those values into code.” This is particularly a problem, said Fiesler, if an ethics class is taught by someone from outside a student’s field, and the professors in their computer-science courses rarely mention ethical issues. On the other hand, classes focused squarely on the ethics of technology allow students to dig deeply into complicated questions. “I think the best solution is to do both…but if you can’t do both, incorporating [ethics material into regular coursework] is the best option,” Fiesler said.
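What “formalizing values into code” can look like is easier to see with a concrete case. The sketch below is a hypothetical Python exercise of the kind such a course might assign; the helper function, the 10-person privacy threshold, and the data are invented for illustration and are not drawn from Peck’s or anyone else’s syllabus. It encodes a privacy commitment directly in a reporting helper, which refuses to publish statistics for groups small enough to risk identifying individuals.

```python
# A hypothetical illustration of formalizing a value (privacy) in code:
# a reporting helper that suppresses counts for very small groups.
from collections import Counter

MIN_GROUP_SIZE = 10  # an assumed privacy floor; the value is a design choice


def publishable_counts(records, field):
    """Count values of `field`, suppressing any group below MIN_GROUP_SIZE."""
    counts = Counter(record[field] for record in records)
    return {value: n for value, n in counts.items() if n >= MIN_GROUP_SIZE}


# Small groups never leave this function, so downstream code cannot
# accidentally expose them.
records = [{"zip": "80301"}] * 12 + [{"zip": "80302"}] * 3
print(publishable_counts(records, "zip"))  # {'80301': 12}; '80302' is suppressed
```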

The new generation of tech-ethics courses covers topics like data privacy, algorithmic bias and accountability, and job automation, and it often draws on concrete, real-world cases. For example, some classes consider criminal-justice algorithms, which many jurisdictions across the United States use to predict the chance that someone accused of a crime will be rearrested or fail to appear in court if released before their hearings. Pretrial risk-assessment algorithms often recommend that judges detain those with “high” risk scores in jail and release those with “low” risk scores. Proponents of such tools argue that they can help reduce the number of unconvicted people who are held in jail, currently nearly half a million on any given day across the country. But exactly how such algorithms are designed, implemented, and overseen is a major area of concern among advocates and researchers. For instance, by drawing on data like past arrests, these programs can perpetuate the racial skew that is already present in the criminal-justice system. Fixing this issue is far from straightforward: Work by academic researchers suggests that it’s not always easy to determine what algorithmic “fairness” means in the first place.
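To make that last point concrete, here is a minimal, hypothetical Python sketch with entirely invented numbers and no connection to any deployed risk-assessment tool. It shows how two common definitions of fairness can pull against each other when underlying re-arrest rates differ between groups: scoring both groups with the same threshold flags one group more often, while equalizing the flag rates raises the other group’s false-positive rate.

```python
# A minimal, hypothetical sketch of why algorithmic "fairness" is hard to
# define: two common criteria conflict when base rates differ. All scores
# and outcomes below are invented.

def rates(scores, outcomes, threshold=0.5):
    """Return (share flagged as high risk, false-positive rate) for one group."""
    flagged = [s >= threshold for s in scores]
    flags_on_negatives = [f for f, y in zip(flagged, outcomes) if y == 0]
    fpr = sum(flags_on_negatives) / len(flags_on_negatives) if flags_on_negatives else 0.0
    return sum(flagged) / len(flagged), fpr

# Risk scores and observed outcomes (1 = re-arrested) for two made-up groups.
group_a = ([0.2, 0.4, 0.6, 0.8], [0, 0, 1, 1])  # re-arrest base rate: 50%
group_b = ([0.2, 0.3, 0.4, 0.9], [0, 0, 0, 1])  # re-arrest base rate: 25%

for name, (scores, outcomes) in [("A", group_a), ("B", group_b)]:
    flag_rate, fpr = rates(scores, outcomes)
    print(f"group {name}: flagged {flag_rate:.0%}, false-positive rate {fpr:.0%}")
# group A: flagged 50%, false-positive rate 0%
# group B: flagged 25%, false-positive rate 0%

# One notion of fairness ("demographic parity") says both groups should be
# flagged at the same rate. Lowering group B's threshold to achieve that...
_, fpr_b = rates(*group_b, threshold=0.35)
print(f"group B at equal flag rate: false-positive rate {fpr_b:.0%}")
# group B at equal flag rate: false-positive rate 33%
# ...violates another notion (equal false-positive rates across groups).
```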

Several new programs are designed to assist computer-science professors and others with technical backgrounds who may not feel prepared to teach on philosophical and policy issues. Embedded EthiCS is an initiative at Harvard University in which philosophy faculty and grad students develop and teach ethics course modules in computer-science classes, in close collaboration with computer scientists. Another initiative, the Responsible Computer Science Challenge, a partnership between Omidyar Network, Mozilla, Schmidt Futures, and Craig Newmark Philanthropies, will award up to $3.5 million in grants for “promising approaches to embedding ethics into undergraduate computer-science education.”

Philosopher Shannon Vallor of Santa Clara University and computer scientist Arvind Narayanan of Princeton University have co-authored modules to embed ethics material in software-engineering classes, and made them available to other educators. Professors at more than 100 universities have requested to use them, according to Vallor. The goal of this kind of embedded ethics is to make such considerations part of students’ routine. “Habits are powerful: Students should be in the habit of considering how the code they write serves the public good, how it might fail or be misused, who will control it; and their teachers should be in the habit of calling these issues to their attention,” Narayanan and Vallor write.

Other academic programs are incorporating tech ethics by offering classes co-taught by faculty members from multiple fields. An example of this approach is this year’s “Computers, Ethics, and Public Policy” course at Stanford University, developed by Reich, computer scientist Mehran Sahami, political scientist Jeremy Weinstein, and research fellow and course manager Hilary Cohen. The course was first developed in the late 1980s. But this year’s version, with 300 students, includes faculty members and teaching assistants from a range of disciplines, allowing for deeper dives into ethical, policy, and technical topics. Students are given assignments in three areas: coding exercises, a philosophy paper, and policy memos.

Part of the impetus for developing a new version of the class, Reich said, is the huge popularity of computer science at Stanford in recent years. The class aims both to give engineering students familiarity with ethical and policy questions and to impart a technical understanding of algorithms and other tools to students in non-technical fields. The latter is critical as well, Reich emphasized: Students who go to work in public policy and other fields need familiarity with technology, as demonstrated by recent congressional hearings at which some legislators seemed unaware of basic facts about Facebook, like how the company makes money.

In approaching philosophical questions, Reich said, the course emphasizes that ethical awareness is an ongoing habit, not a fixed set of rules. “One of the big-picture messages that we want to get across through considering the complicated ethical terrain is that there’s no such thing as having an ethical checklist. [It’s not the case] that after you go through some exercise, you’re done with ethical compliance and you can stop thinking of the ethical dimensions of your work,” Reich said.

Vallor endorses the idea of ethics education as helping students develop moral awareness. In her book Technology and the Virtues, she argues that to be prepared to navigate the ethical challenges posed by new technology, students need to develop “practical wisdom,” a concept from the work of Aristotle and other philosophers. This involves cultivating a disposition to judge and respond wisely rather than leaning on moral scripts. When a technology we didn’t anticipate comes out five years from now, Vallor said, what will matter is having developed the skills to navigate the ethical issues it raises.

In addition to the movement toward ethics training in universities, researchers are forming new venues for socially and ethically aware research on technology. Solon Barocas, a faculty member in the Department of Information Science at Cornell University, co-founded a group called Fairness, Accountability, and Transparency in Machine Learning in 2014. Over the past two years, the group has held interdisciplinary conferences at which hundreds of people from fields including computer science, the social sciences, law, media and information studies, and philosophy, many of them students, have gathered to discuss tech-related ethical and policy issues.

Just a few years ago, Barocas says, work on these topics was relegated to workshops within larger machine-learning conferences. But now, ethical questions have entered the mainstream and are seen as urgent research problems for the field to grapple with. More than ever, Barocas says, students are motivated to combine their interest in social change with their interest in computing. Compared with even just a few years ago, when he was in graduate school, there is now a much clearer path for them to do so.

Even with the increased attention and support for the ethics of technology, much work remains to be done. There’s a need, for example, to reach people beyond those involved in higher education. Within the tech industry, Barocas and Vallor said, some companies are looking for ways to engage with the ethical implications of their work, and are hiring people to focus on these issues. (Both have personal experience with this: Barocas is taking a year off from Cornell to work at Microsoft Research, and Vallor is an AI ethicist and visiting researcher at Google.) In collaboration with colleagues at Santa Clara University’s Markkula Center for Applied Ethics, Vallor has also developed tech-ethics modules for companies to use in staff training. And nearly 400 people, many of whom work for tech companies in Silicon Valley, have enrolled in the Ethics of Technological Disruption, a Stanford continuing-studies course featuring many guest speakers and taught by the same group of faculty that developed the tech-ethics course for undergraduates.

There’s also a need to bridge the gap between the academic and industry conversation about tech ethics and the wider community, particularly the people most likely to be affected by potential bias in tools such as predictive-policing and risk-assessment algorithms. While advocates and researchers are doing some work to reach out to affected communities, Barocas said, “there’s still not nearly enough interaction between the communities that are most deeply affected by these kinds of problems, and the people doing research on them.”

Finally, some experts argue that tech ethics must also consider cases in which technology shouldn’t be emphasized as a solution. In a recent session of the Ethics of Technological Disruption, guest speaker Safiya Umoja Noble brought up the enormous and disproportionate toll that the mortgage crisis took on African-American wealth. Noble, author of the book Algorithms of Oppression and a faculty member at the University of Southern California, said, “We could say that we need to tweak the algorithm. But I think there’s a different set of conversations that we need to have, about the morality of projects that are just everyday business.” As Noble and other researchers like Virginia Eubanks argue, part of the conversation, and part of tech-ethics education, should include thinking through how to put technology in its proper place.
