Cambridge Analytica Showed Us the Dangers of ‘Academic Commercialism’

What happens to ethics when universities rush to monetize their work?

Universities are diving more and more deeply into academic commercialism. They prod their faculties and graduate students to convert nonprofit research—often heavily subsidized by taxpayers—into their own for-profit start-ups or collaborations with existing companies as fast as possible. That’s especially true in fields developing disruptive technologies of unprecedented power, such as artificial intelligence, robotics, and extreme forms of genetic engineering.

But at what price, in terms of compromised public interests? As institutions, faculties, and graduate students become more eager to monetize revolutionary inventions, it’s not surprising that they often fail to pause for rigorous ethical and safety analyses. Many give at least lip service to the idea of “public engagement” in discussions about how—although usually not whether—ethically fraught lines of innovation should be widely adopted. But they move so quickly to develop and market new technologies that it’s often too late for the kind of broadly participatory political deliberations about benefits and harms, and winners and losers, that the public has a right to demand. After all, the social and ecological consequences may affect us all.

Consider the Cambridge Analytica scandal that broke wide open two years ago: a massive privacy breach that exposed tens of millions of people’s personal data from Facebook, which was then used to micro-target political messages to voters in the 2016 US presidential election. The media has mostly skewered Aleksandr Kogan, the young psychologist, then at the University of Cambridge, who in 2014 struck what has been widely condemned as a devil of a deal between his own private start-up and Cambridge Analytica’s parent company. What happened at Cambridge before that deal, though, deserves more scrutiny. It suggests that the university, as an institution, and its Psychometrics Centre unwittingly provided cues for many of Kogan’s later missteps.

In fact, the university had highlighted the idea of psychological targeting in politics before Kogan was even hired. In 2011—the year before he arrived and about three years before he began working with Cambridge Analytica’s parent company, SCL Group—Cambridge University published a news article reporting that researchers at the Psychometrics Centre were pursuing tantalizing new technical possibilities for psychologically targeting people in advertising.

The article focused on a new online marketing tool called LikeAudience, which could generate an average psychological and demographic profile of people who shared a particular “like” on Facebook, or identify Facebook “likes” that would appeal to people of a particular psycho-demographic profile. It noted that LikeAudience drew only on anonymized information from people using Facebook apps who had agreed to let their data be used for it. But there was no discussion of the obvious possibility that such methods, if one of the world’s oldest and most respected academic institutions continued to help refine and tacitly endorse them, might inspire abuses—for example, micro-targeting tools that could threaten fair and free elections. Instead, the university presented the psychological and demographic traits that LikeAudience revealed about typical Facebook fans of then-President Barack Obama and of other political leaders in the United States and the United Kingdom, including Sarah Palin and then–UK Prime Minister David Cameron.
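
To make the described functionality concrete, here is a minimal, hypothetical sketch of the two kinds of queries attributed to LikeAudience, assuming a table of consenting app users whose page likes and test-derived profiles are both on record. The data, column names, and traits below are invented for illustration; this is not the actual tool or its code.

```python
# Hypothetical sketch of the two LikeAudience-style queries described above.
# Data, column names, and traits are invented; this is not the real tool.
import pandas as pd

# Consenting users: one row per (user, liked page), plus trait scores per user.
likes = pd.DataFrame({
    "user": ["a", "a", "b", "c", "c", "d"],
    "page": ["Obama", "Jazz", "Obama", "Jazz", "Hiking", "Hiking"],
})
traits = pd.DataFrame({
    "user": ["a", "b", "c", "d"],
    "openness": [0.9, 0.7, 0.4, 0.2],
    "extraversion": [0.3, 0.8, 0.6, 0.5],
    "age": [24, 31, 45, 38],
})

merged = likes.merge(traits, on="user")

# Query 1: average psycho-demographic profile of people who share a given like.
profile_per_page = merged.groupby("page")[["openness", "extraversion", "age"]].mean()
print(profile_per_page.loc["Obama"])

# Query 2: rank likes by how close their average profile sits to a target profile.
target = pd.Series({"openness": 0.8, "extraversion": 0.4, "age": 27})
scaled = (profile_per_page - profile_per_page.mean()) / profile_per_page.std()
scaled_target = (target - profile_per_page.mean()) / profile_per_page.std()
distance = ((scaled - scaled_target) ** 2).sum(axis=1) ** 0.5
print(distance.sort_values().head())
```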

The article also suggests that both Centre researchers and the university anticipated that this line of technical development could revolutionize not only the marketing of consumer products and brands but also the selling of political candidates to voters, and that the university was willing to help the two young researchers, Michal Kosinski and David Stillwell, advertise that the tool was now available, apparently for political and commercial campaigns. (Both were graduate students at the time: Kosinski at Cambridge and Stillwell at another university.)

“LikeAudience’s creators believe that it will be of particular value to marketers, who will be able to uncover new potential audiences for their advertising campaigns, and exploitable niches based on the fans of their closest rivals,” the university declared. “The potential significance for politicians, particularly when on the election trail, is also clear.”

LikeAudience was available on a free website. But Kosinski and Stillwell had already formed their own spinoff company, Cambridge Personality Research Ltd., the year before, and by 2012 it was selling a more advanced product: Preference Tool.

The line of technology the Centre had begun contributing to involves first collecting—in the Centre’s case, with people’s permission—academic-caliber psychological test scores, demographic information, and some record of online behavior. That data can then be analyzed to detect patterns and to generate psycho-demographic profiles of other people for whom only a subset of such information is available.
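
Broadly, that kind of pipeline can be illustrated with standard machine-learning tools: build a sparse user-by-like matrix, compress it into a handful of latent dimensions, and fit a regression from those dimensions to the test scores of users who took a psychometric test; the fitted model can then estimate the same trait for users who only left likes behind. The sketch below uses toy random data and generic scikit-learn components purely for illustration; it is not the Centre’s actual code or data.

```python
# Minimal, illustrative sketch of psycho-demographic prediction from "likes".
# Toy random data and generic components; NOT the Centre's actual pipeline.
import numpy as np
from scipy.sparse import csr_matrix
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# 1,000 hypothetical users x 500 possible page likes (1 = user liked the page).
likes = csr_matrix((rng.random((1000, 500)) < 0.05).astype(float))

# A trait score (say, extraversion) is known only for the first 600 users,
# who consented to take a psychometric test.
known_scores = rng.normal(size=600)

# Step 1: compress the sparse like matrix into a few latent dimensions.
svd = TruncatedSVD(n_components=40, random_state=0)
latent = svd.fit_transform(likes)

# Step 2: fit a regression from latent like-patterns to test scores.
model = Ridge(alpha=1.0).fit(latent[:600], known_scores)

# Step 3: estimate the trait for users who never took the test.
predicted = model.predict(latent[600:])
print(predicted[:5])
```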

By 2012, the Centre was publicizing LikeAudience in a list of its own “Products/Services” on its own website as “a breakthrough research and marketing tool,” and announcing that Preference Tool was also now available from Cambridge Personality Research Ltd., “a spin-off of The Psychometrics Centre.” The latter product, the Centre proclaimed, could “significantly improve targeting and reduce the cost of marketing campaigns,” and was already used by leading online marketing agencies, for whom it had “increased campaign effectiveness by up to 140%.”

Starting with the Facebook app myPersonality (launched by Stillwell as his own business in 2007, before his work with the Centre), Centre researchers over time collected, apparently insecurely anonymized, and stored the personal data and psychological-test results of millions of people, including minors, for commercial as well as academic studies.

Until about 2013, it was a very different world, Kosinski recalled in an interview. An awareness of the serious potential for harmful consequences from analyzing digital footprints was only beginning to emerge. That it did, he suggests, had a lot to do with the evolution of his own and the Centre’s work.

As the Centre’s expertise grew more sophisticated, its researchers began speaking publicly about the risks involved with the powerful predictive methods that the Centre was trying out. By 2012, Kosinski, Stillwell, and their co-authors had begun including generic warnings about privacy in their papers. Then, in a major 2013 paper the two published with a Microsoft colleague in a prestigious journal, their warning was more detailed and specific, pointing out that such methods, in the wrong hands, could lead to security breaches whose exposure of sensitive personal information could even be life-threatening.

Still, they devoted much more attention in that 2013 article and much of their published work (at least until the Cambridge Analytica scandal broke) to describing the ingenious techniques they’d devised to improve their own predictive modeling. Meanwhile, neither Stillwell nor Kosinski, in a long string of research publications, listed conflicts of interest related to their start-up, Cambridge Personality Research (CPR). They founded it in 2010, were the only initial directors and shareholders, and did not file paperwork to dissolve the company—which ultimately was a bust—until July 2015. Both, in e-mails, indicated they did not consider the company a conflict to be reported.

The students’ for-profit work seems to have dovetailed closely with the Centre’s nonprofit research. CPR proclaimed that it was harnessing Cambridge University’s “global leadership” in psychology, neuroscience, and statistics, and its “mountains” of data “for commercial use.” CPR advertised that its statistical methods were based on years of research at the university, and touted its “Cambridge 100-factor model, a unique statistical tool to predict the behaviour of any individual or group. We can model and predict the personality of any brand, product, action, audience or keyword.”

When Kosinski and Stillwell decided to concentrate on being academics and gave up on the company, the Centre not only described Preference Tool, the company’s product, as its own but made it available to businesses and other clients who were willing to help fund the Centre’s research in exchange.

Along the way, the Centre also pioneered another idea that Kogan later riffed on in a far more daring way: Help an ambitious company—in the Centre’s case, the marketing-research firm CubeYou—by relaunching a Facebook app originally developed by Centre researchers in a way that would give the company access to the app’s raw data, which included, at least according to the public website for the app, the personal data of friends of the app’s users.

The original version of the YouAreWhatYouLike app was designed in 2011 by Kosinski and Stillwell, apparently allowing them to collect more data to improve their predictive models. Later it became an in-house project at the Centre. It collected people’s Facebook likes to generate predictions for them of what those preferences revealed about their personality. But after the Centre began collaborating with CubeYou in 2013—the year before Kogan’s infamous deal—a new version of the app was launched, with dramatically different terms of use.

YouAreWhatYouLike began requiring Facebook subscribers who wanted to use the app to allow both the Centre and CubeYou access to “your and your friends’ public profile, likes, friends list, e-mail address, relationships, relationship interests, birthday, work history, education history, hometown, interests, current city and religious and political views.” (The new version predicted the personalities of both app users and their friends.)

The Centre promised that it and CubeYou would anonymize data before sharing it or any derivatives from it with others. But by allowing friends’ personal information to be collected and stored without their knowledge or permission, the website terms violated an ethical standard the Centre had long championed—that data should only be collected and stored from people who have given their consent.

John Rust, founder and recently retired director of the Centre, told The Nation that neither he nor the Centre had anything to do with CubeYou or the YouAreWhatYouLike app. Kosinski, the Centre’s deputy director at least between 2012 and mid-2014, told The Nation that there was a partnership, but that it was just a proof of concept that didn’t make any money. David Stillwell, who took over as deputy director when Kosinski left, did not answer e-mailed questions about it.

Vesselin Popov, who joined the Centre in 2013 and is now its executive director, declined to answer most questions for this article. But in an e-mail, he did state that the YouAreWhatYouLike app made by Kosinski and Stillwell was “separate” from an app called the same thing that “was made by CubeYou and which used our algorithms based on our published PNAS paper for the purpose of showing predictions to users.”

He can’t speak for CubeYou, he added, “but the Centre only analyzed two of those fields that you’ve listed, which were the user’s Likes and their friends’ Likes. The Likes were sent to our API for the purpose of generating predictions and then those predictions were shown immediately to the user. We never stored or analyzed any of those other types of data.”

The Centre, he wrote, only used friends’ “likes” data to provide insights to participants, in a way Facebook allowed at the time. The Centre “has never used data from friends of users for research nor for any other purpose.”

As late as March 2015, however, the site for the new version of the app emphasized the Centre’s key role in this collaboration. It used the same URL as the original, and the Centre’s website continued linking to that home page once the Centre-CubeYou version was live. The Privacy and Terms page didn’t mention that your friends’ data would be accessed. But the “How Does It Work” page did, and the About page also indicated that YouAreWhatYouLike was developed by Kosinski and Stillwell. The joint version apparently stopped operating by May of that year, after new Facebook rules took effect.

In 2018, after a CNBC report on YouAreWhatYouLike in the midst of the Cambridge Analytica scandal, the Centre posted a note on the app’s website stating that CubeYou “created” the 2013 app and the Centre had administered the website URL—not mentioning the role touted earlier of Kosinski and Stillwell in developing this “one-click” personality test.

Federico Treu, CubeYou’s founder and, as of 2018, its CEO, could not be reached for comment. According to CNBC’s 2018 story, Treu “denied CubeYou has access to friends’ data if a user opted in, and said it only connects friends who have opted into the app individually.” It isn’t clear, though, whether he meant what CubeYou had access to when its collaboration with the Centre was active, back in 2013–15, or what it had access to in 2018.

The Centre referred to its own data haul from YouAreWhatYouLike in marketing its products and consulting services to help fund its research. By 2014, its flyer touting “Benefits of Partnering with the Psychometrics Centre for Collection and Analysis of Big Data” claimed www.youarewhatyoulike.com—without specifying a particular version of the app—had generated “one of the largest databases of behavioral data in history.” (The flyer added that the Centre’s Apply Magic Sauce, its new product for predictive analytics, could “translate digital footprints into detailed psycho-demographic profiles.”)

By 2016, the Centre was ready to move out of the Cambridge Department of Psychology. Its new home was the University of Cambridge Judge Business School, where it could help JBS Executive Education Limited, a related for-profit company that is wholly owned by the university, serve the company’s network of global clients. The company makes annual “gift aid distributions” to Cambridge University, which totaled about £1.24 million in 2019, or about $1.6 million. Starting in 2016, then, such “gifts” apparently benefit from any earnings generated by the Centre’s work helping businesses exploit its expertise in psychological testing—and its “groundbreaking” applications of predictive Big Data techniques.

The business school has made clear that includes “psychological marketing with Big Data” and that the Centre can demonstrate how “communications tailored to someone’s personality increases key metrics such as clicks and purchases.”

Was it wise for a nonprofit institution, supported by public tax dollars, to start down this path in the first place—applying its academic prowess in psychological assessment in ways that empower online marketing? It’s one thing for an institution to study such new technical territory from the outside. But when it also tries to market its results from the inside, how can it maintain its traditional role as a trustworthy, independent source of critical analyses for such new technologies? In effect, has Cambridge University compromised its own academic capacity to challenge the emerging market for machine-driven, personalized mass persuasion?

Such automated psychological manipulation now poses a significant social issue. Consider the analysis of Cathy O’Neil, author of Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. The growing integration of Big Data, artificial intelligence, and predictive behavioral modeling, she writes, lets data-crunching machines sift “through our habits and hopes, fears and desires,” searching for patterns to predict how best to exploit our weaknesses and vulnerabilities. The result, she points out, is predatory advertising: ads that can “zero in on the most desperate of us at enormous scale,” and sell those in greatest need “false or overpriced promises.”

“They find inequality and feast on it,” she adds in her book. “The result is that they perpetuate our existing social stratification, with all of its injustices.”

To be fair, what happened at Cambridge University partly reflects the pressure on academics today to shape their research to meet market demands in the hope of competing successfully for the funding they need to conduct any research at all.

For US research institutions, the immediate financial rewards of commercializing their research in 2017 hit a record $3 billion in licensing income (before expenses) from related intellectual property, according to the AUTM US Licensing Activity Survey: 2017. The latest trend is for institutions to push employees and students to start their own companies—in the United States, at least 1,080 start-ups in 2017 and again in 2018. Universities often receive equity in these spinoffs or collect royalties and licensing fees more directly from companies. The start-up trend, which is up almost a third from five years earlier, further blurs the line between institutions’ nonprofit mission and for-profit aspirations.

Politically, the surge in academic commercialism—including universities selling researchers’ expertise, as paid consultants, and working intimately in joint projects with existing companies—has won bipartisan applause in the United States. The trend grew quickly after Congress passed the Bayh-Dole Act in 1980, which gave universities the authority to own and patent inventions funded by the federal government. And it has coincided with the declining role of governments in funding academic research. In the United States, for example, the federal government’s share of total support for academic R&D peaked in 1973, at 69 percent. By 2016, it was down to 54 percent, according to the US National Science Board. Over that period, state and local support fell from about 10 percent to less than 6 percent, while business support roughly doubled, from about 3 percent to about 6 percent. Academic institutions’ own resources, as the source of their R&D spending, rose from 11 percent to 25 percent of the total.

Unfortunately, nothing similar to the Bayh-Dole Act exists to push universities to partner with nonprofits—like worker organizations, environmental and other public-interest groups, and grassroots community organizations—to offset the pressures on institutions to focus on generating high-tech, for-profit products. The ethical perils of assuming, by default, that the pursuit of such products will serve universities’ public-service mission seem obvious.

Yes, many beneficial products—such as lifesaving medications—have been commercialized from university labs: Twenty-four percent of drugs approved by the FDA between 1997 and 2005 were developed in university labs. But how differently might technologies have evolved over the past 40 years if institutions receiving research funds from taxpayers had been required to fully integrate a broad range of public-interest stakeholders to guide their technology-transfer efforts?

Imagine if panels of such stakeholders routinely scrutinized the potential harms as well as potential benefits—not just the commercial potential—of possible research applications. And what if such public-interest stakeholders routinely advised universities on how to better align their research agendas not primarily with commercial opportunities but with the most pressing human needs—such as for economic justice, peace-building, climate action, and, as the current pandemic highlights, good health care for all? How Cambridge navigated such issues, related to the development of methods that exploit digital footprints, poses a cautionary note for other institutions.

Key actors in the Cambridge chapter that preceded Kogan’s Facebook imbroglio include: Kosinski, now an associate professor at the Stanford University Graduate School of Business but still a Centre associate; Stillwell, now the Centre’s academic director; Rust, a renowned psychometrician who retired from the Centre a year ago; and the university itself—including Cambridge Enterprise, its wholly owned subsidiary.

Centre researchers were not responsible for Kogan’s work with SCL. Still, through the years, Stillwell and Kosinski, with Rust’s support, did develop a powerful model for other graduate students and young researchers, such as Kogan, that may have been tempting to abuse.

First, it would entice huge numbers of Facebook subscribers, often from the age of 13 up, to take academic-quality personality and ability quizzes for “fun” and self-discovery. It would encourage them to let their scores, Facebook profiles, and likes be recorded by signing consent forms that refer in general terms to this very personal data’s being anonymized and applied for “research” and “academic and business purposes.” And it would push test-takers to invite their friends to participate, to help the app go viral. In effect, those who consented to leave their data behind became research subjects—albeit anonymous ones—in what could be an endless variety of academic and business experiments even years later.

Their names were not traded. But the applications derived from this data, or from this model, included products and consulting services with a special allure for marketers ever seeking to more precisely target emotional cues to move people to buy what they’re selling, from cosmetics to candidates.

As for the university, consider its early, uncritical publicity about LikeAudience. It also cultivated a culture that assumed connecting even the work of students to commercial ventures is a good idea. In 2006, the university formed Cambridge Enterprise, which helps staff and students commercialize their expertise and inventions—including even “the seed of an idea”—with the goal of benefiting society. That involves converting them into “opportunities that are attractive to business and investors.”

By the end of 2010, Cambridge Enterprise held equity in 72 companies based on Cambridge research, was investing cash in three seed funds to jump-start Cambridge spinouts, and had garnered £8.4 million in income for the year from licensing research, equity realizations, and consultancies negotiated for academics with business and government.

It was in that cultural environment that Kosinski and Stillwell registered their own start-up. The idea of predicting psychological traits from people’s digital traces—especially by incorporating actual psychological tests—seemed like just the kind of novel, exciting idea the university was encouraging academics to take to market.

In fact, over the years, Cambridge Enterprise helped the Centre negotiate consultancies that supported its research. By 2015, Cambridge Enterprise had dubbed John Rust an “Enterprise Champion”: someone to serve as a first point of contact for colleagues interested in finding out how to commercialize their work. The short bio it posted for Rust, as a member of its team, noted the Centre’s expertise in implementing “human predictive technology” at low cost.

Since the 1980 Bayh-Dole Act, universities in many other countries have also been under political pressure to work more closely with businesses to boost economic growth. They’re also under financial pressure to do so to fund their research enterprises. At first, that mainly meant persuading corporations to support campus research, raising concerns about the conflicts of interest that such corporate sponsorship posed. But the push at Cambridge for faculty and graduate students to directly collaborate in joint projects with companies and to form their own companies was part of the academic evolution to increasingly hands-on forms of commercial activity.

Rust recalls that this shift toward promoting academic start-ups was just getting under way around 2010. In moving so quickly to form their own company, Kosinski and Stillwell were part of a large cohort of academics doing the same thing. “It was what you would do if you had a new idea,” he says.

Kosinski says he was simply trying to survive as a second-year graduate student on a disposable income of about £200 a month. So yes, trying to make money from his own bright idea made sense to him at the time. “I was a young student excited about all of the opportunities,” he recalls.

The university’s interest in trying to help commercialize the results of the Centre’s work was also gratifying. Most people just saw “an old geezer and a couple of PhD students” messing around with Facebook, Rust says; the three of them, however, “could see that it was revolutionary.”

It’s true that early on they publicized the idea that the Centre’s research could be used to help political candidates. But they never anticipated the actual manipulation of elections, Rust adds.

Like most start-ups, Kosinski and Stillwell’s company, Cambridge Personality Research, did not take off. Specializing in “personality-based targeting” for marketing, it seems to have made a bit of money. Kosinski, Stillwell, and a partner they brought in to run the business charged a monthly fee of $999 for ad agencies to use Preference Tool. But by the time the company was dissolved in 2015, public records indicate the three had shared £5,000 in dividends, received advances of more than £10,000 each, and ended up not only paying back that amount but also chipping in a few thousand more each, to pay off bills.

It didn’t take long, Kosinski says, for Stillwell and him to decide they would rather focus on their academic careers than continue trying to make it as private entrepreneurs. They closed CPR down for good in 2015. He adds that they never patented or copyrighted the related intellectual property, which is all in the hands of Cambridge University. Their primary interest and motivation, he emphasizes, was about research—not about making money.

The furor over Facebook breaches has only reinforced the wisdom of their decision. In fact, Kosinski calls himself a whistle-blower, saying he spent months gathering evidence about the scandal to provide to the media, including The Guardian, which first broke the news in December 2015 that Cambridge Analytica had psychologically profiled and targeted American voters with Kogan’s help. The Guardian recently added a credit to Kosinski, as a source, at the end of that story.

But the Centre’s unwitting connection to the scandal has not escaped official notice. In a 2018 report on the use of data analytics for political purposes, in response to Facebook privacy breaches, the UK’s Information Commissioner’s Office noted the Centre’s role in pioneering the targeting techniques at the core of its investigation. It also questioned whether universities have been blurring the lines between their own research and the for-profit enterprises of their faculties and students in ways that place the protection of research subjects’ data in jeopardy.

Cambridge University apparently made an assumption that may have eased the way for such blurring: that universities can fulfill their obligation to make sure their research serves the public interest by rapidly converting it to products of interest to the for-profit market. Across the United States, that trend is common as well. Many US universities have their own technology-transfer offices charged, like Cambridge Enterprise, with making sure that intellectual property created by faculty or students with university resources is claimed by the institution and commercialized as quickly as possible.

Josh Farley, professor of ecological economics at the University of Vermont, explains how this academic “obsession” with taking part in for-profit markets conflicts with public interests. Much of the research needed to meet the most pressing social and ecological needs may not be easily converted into private profits. Markets, he notes, cannot be counted on to direct resources to those who have unmet needs—only to those who have both money and unmet wants. So it’s irrational, he argues, for universities to assume they can rely on the logic of the market alone to generate the knowledge and technologies to, for example, help the world’s poor meet their basic needs, or to disseminate such resources fairly. “Universities,” Farley laments, “are not for the common good anymore. We’re for corporate profits.”

Cambridge University declined to answer specific questions for this story, except to confirm that Kogan has not been employed there since September 2018. Instead, it sent a short statement emphasizing its “rigorous and independent ethical review process that is proportionate to the potential risk,” and that where “appropriate, research is reviewed by a School-level or University Research Ethics Committee.”

But it also noted that its vice chancellor has established an Advisory Working Group on Research Integrity. That group will review how the institution manages research involving people’s personal data and conflicts of interest that stem from Cambridge employees’ private enterprises. It will also review the training the university provides to its staff and students to safeguard the integrity of its research.

In the last few years, Rust, Kosinski, and Stillwell have been presenting themselves as prophets. The Centre’s relentless focus on more accurate predictive techniques, they say, reflected their desire to warn the public about digital privacy issues. Even years ago, they note, they were also emphasizing the need for people to have control over their own personal information and that any online tracking should be transparent. By 2017, the Centre had posted a statement disavowing any connection to Kogan’s work with SCL and summarizing the “strict” ethical requirements for anyone who wants to use its current predictive tool, Apply Magic Sauce. For example: “Nobody should have predictions made about them without their prior informed consent.”

Kosinski recounts rejecting “many lucrative offers, in order to keep my integrity,” and points to his call in a 2013 Financial Times op-ed for new policies and tools to help people control their own data.

Stillwell lists a string of governmental data-protection reports that have cited their research in calling for new privacy legislation. He and Kosinski also cite improved practices at social media companies, like Facebook, in response to their work. “Far from acting as what you call academic capitalists,” Stillwell argued in an e-mail, “many of our published research papers have attracted considerable media attention which has embarrassed powerful social media companies into changing their practices for the better…. I believe that we played a vital role in changing the conversation around social media data.”

Cambridge is no outlier, in terms of encouraging businesses to pursue the commercial potential of such powerful new tools despite their ethical risks and temptations. Consider Stanford’s integration of psychological targeting into its trainings for business executives. Nearly a year after The Guardian’s first story about Cambridge Analytica, Stanford’s Graduate School of Business featured Kosinski at an on-campus Digital Marketing Conference, where he spoke to industry professionals and academics about “Psychological Targeting and Tailoring in Advertising.” That 2016 conference was sponsored by the business schools of Stanford and another university.

Kosinski also will be on the faculty of a one-week, $13,000 training this summer that is part of the Stanford business school’s Executive Education program: “Harnessing AI for Breakthrough Innovation and Strategic Impact.” His session will focus on “Training Artificial Intelligence to Understand Humans,” including how marketing and other industries can reap the revolutionary potential of applying Big Data methods to large populations to assess intimate psycho-demographic traits, while also avoiding ethical pitfalls, such as “significant privacy risks.” Overall, the training will help well-heeled participants, including decision-makers from “any industry” and “any country,” learn how to use artificial intelligence “to gain a competitive edge,” while also evaluating “ethical and social implications.”

Stanford’s business school had this comment: “Dr. Kosinski has been tirelessly working on educating scholars, executives and the general public about the privacy risks posed by the trend you noted: the ‘fast, early commercialization of research to bring to market powerful new technologies.’”

Institutions across the United States now routinely encourage researchers to develop and commercialize powerful technical advances before full-scale, public, probing ethical analyses can take place. Their institutional review boards (IRBs) conduct only narrower reviews. They consider, for example, any immediate safety issues a particular research proposal poses and the impact on any human research subjects. But now that universities often have potential intellectual property at stake, the independence of IRBs is an issue. Nor are they designed for critically analyzing a proposed project’s larger context—its broader social and ecological implications, especially in terms of the whole line of unprecedented technical ambitions it’s intended to advance.

Considering such broader questions in the IRB process “is really off the table,” notes developmental biologist Stuart Newman, a professor at New York Medical College. Newman is co-author, with bioethics historian Tina Stevens, of Biotech Juggernaut: Hope, Hype, and Hidden Agendas of Entrepreneurial BioScience. He points to the academic life sciences as an area that may be especially vulnerable to ethical breaches today, given its intense commercialization.

The growth in university start-ups is of particular concern. When researchers and students are pressed by universities to start their own companies, they may or may not care how much money they will make if those companies commercially succeed. But the very act of founding a start-up—or in a university’s case, owning related intellectual property or equity—implies an ideological bias that the product or service one has in mind is worth buying, and better than alternatives already on the market. Overcoming such bias in future related research or in scholarly debates—especially in institutional deliberations about potential harm from the line of products under development—may be difficult.

The purpose of this case study is not to impugn the character or question the integrity of any individual researcher or institution. Nor is it to suggest that any of the researchers mentioned were primarily driven by financial considerations rather than by an interest in advancing research in an area that fascinated them and, as its dangers became clear, in warning of its potential abuse.

Instead, the intent here is to spur critical reflection about an academic culture that emphasizes the fast conversion of disruptive ideas into powerful new technical applications for sale. Critics have long warned that individual researchers’ financial interests can unintentionally bias the outcome of studies. But what of institutions’ own emphasis on commercializing the expertise, ideas, and inventions of their staffs and students? Does such a culture foster ethical naïveté and unintentionally, but predictably, help set the stage for future ethical quagmires? Likewise, does excitement about ideas that promise strong returns to investors divert academic resources from less profitable ideas of greater social and ecological urgency?

The Covid-19 pandemic underlines the relevance of such questions, as researchers race to develop treatments and vaccines. The fruits of such research should be not only effective but also safe and accessible to all. Must they also promise ample profits?
