Universities are diving more and more deeply into academic commercialism. They prod their faculties and graduate students to convert nonprofit research, often heavily subsidized by taxpayers, into their own for-profit start-ups or collaborations with existing companies as fast as possible. That's especially true in fields developing disruptive technologies of unprecedented power, such as artificial intelligence, robotics, and extreme forms of genetic engineering.
But at what price, in terms of compromised public interests? As institutions, faculties, and graduate students become more eager to monetize revolutionary inventions, it's not surprising that they often fail to pause for rigorous ethical and safety analyses. Many give at least lip service to the idea of "public engagement" in discussions about how, although usually not whether, ethically fraught lines of innovation should be widely adopted. But they move so quickly to develop and market new technologies that it's often too late for the kind of broadly participatory political deliberations about benefits and harms, and winners and losers, that the public has a right to demand. After all, the social and ecological consequences may affect us all.
Consider the Cambridge Analytica scandal that broke wide open two years ago: a massive privacy breach that exposed tens of millions of people's personal data from Facebook, which was then used to micro-target political messages to voters in the 2016 US presidential election. The media has mostly skewered Aleksandr Kogan, the young psychologist then at the University of Cambridge who in 2014 struck what has been widely condemned as a devil's bargain between his own private start-up and Cambridge Analytica's parent company. What happened at Cambridge before that deal, though, deserves more scrutiny. It suggests that the university, as an institution, and its Psychometrics Centre unwittingly provided cues for many of Kogan's later missteps.
In fact, the university had highlighted the idea of psychological targeting in politics before Kogan was even hired. In 2011 (the year before he arrived, and about three years before he began working with Cambridge Analytica's parent company, SCL Group), Cambridge University published a news article reporting that researchers at the Psychometrics Centre were pursuing tantalizing new technical possibilities for psychologically targeting people in advertising.
The article focused on a new online marketing tool called LikeAudience, which could generate an average psychological and demographic profile of people who shared a particular "like" on Facebook, or identify Facebook "likes" that would appeal to people of a particular psycho-demographic profile. It noted that LikeAudience drew only on anonymized information from people using Facebook apps who had agreed to let their data be used for it. But there was no discussion of the obvious possibility that such methods, if one of the world's oldest and most respected academic institutions continued to help refine and tacitly endorse them, might inspire abuses, for example, micro-targeting tools that could threaten fair and free elections. Instead, the university presented the psychological and demographic traits that LikeAudience revealed about typical Facebook fans of then-President Barack Obama and other political leaders in the United States and the United Kingdom, including Sarah Palin and then UK Prime Minister David Cameron.
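The basic operation described here, averaging the traits of everyone who shares a given "like", is simple aggregation. The sketch below illustrates the general idea with entirely invented page names, users, and numbers; it is not LikeAudience's actual code or data model, just a minimal stand-in for the kind of per-"like" profiling the article describes.

```python
from collections import defaultdict
from statistics import mean

# Invented sample: each user has a set of liked pages, a personality
# trait score (here, "openness"), and a demographic field (age).
users = [
    {"likes": {"PageA", "PageB"}, "openness": 0.8, "age": 24},
    {"likes": {"PageA"},          "openness": 0.6, "age": 30},
    {"likes": {"PageB"},          "openness": 0.2, "age": 46},
]

# Group each user's (trait, age) pair under every page they liked.
by_page = defaultdict(list)
for user in users:
    for page in user["likes"]:
        by_page[page].append((user["openness"], user["age"]))

# Average psycho-demographic profile of the fans of each page.
profile = {
    page: {
        "openness": mean(o for o, _ in rows),
        "age": mean(a for _, a in rows),
    }
    for page, rows in by_page.items()
}
print(profile["PageA"])
```

Run in reverse (scanning pages for the profile closest to a target audience), the same table supports the second use the article mentions: finding "likes" that appeal to a given psycho-demographic profile.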
The article also suggests that both Centre researchers and the university anticipated that this line of technical development could revolutionize not only the marketing of consumer products and brands but also the selling of political candidates to voters. It suggests, too, that the university was willing to help the two young researchers, Michal Kosinski and David Stillwell, advertise that the tool was now available, apparently for political and commercial campaigns. (Both were graduate students at the time: Kosinski at Cambridge and Stillwell at another university.)
"LikeAudience's creators believe that it will be of particular value to marketers, who will be able to uncover new potential audiences for their advertising campaigns, and exploitable niches based on the fans of their closest rivals," the university declared. "The potential significance for politicians, particularly when on the election trail, is also clear."
LikeAudience was available on a free website. But Kosinski and Stillwell had already formed their own spinoff company, Cambridge Personality Research Ltd., the year before, and by 2012 it was selling a more advanced product: Preference Tool.
The line of technology the Centre had begun contributing to involves first collecting from people (in the Centre's case, with their permission) academic-caliber psychological test scores, demographic information, and some record of their online behavior. Then the data can be analyzed to detect patterns and generate psycho-demographic profiles of other people, for whom only some subset of such information is available.
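As a rough illustration of that two-step pattern, the sketch below uses entirely synthetic data and invented coefficients: it fits a ridge regression mapping binary "like" indicators to a trait score using users who took a test, then predicts the trait for users who only left likes. Real systems of this kind were vastly larger and typically reduced dimensionality before regression; this is only a minimal stand-in for the general approach.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic user-by-like matrix: 1 if the user liked the page, else 0.
n_users, n_likes = 200, 50
likes = rng.integers(0, 2, size=(n_users, n_likes)).astype(float)

# Invented ground truth: the trait depends on a handful of likes, plus noise.
true_w = np.zeros(n_likes)
true_w[:5] = [1.5, -1.0, 0.8, -0.5, 1.2]
trait = likes @ true_w + rng.normal(0.0, 0.1, n_users)

# Users who took the psychological test form the training set; the rest
# stand in for people known only by their likes.
train, test = slice(0, 150), slice(150, None)

# Ridge regression via the normal equations (closed form).
lam = 1.0
A = likes[train].T @ likes[train] + lam * np.eye(n_likes)
w = np.linalg.solve(A, likes[train].T @ trait[train])

# Predict trait scores for users who never took the test, then check
# how well predictions track the (synthetic) true scores.
pred = likes[test] @ w
corr = np.corrcoef(pred, trait[test])[0, 1]
print(f"correlation on held-out users: {corr:.2f}")
```

The point of the sketch is the asymmetry the paragraph describes: test scores are needed only from a consenting subset, after which the fitted model can be applied to anyone whose likes are observable.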
By 2012, the Centre was publicizing LikeAudience in a list of its own "Products/Services" on its own website as "a breakthrough research and marketing tool," and announcing that Preference Tool was also now available from Cambridge Personality Research Ltd., "a spin-off of The Psychometrics Centre." The latter product, the Centre proclaimed, could "significantly improve targeting and reduce the cost of marketing campaigns," and was already used by leading online marketing agencies, for whom it had "increased campaign effectiveness by up to 140%."
Starting with the Facebook app myPersonality, launched by Stillwell as his own business in 2007 before his work with the Centre, Centre researchers over time collected the personal data and psychological-test performances of millions of people, including minors, anonymized the data (apparently insecurely), and stored it for commercial as well as academic studies.
Until about 2013, it was a very different world, Kosinski recalled in an interview. An awareness of the serious potential for harmful consequences from analyzing digital footprints was only beginning to emerge. That it did, he suggests, had a lot to do with the evolution of his own and the Centreâs work.
As the Centre's expertise grew more sophisticated, its researchers began speaking publicly about the risks involved with the powerful predictive methods that the Centre was trying out. By 2012, Kosinski, Stillwell, and their co-authors had begun including generic warnings about privacy in their papers. Then, in a major 2013 paper the two published with a Microsoft colleague in a prestigious journal, their warning was more detailed and specific, pointing out that such methods, in the wrong hands, could lead to security breaches whose exposure of sensitive personal information could even be life-threatening.
Still, they devoted much more attention in that 2013 article and much of their published work (at least until the Cambridge Analytica scandal broke) to describing the ingenious techniques they'd devised to improve their own predictive modeling. Meanwhile, neither Stillwell nor Kosinski, in a long string of research publications, listed conflicts of interest related to their start-up, Cambridge Personality Research (CPR). They founded it in 2010, were the only initial directors and shareholders, and did not file paperwork to dissolve the company, which ultimately was a bust, until July 2015. Both, in e-mails, indicated they did not consider the company a conflict to be reported.
The students' for-profit work seems to have dovetailed closely with the Centre's nonprofit research. CPR proclaimed that it was harnessing Cambridge University's "global leadership" in psychology, neuroscience, and statistics, and its "mountains" of data "for commercial use." CPR advertised that its statistical methods were based on years of research at the university, and touted its "Cambridge 100-factor model, a unique statistical tool to predict the behaviour of any individual or group. We can model and predict the personality of any brand, product, action, audience or keyword."
When Kosinski and Stillwell decided to concentrate on being academics and gave up on the company, the Centre not only described Preference Tool, the company's product, as its own but made it available to businesses and other clients who were willing to help fund the Centre's research in exchange.
Along the way, the Centre also pioneered another idea that Kogan later riffed on in a far more daring way: Help an ambitious company (in the Centre's case, the marketing-research firm CubeYou) by relaunching a Facebook app originally developed by Centre researchers in a way that would give the company access to the app's raw data, which included, at least according to the public website for the app, the personal data of friends of the app's users.
The original version of the YouAreWhatYouLike app was designed in 2011 by Kosinski and Stillwell, apparently allowing them to collect more data to improve their predictive models. Later it became an in-house project at the Centre. It collected people's Facebook likes to generate predictions for them of what those preferences revealed about their personality. But after the Centre began collaborating with CubeYou in 2013, the year before Kogan's infamous deal, a new version of the app was launched, with dramatically different terms of use.
YouAreWhatYouLike began requiring Facebook subscribers who wanted to use the app to allow both the Centre and CubeYou access to "your and your friends' public profile, likes, friends list, e-mail address, relationships, relationship interests, birthday, work history, education history, hometown, interests, current city and religious and political views." (The new version predicted the personalities of both app users and their friends.)
The Centre promised that it and CubeYou would anonymize data before sharing it or any derivatives from it with others. But by allowing friends' personal information to be collected and stored without their knowledge or permission, the website terms violated an ethical standard the Centre had long championed: that data should only be collected and stored from people who have given their consent.
John Rust, founder and recently retired director of the Centre, told The Nation that neither he nor the Centre had anything to do with CubeYou or the YouAreWhatYouLike app. Kosinski, the Centre's deputy director at least between 2012 and mid-2014, told The Nation that there was a partnership, but that it was just a proof of concept that didn't make any money. David Stillwell, who took over as deputy director when Kosinski left, did not answer e-mailed questions about it.
Vesselin Popov, who joined the Centre in 2013 and is now its executive director, declined to answer most questions for this article. But in an e-mail, he did state that the YouAreWhatYouLike app made by Kosinski and Stillwell was "separate" from an app of the same name that "was made by CubeYou and which used our algorithms based on our published PNAS paper for the purpose of showing predictions to users."
He can't speak for CubeYou, he added, "but the Centre only analyzed two of those fields that you've listed, which were the user's Likes and their friends' Likes. The Likes were sent to our API for the purpose of generating predictions and then those predictions were shown immediately to the user. We never stored or analyzed any of those other types of data."
The Centre, he wrote, only used friends' "likes" data to provide insights to participants, in a way Facebook allowed at the time. The Centre "has never used data from friends of users for research nor for any other purpose."
As late as March 2015, however, the site for the new version of the app emphasized the Centre's key role in this collaboration. It used the same URL as the original, and the Centre's website continued linking to that home page once the Centre-CubeYou version was live. The Privacy and Terms page didn't mention that your friends' data would be accessed. But the "How Does It Work" page did, and the About page also indicated that YouAreWhatYouLike was developed by Kosinski and Stillwell. The joint version apparently stopped operating by May of that year, after new Facebook rules took effect.
In 2018, after a CNBC report on YouAreWhatYouLike in the midst of the Cambridge Analytica scandal, the Centre posted a note on the app's website stating that CubeYou "created" the 2013 app and the Centre had administered the website URL, not mentioning the role touted earlier of Kosinski and Stillwell in developing this "one-click" personality test.
Federico Treu, founder and, in 2018, CEO of CubeYou, could not be reached for comment. According to CNBC's 2018 story, Treu "denied CubeYou has access to friends' data if a user opted in, and said it only connects friends who have opted into the app individually." It isn't clear, though, whether he meant what CubeYou had access to when its collaboration with the Centre was active, back in 2013-15, or what it had access to in 2018.
The Centre referred to its own data haul from YouAreWhatYouLike in marketing its products and consulting services to help fund its research. By 2014, its flyer touting "Benefits of Partnering with the Psychometrics Centre for Collection and Analysis of Big Data" claimed www.youarewhatyoulike.com, without specifying a particular version of the app, had generated "one of the largest databases of behavioral data in history." (The flyer added that the Centre's Apply Magic Sauce, its new product for predictive analytics, could "translate digital footprints into detailed psycho-demographic profiles.")
By 2016, the Centre was ready to move out of the Cambridge Department of Psychology. Its new home was the University of Cambridge Judge Business School, where it could help JBS Executive Education Limited, a related for-profit company wholly owned by the university, serve the company's network of global clients. The company makes annual "gift aid distributions" to Cambridge University, which totaled about £1.24 million in 2019, or about $1.6 million. Starting in 2016, then, such "gifts" apparently benefit from any earnings generated by the Centre's work helping businesses exploit its expertise in psychological testing and its "groundbreaking" applications of predictive Big Data techniques.
The business school has made clear that includes "psychological marketing with Big Data" and that the Centre can demonstrate how "communications tailored to someone's personality increases key metrics such as clicks and purchases."
Was it wise for a nonprofit institution, supported by public tax dollars, to start down this path in the first place, applying its academic prowess in psychological assessment in ways that empower online marketing? It's one thing for an institution to study such new technical territory from the outside. But when it also tries to market its results from the inside, how can it maintain its traditional role as a trustworthy, independent source of critical analyses for such new technologies? In effect, has Cambridge University compromised its own academic capacity to challenge the emerging market for machine-driven, personalized mass persuasion?
Such automated psychological manipulation now poses a significant social issue. Consider the analysis of Cathy O'Neil, author of Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. The growing integration of Big Data, artificial intelligence, and predictive behavioral modeling, she writes, lets data-crunching machines sift "through our habits and hopes, fears and desires," searching for patterns to predict how best to exploit our weaknesses and vulnerabilities. The result, she points out, is predatory advertising: ads that can "zero in on the most desperate of us at enormous scale," and sell those in greatest need "false or overpriced promises."
"They find inequality and feast on it," she adds in her book. "The result is that they perpetuate our existing social stratification, with all of its injustices."
To be fair, what happened at Cambridge University partly reflects the pressure on academics today to shape their research to meet market demands in the hope of successfully competing for the funding they need to be able to conduct any research at all.
For US research institutions, the immediate financial rewards of commercializing their research hit a record $3 billion in licensing income (before expenses) from related intellectual property in 2017, according to the AUTM US Licensing Activity Survey: 2017. The latest trend is for institutions to push employees and students to start their own companies: in the United States, at least 1,080 start-ups in each of 2017 and 2018. Universities often receive equity in these spinoffs or collect royalties and licensing fees more directly from companies. The start-up trend, up almost a third from five years earlier, further blurs the line between institutions' nonprofit mission and for-profit aspirations.
Politically, the surge in academic commercialism, including universities selling researchers' expertise as paid consultants and working intimately in joint projects with existing companies, has won bipartisan applause in the United States. The trend grew quickly after Congress passed the Bayh-Dole Act in 1980, which gave universities the authority to own and patent inventions funded by the federal government. And it has coincided with the declining role of governments in funding academic research. In the United States, for example, the federal government's share of total support for academic R&D peaked in 1973, at 69 percent. By 2016, it was down to 54 percent, according to the US National Science Board. Over that period, state and local support fell from about 10 percent to less than 6 percent, while business support doubled from about 3 percent to about 6 percent. Academic institutions' own resources, as the source for their R&D spending, rose from 11 percent to 25 percent of the total.
Unfortunately, nothing similar to the Bayh-Dole Act exists to push universities to partner with nonprofits, like worker organizations, environmental and other public-interest groups, and grassroots community organizations, to offset the pressures on institutions to focus on generating high-tech, for-profit products. The ethical perils of assuming, by default, that the pursuit of such products will serve universities' public-service mission seem obvious.
Yes, many beneficial products, such as lifesaving medications, have been commercialized from university labs: Twenty-four percent of drugs approved by the FDA between 1997 and 2005 were developed in university labs. But how differently might technologies have evolved over the past 40 years if institutions receiving research funds from taxpayers had been required to fully integrate a broad range of public-interest stakeholders to guide their technology-transfer efforts?
Imagine if panels of such stakeholders routinely scrutinized the potential harms as well as potential benefits, not just the commercial potential, of possible research applications. And what if such public-interest stakeholders routinely advised universities on how to better align their research agendas not primarily with commercial opportunities but with the most pressing human needs, such as economic justice, peace-building, climate action, and, as the current pandemic highlights, good health care for all? How Cambridge navigated such issues, related to the development of methods that exploit digital footprints, poses a cautionary note for other institutions.
Key actors in the Cambridge chapter that preceded Kogan's Facebook imbroglio include: Kosinski, now an associate professor at the Stanford University Graduate School of Business but still a Centre associate; Stillwell, now the Centre's academic director; Rust, a renowned psychometrician who retired from the Centre a year ago; and the university itself, including Cambridge Enterprise, its wholly owned subsidiary.
Centre researchers were not responsible for Kogan's work with SCL. Still, through the years, Stillwell and Kosinski, with Rust's support, did develop a powerful model for other graduate students and young researchers, such as Kogan, that may have been tempting to abuse.
First, it would entice huge numbers of Facebook subscribers, often from the age of 13 up, to take academic-quality personality and ability quizzes for "fun" and self-discovery. It would encourage them to let their scores, Facebook profiles, and likes be recorded by signing consent forms that refer in general terms to this very personal data's being anonymized and applied for "research" and "academic and business purposes." And it would push test-takers to invite their friends to participate, to help the app go viral. In effect, those who consented to leave their data behind became research subjects, albeit anonymous ones, in what could be an endless variety of academic and business experiments even years later.
Their names were not traded. But the applications derived from this data, or from this model, included products and consulting services with a special allure for marketers ever seeking to more precisely target emotional cues to move people to buy what they're selling, from cosmetics to candidates.
As for the university, consider its early, uncritical publicity about LikeAudience. It also cultivated a culture that assumed connecting even the work of students to commercial ventures is a good idea. In 2006, the university formed Cambridge Enterprise, which helps staff and students commercialize their expertise and inventions, including even "the seed of an idea," with the goal of benefiting society. That involves converting them into "opportunities that are attractive to business and investors."
By the end of 2010, Cambridge Enterprise held equity in 72 companies based on Cambridge research, was investing cash in three seed funds to jump-start Cambridge spinouts, and had garnered £8.4 million in income for the year from licensing research, equity realizations, and consultancies negotiated for academics with business and government.
It was in that cultural environment that Kosinski and Stillwell registered their own start-up. The idea of predicting psychological traits from people's digital footprints, especially by incorporating actual psychological tests, seemed like just the kind of novel, exciting idea the university was encouraging academics to take to market.
In fact, over the years, Cambridge Enterprise helped the Centre negotiate consultancies that supported its research. By 2015, Cambridge Enterprise had dubbed John Rust an "Enterprise Champion": someone to serve as a first point of contact for colleagues interested in finding out how to commercialize their work. The short bio it posted for Rust, as a member of its team, noted the Centre's expertise in implementing "human predictive technology" at low cost.
Since the 1980 Bayh-Dole Act, universities in many other countries have also been under political pressure to work more closely with businesses to boost economic growth. They're also under financial pressure to do so to fund their research enterprises. At first, that mainly meant persuading corporations to support campus research, raising concerns about the conflicts of interest that such corporate sponsorship posed. But the push at Cambridge for faculty and graduate students to directly collaborate in joint projects with companies and to form their own companies was part of the academic evolution to increasingly hands-on forms of commercial activity.
Rust recalls that this shift to promoting academic start-ups was just occurring around 2010. In moving so quickly to form a start-up, Kosinski and Stillwell were part of a large cohort of academics doing the same thing. "It was what you would do if you had a new idea," he says.
Kosinski says he was simply trying to survive as a second-year graduate student on a disposable income of about £200 a month. So yes, trying to make money from his own bright idea made sense to him at the time. "I was a young student excited about all of the opportunities," he recalls.
The university's interest in trying to help commercialize the results of the Centre's work was also gratifying. Most people just saw "an old geezer and a couple of PhD students" messing around with Facebook, Rust says; the three of them, however, "could see that it was revolutionary."
It's true that early on they publicized the idea that the Centre's research could be used to help political candidates. But they never anticipated the actual manipulation of elections, Rust adds.
Like most start-ups, the Centre's start-up, Cambridge Personality Research, did not take off. Specializing in "personality-based targeting" for marketing, it seems to have made only a bit of money. Kosinski, Stillwell, and a partner they brought in to run the business charged ad agencies a monthly fee of $999 to use Preference Tool. But by the time the company was dissolved in 2015, public records indicate the three had shared £5,000 in dividends, received advances of more than £10,000 each, and ended up not only paying back that amount but also chipping in a few thousand more each, to pay off bills.
It didn't take long, Kosinski says, for Stillwell and him to decide they would rather focus on their academic careers than continue trying to make it as private entrepreneurs. They closed CPR down for good in 2015. He adds that they never patented or copyrighted the related intellectual property, which is all in the hands of Cambridge University. Their primary interest and motivation, he emphasizes, was research, not making money.
The furor over Facebook breaches has only reinforced the wisdom of their decision. In fact, Kosinski calls himself a whistle-blower, saying he spent months gathering evidence about the scandal to provide to the media, including The Guardian, which first broke the news in December 2015 that Cambridge Analytica had psychologically profiled and targeted American voters with Kogan's help. The Guardian recently added a credit to Kosinski, as a source, at the end of that story.
But the Centre's unwitting connection to the scandal has not escaped official notice. In a 2018 report on the use of data analytics for political purposes, in response to Facebook privacy breaches, the UK's Information Commissioner's Office noted the Centre's role in pioneering the targeting techniques at the core of its investigation. It also questioned whether universities have been blurring the lines between their own research and the for-profit enterprises of their faculties and students in ways that place the protection of research subjects' data in jeopardy.
Cambridge University apparently made an assumption that may have eased the way for such blurring: that universities can fulfill their obligation to make sure their research serves the public interest by rapidly converting it to products of interest to the for-profit market. Across the United States, that trend is common as well. Many US universities have their own technology-transfer offices charged, like Cambridge Enterprise, with making sure that intellectual property created by faculty or students with university resources is claimed by the institution and commercialized as quickly as possible.
Josh Farley, professor of ecological economics at the University of Vermont, explains how this academic "obsession" with taking part in for-profit markets conflicts with public interests. Much of the research needed to meet the most pressing social and ecological needs may not be easily converted into private profits. Markets, he notes, cannot be counted on to direct resources to those who have unmet needs, only to those who have both money and unmet wants. So it's irrational, he argues, for universities to assume they can rely on the logic of the market alone to generate the knowledge and technologies to, for example, help the world's poor meet their basic needs, or to disseminate such resources fairly. "Universities," Farley laments, "are not for the common good anymore. We're for corporate profits."
Cambridge University declined to answer specific questions for this story, except to confirm that Kogan has not been employed there since September 2018. Instead, it sent a short statement emphasizing its "rigorous and independent ethical review process that is proportionate to the potential risk," and that where "appropriate, research is reviewed by a School-level or University Research Ethics Committee."
But it also noted that its vice chancellor has established an Advisory Working Group on Research Integrity. That group will review how the institution manages research involving people's personal data and conflicts of interest that stem from Cambridge employees' private enterprises. It will also review the training the university provides to its staff and students to safeguard the integrity of its research.
In the last few years, Rust, Kosinski, and Stillwell have been presenting themselves as prophets. The Centre's relentless focus on more accurate predictive techniques, they say, reflected their desire to warn the public about digital privacy issues. Even years ago, they note, they were also emphasizing the need for people to have control over their own personal information and for any online tracking to be transparent. By 2017, the Centre had posted a statement disavowing any connection to Kogan's work with SCL and summarizing the "strict" ethical requirements for anyone who wants to use its current predictive tool, Apply Magic Sauce. For example: "Nobody should have predictions made about them without their prior informed consent."
Kosinski recounts rejecting "many lucrative offers, in order to keep my integrity," and points to his call in a 2013 Financial Times op-ed for new policies and tools to help people control their own data.
Stillwell lists a string of governmental data-protection reports that have cited their research in calling for new privacy legislation. He and Kosinski also cite improved practices at social media companies, like Facebook, in response to their work. "Far from acting as what you call academic capitalists," Stillwell argued in an e-mail, "many of our published research papers have attracted considerable media attention which has embarrassed powerful social media companies into changing their practices for the better.... I believe that we played a vital role in changing the conversation around social media data."
Cambridge is no outlier in terms of encouraging businesses to pursue the commercial potential of such powerful new tools despite their ethical risks and temptations. Consider Stanford's integration of psychological targeting into its trainings for business executives. Nearly a year after The Guardian's first story about Cambridge Analytica, Stanford's Graduate School of Business featured Kosinski at an on-campus Digital Marketing Conference, where he spoke to industry professionals and academics about "Psychological Targeting and Tailoring in Advertising." That 2016 conference was sponsored by the business schools of Stanford and another university.
Kosinski also will be on the faculty of a one-week, $13,000 training this summer that is part of the Stanford business school's Executive Education program: "Harnessing AI for Breakthrough Innovation and Strategic Impact." His session will focus on "Training Artificial Intelligence to Understand Humans," including how marketing and other industries can reap the revolutionary potential of applying Big Data methods to large populations to assess intimate psycho-demographic traits, while also avoiding ethical pitfalls, such as "significant privacy risks." Overall, the training will help well-heeled participants, including decision-makers from "any industry" and "any country," learn how to use artificial intelligence "to gain a competitive edge," while also evaluating "ethical and social implications."
Stanford's business school had this comment: "Dr. Kosinski has been tirelessly working on educating scholars, executives and the general public about the privacy risks posed by the trend you noted: the 'fast, early commercialization of research to bring to market powerful new technologies.'"
Institutions across the United States now routinely encourage researchers to develop and commercialize powerful technical advances before full-scale, public, probing ethical analyses. Their institutional review boards (IRBs) conduct only narrower reviews. They consider, for example, any immediate safety issues a particular research proposal poses and the impact on any human research subjects. But now that universities often have potential intellectual property at stake, the independence of IRBs is an issue. Nor are they designed for critically analyzing a proposed project's larger context: its broader social and ecological implications, especially in terms of the whole line of unprecedented technical ambitions it's intended to advance.
Considering such broader questions in the IRB process "is really off the table," notes developmental biologist Stuart Newman, a professor at New York Medical College. Newman is co-author, with bioethics historian Tina Stevens, of Biotech Juggernaut: Hope, Hype, and Hidden Agendas of Entrepreneurial BioScience. He points to the academic life sciences as an area that may be especially vulnerable to ethical breaches today, given its intense commercialization.
The growth in university start-ups is of particular concern. When researchers and students are pressed by universities to start their own companies, they may or may not care how much money they will make if those companies commercially succeed. But the very act of founding a start-up, or in a university's case, owning related intellectual property or equity, implies an ideological bias that the product or service one has in mind is worth buying, and better than alternatives already on the market. Overcoming such bias in future related research or in scholarly debates, especially in institutional deliberations about potential harm from the line of products under development, may be difficult.
The purpose of this case study is not to impugn the character or question the integrity of any individual researcher or institution. Nor is it to suggest that any of the researchers mentioned were primarily driven by financial considerations, rather than by an interest in advancing research in an area that fascinated them and, as its dangers became clear, a desire to warn of its potential abuse.
Instead, the intent here is to spur critical reflection about an academic culture that emphasizes the fast conversion of disruptive ideas into powerful new technical applications for sale. Critics have long warned that individual researchers' financial interests can unintentionally bias the outcome of studies. But what of institutions' own emphasis on commercializing the expertise, ideas, and inventions of their staffs and students? Does such a culture foster ethical naïveté and unintentionally, but predictably, help set the stage for future ethical quagmires? Likewise, does excitement about ideas that promise strong returns to investors divert academic resources from less profitable ideas of greater social and ecological urgency?
The Covid-19 pandemic underlines the relevance of such questions, as researchers race to develop treatments and vaccines. The fruits of such research should be not only effective but also safe and accessible to all. Must they also promise ample profits?
