In the summer of 2007, while the scientist Marc Hauser was in Australia, Harvard University authorities entered his lab on the tenth floor of William James Hall, seizing computers, videotapes, unpublished manuscripts and notes. Hauser, then 47, was a professor of psychology, organismic and evolutionary biology, and biological anthropology. He was popular with students and a prolific researcher and author, with more than 200 papers and several books to his name. His most recent book, Moral Minds (2006), discusses the biological bases of human morality. Noam Chomsky called it “a lucid, expert, and challenging introduction to a rapidly developing field with great promise and far-reaching implications”; for Peter Singer, it is “a major contribution to an ongoing debate about the nature of ethics.”
Three years after the seizure of materials from Hauser’s lab, the Boston Globe leaked news of a secret investigating committee at Harvard that had found Hauser “solely responsible” for “eight counts of scientific misconduct.” Michael Smith, Harvard’s dean of the Faculty of Arts and Sciences, confirmed the existence of the investigation on August 20, 2010. Hauser took a leave of absence, telling the New York Times, “I acknowledge that I made some significant mistakes,” and adding that he was “deeply sorry for the problems this case had caused to my students, my colleagues and my university.” At the time he was working on a new book titled Evilicious: Why We Evolved a Taste for Being Bad. In February 2011 a large majority of the faculty of Harvard’s psychology department voted against allowing Hauser to teach in the coming academic year. On July 7 he resigned his professorship effective August 1. Hauser has neither publicly admitted to nor denied having engaged in scientific misconduct.
Science is driven by two powerful motivations—to discover the “truth,” while acknowledging how fleeting it can be, and to achieve recognition through publication in prominent journals, through grant support to continue and expand research, and through promotion, prizes and memberships in prestigious scientific societies. The search for scientific truth may be seriously derailed by the desire for recognition, which may result in scientific misconduct.
The National Institutes of Health (NIH) and the National Science Foundation (NSF), the main sources of research funds in the United States, have defined scientific misconduct in research as involving fabrication, falsification or plagiarism. “Fabrication” is making up data; “falsification” is altering or selecting data. This definition of misconduct has been adopted by other federal agencies and most scientific societies and research institutions. Explicitly excluded from the category of scientific misconduct are “honest error or differences of opinion”; other types of misconduct, such as sexual harassment, animal abuse and misuse of grant funds, are targeted by other prevention and enforcement mechanisms.
Scientific misconduct is not necessarily a sign of a decline of ethics among scientists today or of the increased competition for tenure and research funds. Accusations of scientific misconduct, sometimes well supported, pepper the history of science from the Greek natural philosophers onward. Ptolemy of Alexandria (90–168), the greatest astronomer of antiquity, has been accused of using without attribution observations of stars made by his predecessor Hipparchus of Rhodes (c. 190–120 BCE), who himself had used much earlier Babylonian observations as if they were his own. Isaac Newton used “fudge factors” to better fit data to his theories. In his studies of hereditary characteristics, Gregor Mendel reported ratios from his pea-plant crossings so close to theoretical perfection that they are statistically very unlikely. When Mendel crossed hybrid plants, he predicted and found that exactly one-third were pure dominants and two-thirds were hybrids. The improbability of obtaining such exact proportions was first pointed out in 1911 by R.A. Fisher, the founder of modern statistics and a founder of population genetics, when he was an undergraduate at Cambridge University. Though Charles Darwin has been cleared of accusations of nicking the idea of natural selection from Alfred Russel Wallace, he seems to have only reluctantly credited some of his predecessors.
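Fisher’s objection can be illustrated with a back-of-the-envelope binomial calculation (the sample size of 600 plants below is purely illustrative, not Mendel’s actual count): even when the true proportion of pure dominants is exactly one-third, a sample that splits exactly one-third/two-thirds is itself improbable.

```python
from math import comb

def prob_exact_count(n, k, p):
    """Binomial probability of exactly k 'successes' in n trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# If one-third of the offspring of hybrid crosses are truly pure dominants,
# how likely is a sample of 600 plants to split exactly 200/400?
p_exact = prob_exact_count(600, 200, 1/3)
print(p_exact)  # only a few percent -- a perfectly exact split is itself unlikely
```

The point is not that Mendel’s theory predicted the wrong proportions, but that real samples fluctuate: results that land too close to the expected value, too often, are themselves statistical evidence of selection or trimming.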
The first formal discussion of scientific misconduct was published in 1830 by Charles Babbage, who held Newton’s chair at Cambridge and made major contributions to astronomy, mathematics and the development of computers. In Reflections on the Decline of Science in England and on Some of Its Causes, Babbage distinguished “several species of impositions that have been practised in science…hoaxing, forging, trimming, and cooking.” An example of “hoaxing” would be the Piltdown man, discovered in 1911 and discredited in 1953; parts of an ape and human skull were combined, supposedly to represent a “missing link” in human evolution. Hoaxes are intended to expose naïveté and credulousness and to mock pseudo wisdom. Unlike most hoaxes, Babbage’s other “impositions” are carried out to advance the perpetrator’s scientific career. “Forging,” which he thought rare, is the counterfeiting of results, today called fabrication. “Trimming” consists of eliminating outliers to make results look more accurate, while keeping the average the same. “Cooking” is the selection of data. Trimming and cooking fall under the modern rubric of “falsification.” Scholarly conventions and standards of scientific probity were probably different in the distant past, yet the feuds, priority disputes and porous notions of scientific truthfulness from previous centuries seem contemporary.
* * *
In the late 1960s I was eating lunch in William James Hall with a few fellow assistant professors in the Harvard psychology department when a woman named Patricia Woolf sat down at our table. Unbeknownst to us, Woolf was a pioneer in the study of scientific misconduct. She asked whether we had heard anything about the fabrication of data by one of our colleagues. When we said yes, she asked what we were going to do about it. One of us said something like, “Look, our chairman, Richard Herrnstein, is a war criminal. Why should we worry about T—— making up data?” I guess we didn’t take the issue that seriously. At that time Herrnstein was training pigeons to recognize people and sampans in photographs of jungle foliage. The work was supported by the Limited War Laboratory of the US Army and was done off-campus because Harvard prohibited secret research. (With Charles Murray, Herrnstein would later write The Bell Curve, which made incendiary claims about purported racial differences in intelligence.) Herrnstein subsequently managed to help the miscreant find a job elsewhere, forestalling the possibility of scandal at Harvard.
In the past few decades there have been a number of studies asking scientists at every level of research in a variety of fields, and under the cover of anonymity, whether they had engaged in fabrication, falsification or plagiarism, or had direct evidence of such misconduct by others. Although the results were variable and involved different survey response rates and methodologies, the overall picture is disturbing.
In a large and pioneering survey of science graduate students and faculty at ninety-nine universities, the historian of biology and ethicist Judith Swazey and her colleagues found that “44 percent of students and 50 percent of faculty” had knowledge of two or more types of misconduct, broadly defined; about 7 percent had “observed” or had “direct knowledge” of faculty falsifying data. In a survey of its members, the International Society of Clinical Biostatistics found that 51 percent of respondents knew of at least one fraudulent project in the previous ten years. Of 549 biomedical trainees at the University of California, San Diego, 10 percent said they had “firsthand knowledge of scientists’ intentionally altering or fabricating data for the purpose of publication.” In a similar survey, 8 percent of biological and medical postdoctoral fellows at the University of California, San Francisco, said they had observed scientists altering data for publication. The American Association for the Advancement of Science surveyed a random sample of its members, and 27 percent of the respondents believed they had encountered or witnessed fabricated, falsified or plagiarized research over the previous ten years, with an average of 2.5 examples. A study by the director of intramural research at the Office of Research Integrity (ORI) of the Department of Health and Human Services found that of 2,212 researchers receiving NIH grants, 201 reported instances of likely federally defined misconduct over a three-year period, of which 60 percent were fabrication or falsification and 36 percent plagiarism. Noting that in 2007 155,000 personnel received research support from the NIH, the authors suggest that under the most conservative assumptions, a minimum of 2,325 possible acts of research misconduct occur each year. 
Finally, in a meta-analysis of eighteen studies, 2 percent of scientists admitted to having fabricated or falsified data, and more than 14 percent said they had observed other scientists doing so.
Scientists guilty of misconduct are found in every field, at every kind of research institution and with a variety of social and educational backgrounds. Yet a survey of the excellent coverage of fraud in Science and recent books on the subject—ranging from Horace Freeland Judson’s The Great Betrayal: Fraud in Science (2004) to David Goodstein’s On Fact and Fraud: Cautionary Tales From the Front Lines of Science (2010)—reveals a pattern of the most common, or modal, scientific miscreant. He is a bright and ambitious young man working in an elite institution in a rapidly moving and highly competitive branch of modern biology or medicine, where results have important theoretical, clinical or financial implications. He has been mentored and supported by a senior and respected establishment figure who is often the co-author of many of his papers but may not have been closely involved in the research.
Scientific misconduct is often difficult to detect. Although grant applications and research papers submitted to prestigious journals are rigorously reviewed, it is very difficult for a reviewer to uncover fabrication or falsification. Attempts at “replication”—repeating someone else’s experiment—provide another weak filter. Journals are reluctant to publish results of attempts at replication, whether positive or negative, thereby discouraging such attempts. In any case, particularly in the complex world of biology, it is often hard to repeat a specific experiment because of the multitude of differences, often unknown, between the original and the replication. Failure to replicate does not demonstrate fraud; however, it does indicate a problem to be looked into. Sometimes fraud is detected by a careful examination of published papers revealing multiply published or doctored illustrations; more often it is uncovered by the perpetrator’s students or other members of his laboratory.
* * *
The serious involvement of the government in policing scientific misconduct began only in 1981, when hearings were convened by Al Gore, then a Congressman and chair of the investigations and oversight subcommittee of the House Science and Technology Committee, after an outbreak of egregious scandals. One was the case of John Long, a promising associate professor at Massachusetts General Hospital who was found to have faked cell lines in his research on Hodgkin’s disease. Another case involved Vijay Soman, an assistant professor at Yale Medical School. Soman plagiarized the research findings of Helena Wachslicht-Rodbard, who worked at the NIH. A paper Wachslicht-Rodbard had written about anorexia nervosa and insulin receptors had been sent for publication review to Soman’s mentor, Philip Felig, the vice chair of medicine at Yale. Felig gave it to Soman, who ghostwrote a rejection for Felig. Soman then stole the idea of Wachslicht-Rodbard’s paper and some of its words, fabricated his own supporting “data” and published his results with Felig as co-author.
At Gore’s hearings there was a parade of senior scientists and scientific administrators claiming that scientific fraud was not a problem. It involved only a few “bad apples,” they insisted, and in any case the scientific community could be trusted to tackle the problem and the government should steer clear of restricting scientific freedom. As Philip Handler, then president of the National Academy of Sciences, the most prestigious organization of US scientists, put it, “The matter of falsification of data…need not be a matter of general societal concern. It is rather a relatively small matter” in view of the “highly effective, democratic self-correcting mode” of science. After more well-publicized scandals, the federal Office of Scientific Integrity (later the ORI) was established to investigate allegations of scientific fraud in research supported by the NIH. The NSF established a similar office for its grantees.
The NIH and NSF now require all institutions that apply for research support to have a set of procedures for addressing allegations of scientific misconduct. In brief, the usual drill is that after an allegation is made to a department chair or dean, an inquiry is undertaken to determine if a formal investigation is warranted. If so, it is carried out by a small committee of faculty members from other departments. During both phases the accused scientist is given opportunities to respond, and the entire investigation is supposed to be confidential. The committee has full access to the accused scientist’s computer files, unpublished data and notes from research supported by the government.
If the investigation finds misconduct, the university can pursue a variety of actions, ranging from the removal of the scientist from the tarnished project to the withdrawal of the scientist’s published papers to his firing. The ORI or an equivalent federal agency then conducts its own investigation. It has the power to deny future research funds to the disgraced scientist. Federal prosecution for misuse of research funds is also a possibility. Partial or total secrecy is often maintained until after the federal investigation is completed. Sometimes the process of resolving allegations of scientific misconduct can be prolonged, as appeals of ORI decisions are possible. More recently, the NIH and the NSF have required training in “responsible conduct of research” for all students receiving research support. As a result there has been a spate of books, symposiums, workshops and research grants on the subject. In my teaching of the subject at Princeton and Berkeley I have used F.L. Macrina’s excellent Scientific Integrity, now in its third edition (2005). It contains historical background, current regulations and cases for class discussion in a range of subjects, including authorship, peer review, mentoring, use of animals and humans as subjects, record keeping and conflict of interest and of conscience.
* * *
Marc Hauser has worked at the exciting interface of cognition, evolution and development. As he explained on his website, his research has focused on “understanding which mental capacities are shared with other nonhuman primates and which are uniquely human,” and on determining “the evolutionarily ancient building blocks of our capacity for language, mathematics, music and morality.” Hauser has worked primarily with rhesus monkeys, cotton-top tamarins and human infants. Cotton-top tamarins are small South American monkeys similar to marmosets and, like them, are very cute indeed. (I too have worked with marmosets and rhesus monkeys.) Hauser’s laboratory was virtually the only one in the world working on cognition in tamarins, which made replication of his work almost impossible. In his studies comparing human infants with monkeys, Hauser and his research team would usually collect the monkey data, and his collaborators—such as the distinguished developmental psychologists Susan Carey, chair of the Harvard psychology department, and Elizabeth Spelke, another Harvard colleague—would collect the human data. Hauser also wrote papers with major figures in related fields, such as Chomsky in linguistics and Antonio Damasio in neuroscience. Hauser had joint federal grants with most of these senior figures.
A key motivation in Hauser’s work has been to demonstrate that monkeys have cognitive abilities previously thought to be present only in the great apes and humans. In an important 1970 study, Gordon Gallup Jr., now of the State University of New York, Albany, showed that chimpanzees can recognize themselves in a mirror. Gallup put a red spot on the forehead of chimpanzees, and when given a mirror most of the animals touched the red spot. Subsequent studies showed that the great apes (chimpanzees, bonobos, orangutans, gorillas) and humans more than 18 months old could pass the mirror test of self-recognition but not lesser apes like gibbons or the wide range of monkeys tested. In 1995 Hauser published a claim that his cotton-top tamarins could pass the test. Two years later Gallup co-wrote an attack on Hauser’s methodology. He later told the Boston Globe that when he examined some of Hauser’s videotapes of the experimental results (other tapes were said to be lost), he found that Hauser had no evidence for his claims. Hauser tried to rebut Gallup in print but admitted in a 2001 article that he could not repeat his results; however, he never retracted his original article.
Meanwhile, experiments with elephants, dolphins, orcas and magpies have shown that these animals too can recognize themselves in a mirror, unlike any monkey. The magpie achievement is not surprising, as recent research has shown that magpies and other corvids, such as jays and crows, have a variety of cognitive abilities previously seen only in the great apes, such as tool use, foresight and role taking. These are cases of convergent evolution: apes and corvids do not have any common ancestor with these high-level cognitive skills; they arose in separate lineages. (Aesop was there first.) Darwin had tried to remove the human from the center of the biological universe, stressing its psychological and physical continuity with other living beings. Hauser seems to want to put humans and other primates, even the cotton-top tamarin, on a cognitive plane above other animals, like dolphins and crows, that have sophisticated cognitive skills but are not in the primate lineage.
* * *
The inquiry leading to Harvard’s 2007 investigation of Hauser was triggered by a delegation of three researchers in his lab. We know almost nothing from Hauser’s or Harvard’s statements about the nature of the students’ charges. However, an article by Tom Bartlett published in The Chronicle of Higher Education in August 2010 offers a glimpse into Hauser’s lab. It is based on a document provided to Bartlett, on condition of anonymity, by a former research assistant of Hauser’s. The document, Bartlett writes, “is the statement the research assistant gave to Harvard investigators in 2007.” As he explains, “one experiment in particular [had] led members of Mr. Hauser’s lab to become suspicious of his research and, in the end, to report their concerns about the professor to Harvard administrators.”
This experiment used a standard method in child and animal studies: a sound pattern is played repeatedly over a sound system and then changed; if the animal then looks at the sound speaker, the implication is that the animal noticed the change. In Hauser’s experiment, the lab assistants played three tones (in a pattern like A-B-A). After the monkeys repeatedly heard this pattern, the scientists would modify it and observe whether the monkeys noticed the change in the sound pattern. Pattern recognition of this sort is considered to be a component of language acquisition.
The monkey’s behavior was videotaped and later “coded blind”—that is, the experimenters, without knowing which sound was being played, judged whether the monkey was looking at the speaker. When coding is done blind and independently by two observers, and the two sets of observations match closely, the results are assumed to be reliable.
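Agreement between two blind coders is typically quantified with a chance-corrected statistic such as Cohen’s kappa. (Whether Hauser’s lab used kappa or simple percent agreement is not stated; this is only a sketch of the general idea, with made-up trial codes.)

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: agreement between two raters, corrected for
    the agreement expected by chance alone."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # Proportion of trials on which the two coders gave the same label.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Agreement expected if each coder labeled trials at random
    # with their own observed label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical codes: 1 = "looked at speaker", 0 = "did not look".
looks_a = [1, 1, 0, 1, 0, 1]   # coder A
looks_b = [1, 1, 0, 0, 0, 1]   # coder B disagrees on one trial
print(cohens_kappa(looks_a, looks_b))
```

Kappa is 1.0 for perfect agreement and near 0 when the coders agree no more often than chance; the suspicion in Hauser’s lab arose precisely because independent recodings of the same tapes diverged sharply from his.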
Bartlett went on to explain that, according to the document that had been provided by the research assistant,
the experiment in question was coded by Mr. Hauser and a research assistant in his laboratory. A second research assistant was asked by Mr. Hauser to analyze the results. When the second research assistant analyzed the first research assistant’s codes, he found that the monkeys didn’t seem to notice the change in pattern. In fact, they looked at the speaker more often when the pattern was the same. In other words, the experiment was a bust.
But Mr. Hauser’s coding showed something else entirely: He found that the monkeys did notice the change in pattern—and, according to his numbers, the results were statistically significant. If his coding was right, the experiment was a big success.
The second research assistant was bothered by the discrepancy. How could two researchers watching the same videotapes arrive at such different conclusions? He suggested to Mr. Hauser that a third researcher should code the results. In an e-mail message to Mr. Hauser, a copy of which was provided to The Chronicle, the research assistant who analyzed the numbers explained his concern. “I don’t feel comfortable analyzing results/publishing data with that kind of skew until we can verify that with a third coder,” he wrote.
A graduate student agreed with the research assistant and joined him in pressing Mr. Hauser to allow the results to be checked, the document given to The Chronicle indicates. But Mr. Hauser resisted, repeatedly arguing against having a third researcher code the videotapes and writing that they should simply go with the data as he had already coded it. After several back-and-forths, it became plain that the professor was annoyed.
“i am getting a bit pissed here,” Mr. Hauser wrote in an e-mail to one research assistant. “there were no inconsistencies! let me repeat what happened. i coded everything. Then [a research assistant] coded all the trials highlighted in yellow. we only had one trial that didn’t agree. i then mistakenly told [another research assistant] to look at column B when he should have looked at column D…. we need to resolve this because I am not sure why we are going in circles.”
According to the document provided to the Chronicle, the graduate student and the research assistant who analyzed the data decided to re-examine the tapes without notifying Hauser. They coded the results without consulting with each other, and both sets of data showed that the monkeys didn’t seem to react to the change in patterns. When they then reviewed Hauser’s results, they found that what he had recorded “bore little relation” to what they had seen on the videotapes. The two did not think the issue was a matter of differing interpretations. As Bartlett put it, they thought Hauser’s data were “just completely wrong.” As news of their experience spread around the lab, according to the document, other lab members indicated they too had experienced episodes in which Hauser “reported false data and then insisted that it be used.”
Several other people who had worked in Hauser’s lab during the period in which he produced the research investigated by Harvard, and who asked to remain unnamed, confirmed for me the account offered by the Chronicle. They provided further details and examples of a general pattern: Hauser fabricated and falsified data, and pressured others, particularly undergraduates and other junior members of the lab, to do the same to obtain the desired results. Eventually, three researchers in the lab presented evidence to the university’s ombudsman and then to the dean’s office, prompting the inquiry that led to the formal investigation.
* * *
A week after the Boston Globe disclosed that Harvard had found Hauser responsible for scientific misconduct, Dean Smith sent a letter to Harvard faculty confirming the revelations. The letter, which remains Harvard’s only public account of Hauser’s misdeeds, went into great detail about Harvard’s procedures, stressing that “the work of the investigating committee as well as its final report are considered confidential to protect both the individuals who made the allegations and those who assisted in the investigation.” It was less than forthcoming on details of the “eight instances of scientific misconduct” that Hauser was claimed to be “solely responsible” for. The most Smith would reveal was that “while different issues were detected for the studies reviewed, overall, the experiments reported were designed and conducted, but there were problems involving data acquisition, data analysis, data retention, and the reporting of research methodologies and results.” His letter provided no specific information on the nature of the misconduct, nor did it indicate how the committee knew that Hauser was “solely responsible,” even though all the papers known to be disputed, as well as the vast bulk of Hauser’s publications, have co-authors.
One of the eight instances of scientific misconduct concerned a paper published in Cognition in 2002, which Smith explained “has been retracted because the data produced in the published experiments did not support the published findings.” In the second instance of scientific misconduct, a correction was published to a paper that appeared in the Proceedings of the Royal Society in 2007. In the third instance, concerning a paper that appeared in Science in 2007, Smith wrote, “The authors continue to work with the editors.” Smith then explained that “the investigating committee found problems” with “five other studies that either did not result in publications or where the problems were corrected prior to publication.” Presumably one of them was the experiment involved in the recognition of sound patterns by tamarins that was the subject of the contretemps between Hauser and his research assistants reported by The Chronicle of Higher Education.
The Cognition paper tested whether cotton-top tamarins, like human infants, could rapidly generalize “patterns that have been characterized as abstract algebraic rules,” an ability that could be important in language acquisition. The editor of Cognition, Gerry Altmann, received information from Harvard that led him to believe the paper was a case of scientific misconduct. As Altmann explained on his blog this past October:
As I make very clear in this blog…the information I have received, when taken at face value, leads me to maintain my belief that the data that had been published in the journal Cognition was effectively a fiction—that is, there was no basis in the recorded data for those data. I concluded, and I continue to conclude, that the data were most likely fabricated (that is, after all, what a fiction is—a fabrication).
Two months earlier Altmann had told the Boston Globe that Hauser’s Cognition paper “reports data…but there was no such data existing on the videotape. These data are depicted in the paper in a graph. The graph is effectively a fiction and the statistic that is supplied in the main text is effectively a fiction.” And “if it’s the case the data have in fact been fabricated, which is what I as the editor infer, that is as serious as it gets.”
The three whistleblowers apparently had not been involved in carrying out this experiment. Rather, they chose to re-examine it to see whether the pattern of misconduct they had observed could be found in Hauser’s other papers. This raises two crucial questions: Are other studies of Hauser’s that Harvard did not examine also flawed? Did the Harvard committee look into studies other than those brought to them by the whistleblowers?
The second and third “instances” concerned papers about the ability of chimpanzees, rhesus monkeys and cotton-top tamarins to understand hand gestures made by humans, the implication being that nonhuman primates have the ability to “read the minds of others,” a cognitive skill previously thought to be confined to humans. Hauser and his co-authors informed the editors of the two journals, the Proceedings of the Royal Society and Science, that they had repeated their experiments and verified their original conclusions. The Proceedings of the Royal Society published an addendum to that effect. One of the co-authors explained in Science that the Harvard investigating committee “determined that there are no field notes, records of aborted trials, or subject identifying information associated with the rhesus monkey experiments; however, the research notes and videotapes for the tamarin and chimpanzee experiments were accounted for.” Hauser and one of his co-authors then replicated the rhesus monkey experiments, and after anonymous review the new results were published in Science on September 7, 2010. That Hauser and his co-workers obtained data supporting the conclusions of the original papers does not indicate whether the original experiments were carried out properly. This point cannot be stressed enough. As Gordon Gallup Jr. told the Harvard Crimson this past May, “Ultimately it’s not a question of whether he can replicate his findings—it’s whether other people can.” Incidentally, since Hauser published the two papers, dogs have been shown to be better than chimpanzees at interpreting human gestures. Sic transit gloria, the primacy of primates in cognition.
* * *
Hauser has recently been drawn into another controversy about the integrity of his published work. Gilbert Harman, a professor of philosophy at Princeton, has posted on his website a paper alleging that in Moral Minds Hauser draws on ideas developed in several works of John Mikhail’s without making proper acknowledgment. (Mikhail is now a professor of law and philosophy at Georgetown University. Harman’s analysis, which includes a list of passages he thinks are questionable, is at princeton.edu/~harman/Mikhail%20and%20Hauser.pdf.) Harman says that the works of Mikhail’s in question are his PhD dissertation (2000) at Cornell, his JD thesis at Stanford (2002) and a review in the Stanford Law Review (2002).
The Moral Minds controversy isn’t about Hauser passing off as his own phrases or entire sentences lifted from Mikhail’s writings. Rather, as Harman writes, “the section on Plagiarism in the Publication Manual of the American Psychological Association says, ‘The key element of this principle is that an author does not present the work of another author as if it were his own. This can extend to ideas as well as written words.’ (The italics in these quotations are mine.)” Harman points out that in Moral Minds, “Hauser presents as his own novel discovery and as the central idea of the book the very same analogy between universal linguistic grammar and universal moral grammar” that Mikhail had proposed in his dissertation. Furthermore, according to Harman, Hauser “says that an unconscious action analysis is a precondition and preliminary step for judging moral actions to be permissible, forbidden, or obligatory and contrasts this with a purely emotion based account. He does not say that this is Mikhail’s (2000) account…developed further in Mikhail (2002a).”
One line of Harman’s argument concerns what philosophers call “trolley problems,” or dilemmas like whether pushing one person in front of a train to avoid the death of five others is morally permissible. In Moral Minds Hauser discusses four trolley problems, involving “Denise,” “Frank,” “Ned” and “Oscar.” In his dissertation Mikhail gives an account of trolley problems with the same names; Hauser does not cite Mikhail, from whom he must have taken at least two of these examples. Harman writes that Hauser “notes the same parallel between immediate linguistic judgments and immediate moral judgments without referring to Mikhail’s (2000)…similar but earlier discussion. Similarly Hauser notes that the linguistic analogy suggests there are innate constraints on moral development that might make different moral grammars mutually incomprehensible, without referring to Mikhail’s earlier discussion of the same point.”
Harman has posted a reply from Hauser, who says that “Mikhail is cited repeatedly in Moral Minds, and singled out in the Acknowledgments as someone who greatly influenced my thinking.” (Mikhail has not yet replied.) Hauser adds, “These accusations confuse ordinary intellectual influence for malfeasance…and, they gloss the important difference between an empirical synthesis/trade book and a philosophical treatise/academic book.” Hauser is correct in suggesting that trade publishing doesn’t have standard rules for crediting sources. By contrast, having sat on the Princeton committee that handles undergraduate plagiarism cases, I am confident that if Hauser were a student, even a small portion of his failure to credit Mikhail would merit serious punishment.
* * *
In his resignation letter to Harvard, Hauser wrote, “While on leave over the past year, I have begun doing some extremely interesting and rewarding work focusing on the educational needs of at-risk teenagers. I have also been offered some exciting opportunities in the private sector.” In an interview titled “On How We Judge Bad Behavior,” made a few months before the Globe broke the story of Harvard’s investigation and available on YouTube, Hauser discusses psychopaths and suggests that they “know right from wrong but just don’t care.”
The structure of Hauser’s lab was similar in many ways to that of my lab and of many other medium- to large-size biology labs at research universities. These labs are populated by a range of people, including undergraduates, paid research technicians, graduate students, postdoctoral fellows and visitors. Some members—particularly graduate students—work in the lab for years, whereas others are more transient. The principal investigator (PI), such as Hauser or myself, selects the lab members, usually pays them, suggests (or assigns) experiments and evaluates their work. For graduate students, the PI is usually the most important person in their scientific life, acting as mentor, supervisor, model, adviser, critic, editor, co-author, supporter, reference and sometimes rival.
All labs are like complicated families, but each lab is complicated in its own way. Along with sibling rivalries, there are battles for attention, praise, identity, privacy and independence. The relation of a PI to his graduate students is often as long-lasting and as intense as a familial one. For a graduate student to blow the whistle on his or her mentor is an extraordinary and very risky step. Aside from the emotional and psychological trauma, whistleblowing by graduate students about their PI, even if confirmed, often ruins their careers. If the PI is fired or loses grant support, members of his or her lab usually stand to lose nearly everything—their financial support, their laboratory facilities, their research project and sometimes their credibility. But in the Hauser affair things have turned out very differently: the three whistleblowers whose action prompted the Harvard investigation have gone on to successful careers in scientific research.
The procedures and conclusions of the investigation raise many questions. Its methods and results remain secret. Its procedures bore no relation to the due process that is the goal of our judicial system. We have no clear idea of the exact nature of the evidence, of how many studies were examined or of whether anyone besides the three whistleblowers and Hauser was asked to testify. I was told by one of the whistleblowers that, to this person’s surprise and relief, the committee, which included scientists, did look carefully at evidence, even going so far as to recalculate statistics.
Aside from their potential injustice to the accused and accusers, the secrecy of the investigation and the paucity of specific facts in its conclusions are deleterious to the entire field of animal cognition. Exactly what kind of irregularities existed in the “eight instances of misconduct,” and what they might imply for other papers by Hauser and for the field in general, remain unclear.
Although some of my knowledge of the Hauser case is based on conversations with sources who have preferred to remain unnamed, there seems to me to be little doubt that Hauser is guilty of scientific misconduct, though its extent and severity remain to be revealed. Regardless of the final outcome of the investigation of Hauser by the federal Office of Research Integrity, irreversible damage has been done to the field of animal cognition, to Harvard University and most of all to Marc Hauser.