Disgrace: On Marc Hauser | The Nation



In the summer of 2007, while the scientist Marc Hauser was in Australia, Harvard University authorities entered his lab on the tenth floor of William James Hall, seizing computers, videotapes, unpublished manuscripts and notes. Hauser, then 47, was a professor of psychology, organismic and evolutionary biology, and biological anthropology. He was popular with students and a prolific researcher and author, with more than 200 papers and several books to his name. His most recent book, Moral Minds (2006), discusses the biological bases of human morality. Noam Chomsky called it “a lucid, expert, and challenging introduction to a rapidly developing field with great promise and far-reaching implications”; for Peter Singer, it is “a major contribution to an ongoing debate about the nature of ethics.”

About the Author

Charles Gross
Charles Gross is a professor of psychology and neuroscience at Princeton University. In 2011 he was a biology...


Three years after the seizure of materials from Hauser’s lab, the Boston Globe leaked news of a secret investigating committee at Harvard that had found Hauser “solely responsible” for “eight counts of scientific misconduct.” Michael Smith, Harvard’s dean of the Faculty of Arts and Sciences, confirmed the existence of the investigation on August 20, 2010. Hauser took a leave of absence, telling the New York Times, “I acknowledge that I made some significant mistakes,” and adding that he was “deeply sorry for the problems this case had caused to my students, my colleagues and my university.” At the time he was working on a new book titled Evilicious: Why We Evolved a Taste for Being Bad. In February 2011 a large majority of the faculty of Harvard’s psychology department voted against allowing Hauser to teach in the coming academic year. On July 7 he resigned his professorship effective August 1. Hauser has neither publicly admitted to nor denied having engaged in scientific misconduct.

Science is driven by two powerful motivations—to discover the “truth,” while acknowledging how fleeting it can be, and to achieve recognition through publication in prominent journals, through grant support to continue and expand research, and through promotion, prizes and memberships in prestigious scientific societies. The search for scientific truth may be seriously derailed by the desire for recognition, which may result in scientific misconduct.

The National Institutes of Health (NIH) and the National Science Foundation (NSF), the main sources of research funds in the United States, have defined scientific misconduct in research as involving fabrication, falsification or plagiarism. “Fabrication” is making up data; “falsification” is altering or selecting data. This definition of misconduct has been adopted by other federal agencies and most scientific societies and research institutions. Explicitly excluded from the category of scientific misconduct are “honest error or differences of opinion”; other types of misconduct, such as sexual harassment, animal abuse and misuse of grant funds, are targeted by other prevention and enforcement mechanisms.

Scientific misconduct is not necessarily a sign of a decline of ethics among scientists today or of the increased competition for tenure and research funds. Accusations of scientific misconduct, sometimes well supported, pepper the history of science from the Greek natural philosophers onward. Ptolemy of Alexandria (ca. 90–168 CE), the greatest astronomer of antiquity, has been accused of using without attribution observations of stars made by his predecessor Hipparchus of Rhodes (fl. 162–127 BCE), who himself had used much earlier Babylonian observations as if they were his own. Isaac Newton used “fudge factors” to make his data fit his theories better. In his studies of hereditary characteristics, Gregor Mendel reported ratios from his pea-plant crossings so close to theoretical predictions as to be statistically very unlikely. When Mendel crossed hybrid plants, he predicted and found that exactly one-third were pure dominants and two-thirds were hybrids. The improbability of obtaining such exact proportions was first pointed out in 1911 by R.A. Fisher, the founder of modern statistics and a founder of population genetics, when he was an undergraduate at Cambridge University. Though Charles Darwin has been cleared of accusations of nicking the idea of natural selection from Alfred Russel Wallace, he seems to have only reluctantly credited some of his predecessors.
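Fisher’s point can be illustrated with a back-of-the-envelope calculation (the sample size below is hypothetical, not Mendel’s actual count): even the single most probable outcome of such a cross occurs only a few percent of the time, so reporting near-exact proportions experiment after experiment is statistically suspect.

```python
from math import comb

# Hypothetical illustration: among 600 offspring of a hybrid cross,
# each plant is independently a pure dominant with probability 1/3.
# What is the chance of hitting the exact one-third split (200 of 600)?
n, k, p = 600, 200, 1 / 3
prob_exact = comb(n, k) * p**k * (1 - p) ** (n - k)  # binomial probability
print(f"P(exactly {k} pure dominants out of {n}) = {prob_exact:.4f}")
```

Even this most likely single outcome has a probability of only about 3.5 percent; every other exact count is rarer still, which is why Fisher found Mendel’s uniformly near-perfect ratios implausible.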

The first formal discussion of scientific misconduct was published in 1830 by Charles Babbage, who held Newton’s chair at Cambridge and made major contributions to astronomy, mathematics and the development of computers. In Reflections on the Decline of Science in England and on Some of Its Causes, Babbage distinguished “several species of impositions that have been practised in science…hoaxing, forging, trimming, and cooking.” An example of “hoaxing” would be the Piltdown man, discovered in 1911 and discredited in 1953; parts of an ape and human skull were combined, supposedly to represent a “missing link” in human evolution. Hoaxes are intended to expose naïveté and credulousness and to mock pseudo-wisdom. Unlike most hoaxes, Babbage’s other “impositions” are carried out to advance the perpetrator’s scientific career. “Forging,” which he thought rare, is the counterfeiting of results, today called fabrication. “Trimming” consists of eliminating outliers to make results look more accurate, while keeping the average the same. “Cooking” is the selection of data. Trimming and cooking fall under the modern rubric of “falsification.” Scholarly conventions and standards of scientific probity were probably different in the distant past, yet the feuds, priority disputes and porous notions of scientific truthfulness from previous centuries seem contemporary.

* * *

In the late 1960s I was eating lunch in William James Hall with a few fellow assistant professors in the Harvard psychology department when a woman named Patricia Woolf sat down at our table. Unbeknownst to us, Woolf was a pioneer in the study of scientific misconduct. She asked whether we had heard anything about the fabrication of data by one of our colleagues. When we said yes, she asked what we were going to do about it. One of us said something like, “Look, our chairman, Richard Herrnstein, is a war criminal. Why should we worry about T—— making up data?” I guess we didn’t take the issue that seriously. At that time Herrnstein was training pigeons to recognize people and sampans in photographs of jungle foliage. The work was supported by the Limited War Laboratory of the US Army and was done off-campus because Harvard prohibited secret research. (With Charles Murray, Herrnstein would later write The Bell Curve, which made incendiary claims about purported racial differences in intelligence.) Herrnstein subsequently managed to help the miscreant find a job elsewhere, forestalling the possibility of scandal at Harvard.

In the past few decades there have been a number of studies asking scientists at every level of research in a variety of fields, and under the cover of anonymity, whether they had engaged in fabrication, falsification or plagiarism, or had direct evidence of such misconduct by others. Although the results were variable and involved different survey response rates and methodologies, the overall picture is disturbing.

In a large and pioneering survey of science graduate students and faculty at ninety-nine universities, the historian of biology and ethicist Judith Swazey and her colleagues found that “44 percent of students and 50 percent of faculty” had knowledge of two or more types of misconduct, broadly defined; about 7 percent had “observed” or had “direct knowledge” of faculty falsifying data. In a survey of its members, the International Society of Clinical Biostatistics found that 51 percent of respondents knew of at least one fraudulent project in the previous ten years. Of 549 biomedical trainees at the University of California, San Diego, 10 percent said they had “firsthand knowledge of scientists’ intentionally altering or fabricating data for the purpose of publication.” In a similar survey, 8 percent of biological and medical postdoctoral fellows at the University of California, San Francisco, said they had observed scientists altering data for publication. The American Association for the Advancement of Science surveyed a random sample of its members, and 27 percent of the respondents believed they had encountered or witnessed fabricated, falsified or plagiarized research over the previous ten years, with an average of 2.5 examples. A study by the director of intramural research at the Office of Research Integrity (ORI) of the Department of Health and Human Services found that of 2,212 researchers receiving NIH grants, 201 reported instances of likely federally defined misconduct over a three-year period, of which 60 percent were fabrication or falsification and 36 percent plagiarism. Noting that in 2007 155,000 personnel received research support from the NIH, the authors suggest that under the most conservative assumptions, a minimum of 2,325 possible acts of research misconduct occur each year. Finally, in a meta-analysis of eighteen studies, 2 percent of scientists admitted to having fabricated or falsified data, and more than 14 percent said they had observed other scientists doing so.

Scientists guilty of misconduct are found in every field, at every kind of research institution and with a variety of social and educational backgrounds. Yet a survey of the excellent coverage of fraud in Science and recent books on the subject—ranging from Horace Freeland Judson’s The Great Betrayal: Fraud in Science (2004) to David Goodstein’s On Fact and Fraud: Cautionary Tales From the Front Lines of Science (2010)—reveals a pattern of the most common, or modal, scientific miscreant. He is a bright and ambitious young man working in an elite institution in a rapidly moving and highly competitive branch of modern biology or medicine, where results have important theoretical, clinical or financial implications. He has been mentored and supported by a senior and respected establishment figure who is often the co-author of many of his papers but may not have been closely involved in the research.

Scientific misconduct is often difficult to detect. Although grant applications and research papers submitted to prestigious journals are rigorously reviewed, it is very difficult for a reviewer to uncover fabrication or falsification. Attempts at “replication”—repeating someone else’s experiment—are another weak filter for misconduct. Journals are reluctant to publish the results of attempts at replication, whether positive or negative, thereby discouraging such attempts. In any case, particularly in the complex world of biology, it is often hard to repeat a specific experiment because of the multitude of differences, often unknown, between the original and the replication. Failure to replicate does not demonstrate fraud; it does, however, indicate a problem to be looked into. Sometimes fraud is detected by a careful examination of published papers revealing multiply published or doctored illustrations; more often it is uncovered by the perpetrator’s students or other members of his laboratory.

* * *
