The third most dispensed drug in the United States is a thyroid medication called Synthroid. Eight million Americans suffering from hypothyroidism take Synthroid every day, paying a premium for Knoll Pharmaceutical’s top-selling brand name rather than buying the much less expensive generic alternative. As is the case with most brand leaders, Knoll’s enormous success with Synthroid is entirely dependent on its continuing ability to convince its users and the healthcare community that its drug is worth the extra cost. This the company has done brilliantly for decades, despite the absence of any real proof of Synthroid’s superiority.

In the late eighties, the company (then known as Boots Pharmaceutical) had good reason to believe it was on the verge of obtaining such proof. A clinical pharmacist at the University of California, San Francisco, named Betty Dong published a limited study that strongly suggested Synthroid would beat out its competitors in a blind, randomized trial. The company approached Dong, offering her the full $250,000 needed to pay for such a long and complex study.

Alas, the study backfired on the company. To the surprise of nearly everyone, including Dong, the results suggested that Synthroid was no more or less effective than three much cheaper competitors. All four were what scientists call “bioequivalent.”

But the company had a trump card. As the study’s sponsor, it had not only been able to design the protocols of the drug trial; it also had exclusive access to the prepublication results of the study as well as final approval over whether the study could ever be made public. Not surprisingly, with the results so threatening to its marketing efforts, Knoll set out to thwart the study. In addition to delaying its publication in a scientific journal by many years, effectively destroying the relevance of its data, the company also undermined the study’s message by pre-emptively publishing the UCSF data in a different journal with a different (much friendlier) conclusion. Then Knoll waged a massive PR campaign against the real study, “Bioequivalence of Generic and Brand-name Levothyroxine Products in the Treatment of Hypothyroidism,” by Betty J. Dong et al., after it was finally published in the spring of 1997 in the eminent Journal of the American Medical Association (JAMA).

A massive class-action lawsuit followed the publication of Dong’s JAMA report, alleging on behalf of all Synthroid users that Knoll had defrauded them of hundreds of millions of dollars in inflated costs. The company has offered to settle for a sum close to $100 million–which would be the largest cash settlement for a class-action suit of its kind in history. And yet, even with such a fantastic price to pay, one can only conclude that in the end Knoll has benefited tremendously from its brash interference in the academic research process: A hundred million is but a small fraction of the profits the company made from Synthroid during the years it was suppressing the study. And by tainting Dong’s study with controversy over the years, Knoll was able to nullify any effect it might have had. “Sales continue to grow very rapidly,” Carter Eckert, Knoll’s president, told me when I visited him at the company’s rural New Jersey headquarters. “Our position has been validated.”

Betty Dong’s case, while extraordinary, is not isolated. In Toronto, liver specialist Nancy Olivieri was threatened with legal action by the Canadian drug giant Apotex if she published criticisms of its drug L1, concerns that had emerged from a clinical trial the company was sponsoring. In Providence, Rhode Island, Brown University’s director of occupational medicine, David Kern, was pressured both by a local company and by his own university not to publish his findings about a new lung disease breaking out at the company’s plant (Kern did publish his data, and the disease Flock Worker’s Lung was officially recognized by the Centers for Disease Control in September 1997). In Winston-Salem, North Carolina, hypertension expert Curt Furberg and three colleagues resigned from a major Sandoz-funded study of calcium channel blockers, a controversial class of drugs purported to decrease the risk of heart attacks, rather than cave in to company pressure to spin negative results in a positive light. “I have seen people in industry asking for stranger and stranger things in private funding, as far as control is concerned,” says Gregory Gardiner, Yale’s senior director of the Office of Cooperative Research. Indeed, these sensational cases may well be only the visible tip of a broader crisis in academic science. Over the past two decades, university-industry partnerships have become a ubiquitous feature of biotech research, and with this new closeness has come a raft of new concerns about whether the soul of academic science is being slowly eaten away. “We need to be vigilant,” suggests Gardiner, “to make sure nothing is happening to university science.”

The infusion of private capital is staggering. In 1997 US companies spent an extraordinary $1.7 billion on university-based science and engineering research, a fivefold increase from 1977. More than 90 percent of life-science companies now have some type of formal relationship with academic scientists, and 60 percent of those report that they have achieved new patents, products and sales as a result. In the realm of university science, at least, that once-remote ivory tower now finds itself cater-corner to an office park–in many cases literally.

No one doubts that this surge in university-industry alliances has produced enormous scientific progress, yielding important new drugs like the anti-HIV agent 3TC, a synthetic version of the anticancer drug Taxol and the Haemophilus b conjugate children’s vaccine for bacterial meningitis. University-industry alliances have also hatched many critical tests and medical technologies, prolonging and improving countless lives.

The new alliances have also generated a lot of profit. According to the Association of University Technology Managers, a boosterish pro-alliance trade group, corporate licensing of university inventions now accounts for $21 billion in annual revenue, which in turn supports 180,000 jobs. The arrangement has also become an important new revenue stream for academic institutions and for individual faculty: In fiscal year 1993 the top ten universities alone received $170 million in product royalties. In the majority of campus technology transfer policies, the researchers making discoveries are entitled to a portion of that money. Sure enough, a survey by Tufts University’s Sheldon Krimsky of articles published in 1992 in the fourteen leading US biomedical journals disclosed that 15 percent of lead authors had some significant financial interest in their published report. A similar survey in 1996 suggested the proportion was closer to one-third. And a just-completed Krimsky study of 62,000 articles published in 210 different journals revealed that potential conflicts of interest are almost never reported. Though all 210 journals have a formal disclosure requirement, 142 of them did not publish a single disclosure in all of 1997. “Companies say, ‘Here’s the design. Are you interested?'” explains Bowman Gray medical school’s Curt Furberg. “Being interested means a lot of funding for you and your institution. There’s a lot of appeal in going along.”

Unfortunately, the cost of economic success may often be the integrity of the science itself. What are we to make of a recent study published in JAMA suggesting that an astounding 43 percent of women and 31 percent of men suffer from “sexual dysfunction”–once we also discover that two of the study’s authors served as paid consultants to Pfizer, which manufactures Viagra? (The relationships were not disclosed in JAMA.) If individual researchers are profiting from their own research, observes University of Pennsylvania bioethicist Mildred Cho, “the outcome or direction of their work may be affected. They might, for instance, be tempted (consciously or unconsciously) to design studies that are more likely than not to have an outcome favorable to the product.” Or they might be tempted to keep lifesaving but potentially profitable information secret from the colleagues–now competitors–who could most readily build on the discovery. “There is little question that academic faculty have a very different and less critical attitude toward a specific company if they are getting a lot of money,” insists Public Citizen’s Sidney Wolfe. “It’s not just research grants. A number of these people supplement their income by going around the country giving talks funded by the drug industry. It adds a significant amount of money to their income. You don’t bite the hand that feeds you.” The obligation purchased with this money, Wolfe says, eats into “the freedom to teach the way you want to, to put drugs on the formulary, to do the research you want to do, to publish when you have results, as opposed to when some company decides that it’s OK. People don’t have to sign restrictive agreements. You can modify people’s behavior just by giving them money.”

Such subtle and not-so-subtle perversions of science are very difficult to detect but have very real economic and health implications for American consumers. When adverse side effects are not adequately reported, drugs and devices maintain artificial leads–and premium prices. Scientists sometimes may not pursue drugs or tests that lack obvious short-term markets. Ultimately, private science could end up answering not to the public good but to the same pressures that drive stocks up and down. “The reason we got the money [from Boots/Knoll] was that chances were that the results were going to be very positive,” says Dong, still an important researcher at UCSF. “I’ve changed my mind about that. I don’t think that’s a very good reason to do research.”

Whatever the drawbacks of the privatization of research, logic would dictate that they are already pervasive. “Some of the collaborations I find strange,” says Allen Sinisgalli, Princeton’s associate provost. “If you’re involved in sponsored research and you’re working on one floor and the corporation is on the other floor, it’s hard to believe that the stairwell somehow acts as a membrane that will prevent conflicts of interest.” While private investment amounts to just 12 percent of the total annual budget for academic life science, that number nevertheless signals a radical shift in the funding of American science–a shift that is causing considerable concern among a small group of academic ethicists. “An entrepreneurial atmosphere has begun to alter the ethos of science,” warns Sheldon Krimsky. “Norms of behavior within the academic community are being modified to accommodate closer corporate ties.”

This is not to say that medical research is rife with corruption. But there are unmistakable warning signs. One recent study, for example, revealed that among published studies of new drug therapies, 98 percent of those financially supported by the pharmaceutical industry commented favorably on the new drug–in contrast to 79 percent of studies with no industry support. Ninety-eight percent: Either industry-supported studies are consistently and miraculously beating all the odds, or a raft of unfavorable results is somehow not getting published. “This is the biggest ethical issue facing biomedical research now and into the twenty-first century,” says Mildred Cho. “It’s something that’s sneaking up on us now and shouldn’t be.”

If you ever want to watch an ethicist struggling with a crisis of conscience, offer to spring for lunch. This is the level of consternation I have unintentionally created as I meet with Drummond Rennie in San Francisco. “I’m sorry, I can’t. I just can’t let you do that–but please let me explain why,” Rennie, the West Coast deputy editor of JAMA, pleads in his plaintive Winchester-Cambridge accent as I try to pay for our mayonnaisey sandwiches just downstairs from his office. Jerking out his black leather billfold, Rennie explains his longtime, ironclad rule of refusing all offers of free food, travel, lodging–indeed, perks of any kind–from anyone other than his employer. Polite apologies are exchanged, accepted. No harm done. Our mutual autonomy intact, we head back upstairs. As I set up my tape recorder, Rennie, who over the years has slowly fashioned himself into the conscience not only of JAMA but more generally of scientific publishing, opens a drawer and begins to excavate the Synthroid files.

A junior associate, Veronica Yank, lends a hand. There isn’t much glory in the business of ethical scrutiny, and certainly not much money; it’s not the type of job that filters into the daydreams of ambitious children or that charming recruiters wax about over seared tuna to recent Phi Beta Kappa graduates. So here at the Institute for Health Policy Studies, a think tank affiliated with UCSF’s prestigious medical school, Professor Rennie has also made an extra effort to mentor other like-minded scientists. While most of the world’s researchers investigate matters of efficacy, morbidity and so on, this small cadre–Cho, Yank and Rennie protégée Lisa Bero (herself now a leading force in the field)–joins Krimsky and David Blumenthal of Harvard in examining the integrity of that research. It’s nothing like a police squad, though, because most of the flaws and compromises they discover are not even apparent to the researchers. Because of corporate influence, says Rennie, “there is distortion that causes publication bias in little ways, and scientists just don’t understand that they have been influenced. There’s influence everywhere, on people who would steadfastly deny it. You and I think we are not influenced, but Veronica looking at us from above can prove that we are.”

Like most of his ethics colleagues, Rennie, 61, did not leap at but slowly gravitated to the field. Trained in nephrology (the study of kidneys) at the University of London and Johns Hopkins, he eventually managed to combine his vocational specialty with his avocational passion–high-altitude climbing–to become an expert on altitude sickness. Throughout the sixties and seventies, in dozens of expeditions in the Andes, the Himalayas, the Alps, the Yukon and Alaska, Rennie documented the physiological effects of low-oxygen environments. A 1981 hip injury on Mt. McKinley squelched that intense phase of his life, but he took away from his mountain years a profound lesson in morality. “Really serious climbing teaches you a lot about integrity,” he says. “It’s so basic–do you abandon someone or not when you think you’re going to die? Do you cut the rope? Do you make an effort to get food up to those people? There are a lot of very stark things that climbing teaches which I’ve found very painful to learn, because I haven’t always made the right choices.”

In 1977 Rennie went to work for Arnold “Bud” Relman at the New England Journal of Medicine. In Relman, Rennie found a mentor for what would become the next distinguished phase of his career. At first, the education was simply in the art of scientific editing. Interest in the integrity of scientific literature came later and was driven by a series of unfortunate events. “I came into this role very slowly,” he says. “It took a long time for me to even accept that there was such a thing as scientific misconduct.”

His terrifying introduction to the subject came in 1979 in the form of a letter to the Journal containing incontrovertible proof that two Yale researchers, Vijay Soman and Philip Felig, had committed plagiarism. Not long after that, a well-respected Harvard researcher, John Darsee, was caught falsifying electrocardiogram data. “When I heard that there was a problem with Darsee,” says Rennie, “I rushed to the Journal. We had just published this amazing article by him. I looked at it again and said to myself, ‘Oh, we’re all right. He’s got a co-author and he thanked three other authors at the end of the article.’ Well, it was later shown that every single piece of data in that article was invented. He even invented the doctors at the end.”

Today, Rennie is JAMA‘s West Coast deputy editor. The Synthroid case is his latest fascination because, he says, it so clearly illustrates the starkly differing agendas of industry and the academy. “This was a good study,” he says. “The best study that had been published on the subject. [The company] went to extraordinary efforts to discredit it, and by extraordinary I mean that there were accusations that can ruin a scientist’s career.”

Indeed, when the research that Boots/Knoll had funded produced results that could potentially have cost it billions, Knoll accused Dong not only of sloppy research but also of serious ethical violations (none of which have been substantiated). Those accusations continue to this day. “We thought we had contracted with a qualified researcher,” Carter Eckert told me during my visit to Knoll. “She didn’t follow the protocol. Her methods were flawed. She drew erroneous conclusions and she didn’t provide all the information on what she discovered.”

While there does seem to be an honest scientific disagreement at the heart of the controversy, it’s just as clear that the company exploited that difference well beyond propriety. “What Boots tried to do,” says Leslie Benet, chairman of pharmaceutical chemistry at UCSF and one of the leading bioequivalency experts in the United States, “was to come in and create confusion as best they could–anything to delay or prevent the publication of this study. So they raised a lot of issues. They had a catalogue of a hundred and something issues. The great majority of it was grandstanding, what we call ‘data scrubbing’–trying to find something to cause a problem.”

Knoll also used its near-omnipotence in the thyroid community to keep the study under wraps, Rennie says. Perhaps the most vivid illustration of this came when the American Thyroid Association considered a resolution urging the company to allow the study to be published. “That vote was on an absolute no-brainer, which was, ‘Should we, as the Thyroid Association, write to the manufacturer and say, Please publish this paper?’ I can’t think of any easier question. It’s a matter of basic academic freedom. And it was turned down. That is most extraordinary.” One inescapable conclusion is that the defeat had something to do with the fact that Knoll provides more than 60 percent of the Thyroid Association’s funding. Indeed, Rennie claims that three people present for the fateful vote later told him that as they considered the proposition, one member openly remarked, “We mustn’t kill the golden goose.”

“Universities exist to do research, and research exists to benefit mankind,” Rennie says. “Companies have an additional and different agenda–making profit. Though they may be experts and though you may have read papers by them and so on, their strings are pulled by the marketers. And that’s forgotten by academics.” Weeks later, in a follow-up phone conversation, I ask Rennie if the Dong-Synthroid affair is the worst case of private abuse of public research that he has ever seen. He laughs. “David, I’ve got a house full of files with important cases of abuse. This is just one example. There are many others. Extreme examples like faking whole papers draw attention, but trimming, skewing, using the wrong analysis, using the more favorable analysis or just muddling a little bit is certainly much more common and a far, far bigger problem.”

For precisely fifty years, the US government has funded, on our behalf, a stunning volume of academic scientific research, mostly through the umbrella bureaucracy of the National Institutes of Health (NIH). The expenditures have also been spectacular in their consistency. In sharp contrast to almost all other federal spending on research and development, spending on academic science has steadily increased through deficits, recessions, wars and even our recent political devolution. The latter half of the twentieth century of US history might ultimately be as well-known for its commitment to basic scientific research as for any other endeavor.

As the United States began to convert its economy after World War II, a conviction emerged among the elite that the nation’s future success would depend largely on scientific progress. The spur came from the legendary Vannevar Bush, director of the wartime Office of Scientific Research and Development and overseer of the Manhattan Project. In July 1945 Bush submitted to President Truman a report titled “Science: The Endless Frontier,” which pleaded with Truman to make science a permanent national priority. “Without scientific progress,” Bush wrote, “no amount of achievement in other directions can insure our health, prosperity, and security as a nation in the modern world.”

Bush’s expectation of science’s importance to society has of course proven entirely correct. America’s enthusiastic public support for research has helped make it the world’s undisputed leader in public health. Part of that success is due to the fact that science was not only well funded for so long but also had the independence to pursue its own ends. “Investigators did not have to prove the short-term applicability of their work,” explains Harvard’s Blumenthal, “because they did not have to rely on sponsors, such as industry, with such short-term orientations.”

The implicit pact between scientists and legislators that allowed for such a long leash was that research dollars would, eventually, help treat and cure disease, something any constituency could appreciate. In part because the postwar economy was performing so well in other areas, there was no particular expectation of economic benefits from this federal largesse for several decades. American industry was too busy manufacturing to bother with discovery. Throughout the fifties and sixties, private industry generated less than 4 percent of all university research funding. This “certainly did not prevent the transfer of useful technologies from universities to biomedical industries,” remarks Blumenthal. “But it did result in less direct interaction between academic scientists and industrial organizations.”

In the late seventies, as the economy faltered and strong foreign competition emerged from Japan and Western Europe, the institutional separation between academia and industry came under critical scrutiny, as both industry and government began to view academic science as an untapped economic resource. Many potentially lucrative discoveries, it was thought, were foundering in the laboratory. In 1980 Congress passed the Bayh-Dole Act, which allowed researchers and universities to patent discoveries from federally funded research. With such legal protection, entrepreneurs would be able to take the development risks necessary to bring discoveries to market. Since almost everything on campus depends on Washington funding, at least in part, Bayh-Dole effectively lifted a ban on campus entrepreneurship, thus allowing academic scientists to take an active role in the private applications of their research.

The Federal Technology Transfer Acts of 1986 and 1989 strengthened market incentives even further, allowing researchers, for example, to keep proprietary information secret. This suite of legislation reflected the increasingly popular notion that government research was useful mainly as an economic seed. “There are also times when a field of research no longer needs the Government as nursemaid,” the New York Times editorialized in 1985. “The rich flow of venture capital into biotechnology means the Government need no longer support that element of biomedical research so heavily.” Between these lines, one can see the rebirth of a familiar laissez-faire refrain: What’s good for Pfizer is good for everyone.

That sentiment would probably sound about right to Knoll Pharmaceutical president Carter Eckert. “The whole concept of this conflict–it ain’t there,” he said. “Not in the pharmaceutical business. The stakes are too high. It’s absolutely insane to take the position that a pharmaceutical company is going to win by not pursuing the truth. Ultimately, the patients have to use the drug.” In Eckert’s view, then, the marketplace is the ultimate consumer watchdog. After all, he says, no one’s going to make much money selling something that doesn’t work.

That’s true enough. On the other hand, the profit motive might encourage a company to suppress or distort positive findings on competing products–or, for that matter, simply to keep some data secret. A 1997 survey by David Blumenthal revealed that among companies that sponsor academic research, 58 percent require their investigators to withhold results for more than six months–far longer than the two months the NIH considers reasonable. In that same survey, a third of the academic respondents said they had been denied access to research results of other university scientists.

Ultimately, such secrecy costs not just dollars but also lives: Renowned NIH cancer researcher Stephen Rosenberg reports that he has, on several occasions, been unable to obtain important data and lab materials because he would not agree to strict proprietary rules of secrecy. When anything undermines the open sharing of all research data, laments Blumenthal, “researchers unknowingly build on something less than the total accumulation of scientific knowledge.” Ineffective or even dangerous drugs are not revealed as such at the earliest possible moment; avenues of research already known to be fruitless by some are needlessly pursued by others, wasting money and time and ultimately hindering scientific progress.

The walls in Rennie’s small office are lined with stark photographs of peaks, glaciers and very cold people. “One of the great things about being a climber,” Rennie says with a gesture to one wall, “is that you keep falling off things and getting frozen. You end up in hospitals. You become a patient.” He laughs. “It’s my job, and Blumenthal’s and Krimsky’s and Bero’s, to look at research from the patient’s point of view, to ask, ‘Can I trust this?’ You can talk about caveat emptor, buyer beware, but patients are emptors that can’t caveat because they don’t know how. When you are a patient, it’s not like buying a Toyota. Patients don’t know how to choose their own anesthetic.”

Such profoundly important medical decisions are made by hospital boards based on the best scientific research available. The problem is, argues Rennie, that as universities continue to let industry money dilute their nonprofit, nonpartisan character, they do so at the risk of frittering away public confidence. “The bottom line for universities that they haven’t fully understood,” he says, “is that in the end, public universities have to rely on public support. If the public perceives a university as a place where scientists become millionaires and where companies are in control, they’ll lose public support, and that will be catastrophic for them and for the public at large. People will say, ‘Well, he’s got a bigger house than I have and a better car, and I don’t seem to be getting any of the action at all. Why should I support or do anything to help those jerks? They’re just a rich business concern.’ Universities have to have credibility and be above the fray.”

Princeton’s Sinisgalli agrees. “Universities are having difficulty all the way along the line,” he says. “We cannot allow ourselves to blur our role. It’s not only a matter of conflict of interest but also of conflict of commitment and time.” Although industry-sponsored research on his campus has risen sixfold in recent years as a portion of total research dollars, it’s only at about half the national average. Further, Princeton retains what may be the strictest industry-sponsorship policies in the country: no developmental research; no testing; no ownership stake allowed for any company sponsoring campus research. “For a while, a lot of people thought we were a little behind the curve,” says Sinisgalli. “Now, I think some people are looking at our cautiousness and saying, ‘Maybe they were right.’ They are rethinking it because there are so many conflicts.”

One obvious move that bioethicists would like to see is a lot more public disclosure. While most of the top research institutions have disclosure guidelines in place, many could be more stringent. Conferences and journals have also been edging toward more disclosure, but many refuse to budge. Nature, for example, insisted in an editorial two years ago that the 1996 report by Sheldon Krimsky revealing that a third of authors surveyed had a financial interest in the research “makes no claim that the undeclared interests led to any fraud, deception, or bias in presentation, and until there is evidence that there are serious risks of such malpractice, this journal will persist in its stubborn belief that research as we publish it is indeed research, not business.”

Nature‘s position of shielding conflicts of interest from public view is ridiculous on its face, and, in an era of so many financial entanglements, a threat to the integrity of science. The starkness of the problem was revealed last year in a New England Journal of Medicine survey of authors who had published studies on calcium channel blockers. “The medical profession needs to develop a more effective policy on conflict of interest,” the Journal survey concluded. How did it arrive at such a blunt determination? It turned out that while just 3 percent of the calcium channel authors surveyed had publicly disclosed potential conflicts of interest, the percentage of those who should have–that is, the percentage of those who publicly favored the drug and had a financial relationship with the manufacturers–was a bit higher: 96.