Nothing is more galling to scientists than outsiders questioning their research priorities. Witness the indignation of several leading physicists when the Superconducting Super Collider project was axed in 1993, or, more recently, the outrage expressed by members of the biomedical research community at being stymied in their pursuit of human embryonic stem-cell research. Beneath the research community’s sense of entitlement lies a deeply rooted fact about science policy: Since World War II, the United States has socialized the costs of scientific research without socializing its governance.
But given the powerful influences science and technology exert on society, shouldn’t the public be given a greater role in shaping science’s agenda? Even before Hiroshima seared an awareness of the powers of science in the American mind, New Deal policy-makers had begun answering in the affirmative. Their attempts have renewed relevance today, as military imperatives reassert their influence through bioterrorism and missile defense research, and as academia’s courtship of industry imperils what little public accountability exists in science.
The cause of democratic science was first championed in the early 1940s by freshman senator Harley Kilgore. Described as a “round faced gum-chewing man,” Kilgore introduced a bill intended to reverse what he perceived as corporate domination of research and to give the public an important role in directing the course of science. The centerpiece of his bill was a provision that called for creating a large, pan-scientific federal agency to coordinate and conduct both basic and applied research. In line with Kilgore’s New Deal commitments, his agency would be governed by a board of directors that included one representative each of labor, agriculture, consumer and industry interests, in addition to two scientists.
Not surprisingly, Kilgore’s proposal to grant scientists only a minority stake in their new agency was met with consternation by much–though not all–of the scientific community. Their response was led by Vannevar Bush. A Yankee Republican with angular features, Bush maintained close ties with large corporations and was an accomplished dean at the Massachusetts Institute of Technology. As head of the country’s wartime science agency, he also enjoyed considerable clout as the spokesman for American science. Bush parried Kilgore by appointing an advisory committee of scientists to furnish the President with recommendations on science research policy. The resulting manifesto, Science, the Endless Frontier, argued for a well-budgeted peacetime scientific agency that would jealously preserve the authority and agenda-setting autonomy of an elite scientific community.
A six-year tug of war between Bush and Kilgore ensued. As wartime government contracts were transferred to various agencies like the National Institutes of Health, the dream of a centralized science agency evaporated, and the resulting National Science Foundation emerged in 1950 with an almost inconsequential budget and limited jurisdiction over peacetime scientific research. Though both men lost the battle for a centralized agency, Bush, having helped win the war, nevertheless won the science war: His report soon became the canticle for a postwar science policy that has generally left the governance of science to the scientists. Kilgore, sad to say, is remembered (to quote his Times obituary) as “one of the most tireless idea men in the Senate…the fact that his solutions were almost unanimously ignored by his colleagues never seemed to daunt him.”
The Bush-Kilgore debate, which remained the most important science policy scrimmage for more than half a century, makes a cameo in philosopher Philip Kitcher’s Science, Truth, and Democracy. Noting that philosophers have tended to approach science policy debates by way of ethics, Kitcher asks instead how philosophers of science might contribute to discussions about the social implications of science. In part because ethicists have tended to focus on technologies and their (mis)applications, questions about the values and content of science have languished in soft focus. Kitcher’s book attempts to correct that myopia by training a moral and political lens on the content, rather than simply the application, of science.
Science warriors allied with social constructivism might grouse that Kitcher’s attempt to reconcile science and social values is nothing new. They view science as interpenetrated–some would say saturated–with political and social interests. These interests influence not only the choice and funding of scientific projects but also the adjudication of scientific disputes, the processes by which observations become enshrined as “facts” and the actual descriptions of phenomena proffered by science. The most extreme social constructivists would further argue that because values intercede whenever the scientist interrogates his or her world, there is no such thing as a “stable” reality independent of our minds.
However, Kitcher’s book argues that you don’t have to be a social constructivist to advocate a central role for social, political and ethical judgments in scientific affairs. Realist orthodoxy would maintain that science provides value-free descriptions of an objective reality, and therefore science unfolds by its own (and nature’s own) asocial logic. The best science, therefore, is produced by leaving scientists literally to their own devices. This may occasionally lead to dangerous and unseemly discoveries–honed techniques of genocide, say, or evidence of incorrigible ethnic propensities. But society, the orthodox realist continues, ought nevertheless not to censor such research activities, first, because these discoveries become dangerous only when they are misused, and second, because knowledge is always preferable to ignorance.
Kitcher disputes each of these claims from a philosophical vantage he calls “modest realism.” After flashing his realist credentials in two bracing chapters, Kitcher argues that scientific investigations, however grounded in an independent reality, are embedded in their social context because what scientists consider interesting is strongly influenced by human history and aspiration. Kitcher uses map-making to clarify his point: Maps, like science, reflect the reality of their subject (the landscape) as much as they do the interests of their users (hikers, for example, need information on elevation, not soil alkalinity). There is no way to produce a map without recourse to human interests, and as a result, social interests intervene when cartographers decide which features to map.
While reflecting human interests, scientific knowledge also in turn modifies them by enabling previously impossible activities. A good trail guide for a previously unmapped wilderness, to return to Kitcher’s metaphor, increases foot traffic and therefore alters the activities of wildlife. Hikers seeking glimpses of rare fauna then require maps that describe still remoter portions of the wilderness. At the same time, increased hiker volume in well-mapped regions erodes trails over time, necessitating the revision of our old maps. One of the key arguments of Kitcher’s book, then, is that knowledge matters.
Kitcher extends this argument to scientific discoveries that have dangerous social implications. He rejects the view that subversive truths have some intrinsic worth or that they eventually lead to social benefits, and argues instead that the value of a particular line of inquiry hinges entirely on an assessment of what society deems to be the ends of knowledge. A properly functioning society, therefore, requires what Kitcher terms a “well-ordered science” that can “satisfy the preferences of the citizens in the society in which it is practiced.” His ordered science would involve a three-phase democratic procedure for arranging scientific priorities: In the first, deliberators would decide on the issues they would like scientific inquiry to address; next, they would match those goals to particular scientific strategies; and in the third stage, they would evaluate the most appropriate ways of applying the resultant scientific knowledge.
Would a Kilgore victory have better approximated Kitcher’s ideal? Kitcher states that Bush’s Science, the Endless Frontier might have been “groping towards the ideal of well-ordered science” and in the end was “a thoughtful and rightly influential brief for science.” Perhaps it was, compared with what preceded it. However bold his prescriptions, Kitcher holds back from offering substantive analysis of various attempts to “order” science. The cover of Science, Truth, and Democracy, for example, depicts Jacques Louis David’s famous The Oath of the Tennis Court at Versailles–probably because it somehow evokes democracy in the making. It turns out that David actually spearheaded what was probably the first modern attempt to democratize science when he led efforts to close the Paris Academy of Sciences during the French Revolution. Regrettably, Kitcher misses this chance to provide some historical context for his discussions, though I suppose expecting history from a philosopher is a bit like looking for falafel recipes in an Italian cookbook.
Kitcher is, nevertheless, unequivocal in his assessment that “contemporary scientists don’t work in a state of well-ordered science.” Human genetics research forges intrepidly ahead despite the absence of any meaningful protections against genetic discrimination. Commercial pressures virtually guarantee that genetic tests will be marketed before any such protections are enacted. Attempts to impose democratic discipline on genetic research have been converted into ethical ornamentation. Meanwhile, the medical benefits of this research are oversold to an untutored public by a scientific community more interested in research budgets than cures.
Kitcher’s tone is gentle and didactic, and his arguments are impeccably structured. But while chapters begin with disarming geniality, the subject matter is complex, and Kitcher frequently abandons his readers in the thick woods with sentences like this one: “Even though it would be inconsistent to suppose that the efficient pursuit of inquiry is a uniquely rational goal, so that achieving a unified system of scientific laws is paramount, it’s easy to see how, given an initial (nonrational) commitment to regard the provision of strategies adaptable to the broadest collection of ends as the optimal state, one might be led to see the efficient development of science and technology as an overriding imperative.” Despite this, the book earnestly labors to connect with scientists and other nonphilosophers, and Kitcher deserves praise for attempting to, on the one hand, push science toward a democratic ideal and, on the other, furnish science’s would-be democratizers with sounder philosophical foundations.
Daniel S. Greenberg’s Science, Money and Politics: Political Triumph and Ethical Erosion provides a more sweeping survey of the disorder that afflicts contemporary science. For more than forty years Greenberg has chronicled the foibles of the scientific establishment with inimitable vigor and cheek, first as a news reporter for Science, and then as editor of the small newsletter Science and Government Report. His previous book, The Politics of Pure Science (first published in 1967, with a second edition in 1999), is considered a classic by science policy enthusiasts.
Greenberg’s irreverence has always offered a square meal for those woozy from the puffery and piety served up by most science journalists. The present book doesn’t disappoint; Greenberg is especially adept at capturing folly and hypocrisy with memorable phrases, as when he states that money for science policy reports goes as fast as “ice cream at a teenage pajama party.” His book consists of a series of episodes in which the scientific community (he prefers to call it the “scientific enterprise”) has compromised its integrity, abused its cultural authority or retreated from public responsibility. Some of the incidents Greenberg describes are trivial. Others, however, justify his froth. He shows how presidential science advisers serve politics rather than the “values of science” by defending viewpoints that run contrary to their own judgments (as was the case when Reagan’s science and technology adviser, George Keyworth, denounced opponents of the Strategic Defense Initiative). We learn how the National Science Foundation concocted a flawed and self-serving report that projected science labor shortfalls. Despite the report’s failure to undergo peer review, many science spokespersons uncritically paraded the figures as evidence of crisis. Greenberg describes an abortive attempt by several prominent scientists–with funding from the National Science Foundation–to develop an ER-like television drama about science in order to woo public affection for research.
Greenberg also devotes a considerable portion of the book to crushing chestnuts. These include reverential portrayals of Vannevar Bush (many of his Science, the Endless Frontier recommendations in fact went unheeded), a widespread perception that the 1960s represented a “golden age” of research funding (it turns out that many contemporary scientists perceived funding to be in crisis then) and the belief that funding for scientific research is directly linked to the public’s scientific sophistication (actually, in 1998 the National Cancer Institute’s support for breast cancer research trailed Department of Energy support for less photogenic high-energy physics research by $180 million).
Greenberg portrays a scientific establishment that has grown risk-averse and conservative as the resources it commands for academic research have ballooned from $705 million in 1960 to $26.3 billion in 1998 (in current dollars). Two strategies have helped to assure this sustained growth. Like a pilot fish living comfortably but nimbly in the shadow of its swift and fickle host, the scientific enterprise has carefully avoided any activities that could be construed as insubordination. Referring to political scientist Robert Putnam, Greenberg asserts that science increasingly “bowls alone.” Where scientists once founded disarmament groups like the Federation of American Scientists, the Union of Concerned Scientists and the Pugwash Conferences, today such public engagement is rare, and membership in these organizations is declining. Greenberg’s book came out too early to note the stunning silence of molecular biology’s mandarins (aside from Matthew Meselson) when George W. withdrew from negotiations to strengthen the Biological Weapons Convention.
Science is, nevertheless, far from disinterested when it comes to politics: Its esoteric endeavors depend on government patronage, and it benefits from a generous bequest of autonomy and deregulation. The second survival strategy of the scientific enterprise is to conceal its interests behind neutral-sounding reports or pronouncements that project an atmosphere of crisis (in scientific literacy, international competitiveness, funding, or now, stem cell availability). Greenberg has made a career of reviewing such claims and finding them overwrought, and his present work exposes some of the sausage-making behind them.
Like Kitcher, Greenberg seems to argue that science’s appearance of political disinterestedness conceals an enterprise deeply embedded in–not independent of–today’s social and political order. Where Kitcher offers a glimpse of what a well-ordered science would look like, Greenberg shows how far we are from that ideal.
Unfortunately, Greenberg’s account suffers several shortcomings. As with Kitcher (who closes his book with a quixotic plea for scientists to help direct science along a “well ordered” course), Greenberg can be faulted for expecting too much from the scientific community. Why, in this era of rational, self-interested actors, should we expect science to forgo practices that have helped it to attain its unimpeachable social authority? Both authors’ comments might be better aimed if they spoke more directly to those most likely to contest the scientific enterprise’s autonomy. Stylistically, Greenberg’s book is maddeningly repetitive. For example, when he describes an instance in which scientists organized in large numbers to oppose the nuclearphilic Barry Goldwater, he states “never again [after 1964] in significant numbers did science return to ballot-box politics.” Then on page 164 he reminds us that “the mobilization of scientists was not repeated in [the] 1968 presidential campaign…or ever again”; then on page 181: “The political forays of scientists that began with the 1960 Kennedy campaign peaked in 1964 and then virtually disappeared.”
Greenberg’s 500-plus-page tour also feels haphazard, and it skips over numerous important estuaries where the “purity” of science mingles with the brine of politics. These include the evolving role of science in regulatory affairs and the judiciary, the rise of AIDS activism and how the scientific enterprise has responded to major controversies like recombinant DNA in the 1970s and embryonic stem-cell politics today. Given that “money” is the title’s second word, Greenberg’s book devotes surprisingly little space to discussing patent law and government policy changes that have given markets a commanding purchase at–or of–the laboratory benchtop. And he barely acknowledges a growing literature documenting the various effects of market pressures on scientific research.
Corporate domination of science and technology had certainly been in Harley Kilgore’s cross-hairs. Arrangements between Germany’s I.G. Farben and Standard Oil of New Jersey, which held several key patents involving synthetic rubber, had prevented the manufacture of synthetic rubber in the United States during the 1940s. After Japan captured natural-rubber-producing regions in Asia, the United States had no choice but to break the bottleneck by dispersing Standard Oil’s patent monopoly. Kilgore, keen to prevent such patent abuses in the future, stipulated in his various bills that patents on any inventions developed with federal funding be owned by the public.
Another of Kilgore’s more farsighted visions would have mandated that his new agency study the “impacts of research… upon the general welfare” and fund extramural social science research to help link up basic research with social needs. A half-century later Philip Kitcher was lucky enough to participate in probably the government’s closest approximation of those proposals when he was appointed to the Human Genome Project’s ethical, legal and social implications advisory panel (called the ELSI working group). The working group’s charge was to set the agenda of an ELSI program that received 3-5 percent of the Human Genome Project’s total budget.
Early on, ELSI seemed an enlightened effort at reconciling the excitement of basic science with the gravity of its social implications. Its working group consisted of numerous leading scholars, many of whom were openly dubious about the project. Though ELSI was hailed by various commentators as a “new social contract between scientists and the public,” more astute observers like historian M. Susan Lindee noted skeptically the ELSI program’s tendency to “divide the project into ‘science’ and the ‘implications’ of that science. Officially outside the ELSI territory,” she argued in 1994, “is the most important question of all: Is mapping the human genome a meaningful scientific priority?… [ELSI] may appeal to leading scientists such as [James] Watson…precisely because it suggests that scientists independently generate knowledge, which…is then applied in specific social settings, where it has ‘implications.'”
Within a year of Kitcher’s appointment, the working group’s chair, Lori Andrews, resigned and the ELSI program was reorganized after a series of disputes with the Human Genome Project’s leadership. True to Lindee’s critique, Andrews’s resignation was prompted when the genome project’s leaders refused to authorize an ELSI study that would have critically reviewed a particularly volatile line of inquiry–behavioral genetics. Near his book’s end Kitcher states, “The net result of disbanding the working group seems to have been to prevent a handful of serious scholars from opening up issues [that] leaders of the genome project would rather not have aired.” So much for science’s commitment to “value free” inquiry and subversive truths.
Science and technology are not so easily disentangled; as Kitcher points out, the two are in constant dialogue with each other. Endeavors that would attempt to right technology’s untoward social effects–whether through bioethics, technology assessment or consumer advocacy–are therefore unlikely to succeed as long as they leave science unmolested. So too are efforts that refuse to countenance the social and political interests on which our science and technology are currently grounded.
Both Kitcher and Greenberg encourage us to pry science politics open. Democratic participation in scientific governance would, of course, slow discovery and innovation somewhat. But this seems a small price to pay for replacing technologies that overpower our social and ethical commitments with ones that actually empower them.