During the presidential campaign of 1964, a bit of doggerel surfaced among liberal wits, as they pondered the popularity of Barry Goldwater on certain college campuses:
We’re the bright young men,
who wanna go back to 1910,
We’re Barry’s boys!
We’re the kids with a cause,
a government like grandmama’s,
We’re Barry’s boys!
What could be more ludicrous than the spectacle of young people embracing an old reactionary who wanted to repeal the New Deal? One might as well try to revive corsets and spats. Progress in politics, as in other matters, was unstoppable.
These days the satire rings hollow; so too its hubris. Except for the spats, we really have gone back to 1910, if not earlier. The deregulation of business and the starvation of the public sector have returned us to a landscape where irresponsible capital can again roam freely, purchasing legislatures wholesale and trampling on the public interest at will. The Supreme Court has revived the late-nineteenth-century notion that corporations are people, with all the rights of citizenship that personhood entails (including the ability to convert money into free speech). This is a predictable consequence of Republican power, but what is less predictable, and more puzzling, is that the resurrection of Gilded Age politics has been accompanied throughout the culture by a resurgence of Gilded Age patterns of thought, nowhere more so than in the revival of positivism in popular scientific writing.
More a habit of mind than a rigorous philosophy, positivism depends on the reductionist belief that the entire universe, including all human conduct, can be explained with reference to precisely measurable, deterministic physical processes. (This strain of positivism is not to be confused with that of the French sociologist Auguste Comte.) The decades between the Civil War and World War I were positivism’s golden age. Positivists boasted that science was on the brink of producing a total explanation of the nature of things, which would consign all other explanations to the dustbin of mythology. Scientific research was like an Easter egg hunt: once the eggs were gathered the game would be over, the complexities of the cosmos reduced to natural law. Science was the only repository of truth, a sovereign entity floating above the vicissitudes of history and power. Science was science.
Though they often softened their claims with Christian rhetoric, positivists assumed that science was also the only sure guide to morality, and the only firm basis for civilization. As their critics began to realize, positivists had abandoned the provisionality of science’s experimental outlook by transforming science from a method into a metaphysic, a source of absolute certainty. Positivist assumptions provided the epistemological foundations for Social Darwinism and pop-evolutionary notions of progress, as well as for scientific racism and imperialism. These tendencies coalesced in eugenics, the doctrine that human well-being could be improved and eventually perfected through the selective breeding of the “fit” and the sterilization or elimination of the “unfit.”
"swipe left below to view more authors"Swipe →
The Nixonian “New York Times” Stonewalls on a Discredited Article About Hamas and Rape
The Nixonian “New York Times” Stonewalls on a Discredited Article About Hamas and Rape
Will the Heritage Foundation’s Project 2025 Turn Trumpism Into a Governing Agenda?
Will the Heritage Foundation’s Project 2025 Turn Trumpism Into a Governing Agenda?
Every schoolkid knows what happened next: the catastrophic twentieth century. Two world wars, the systematic slaughter of innocents on an unprecedented scale, the proliferation of unimaginably destructive weapons, brushfire wars on the periphery of empire—all these events involved, in various degrees, the application of scientific research to advanced technology. All showed that science could not be elevated above the agendas of the nation-state: the best scientists were as corruptible by money, power or ideology as anyone else, and their research could as easily be bent toward mass murder as toward the progress of humankind. Science was not merely science. The crowning irony was that eugenics, far from “perfecting the race,” as some American progressives had hoped early in the twentieth century, was used by the Nazis to eliminate those they deemed undesirable. Eugenics had become another tool in the hands of unrestrained state power. As Theodor Adorno and Max Horkheimer argued near the end of World War II in Dialectic of Enlightenment, the rise of scientific racism betrayed the demonic undercurrents of the positivist faith in progress. Zygmunt Bauman refined the argument forty-two years later in Modernity and the Holocaust: the detached positivist worldview could be pressed into the service of mass extermination. The dream of reason bred real monsters.
The midcentury demise of positivism was a consequence of intellectual advances as well as geopolitical disasters. The work of Franz Boas, Claude Lévi-Strauss and other anthropologists promoted a relativistic understanding of culture, which undercut scientific racism and challenged imperial arrogance toward peoples who lagged behind in the Western march of progress. Meanwhile, scientists in disciplines ranging from depth psychology to quantum physics were discovering a physical reality that defied precise definition as well as efforts to reduce it to predictable laws. Sociologists of knowledge, along with historians and philosophers of science (including Karl Mannheim, Peter Berger and Thomas Kuhn), all emphasized the provisionality of scientific truth, its dependence on a shifting expert consensus that could change or even dissolve outright in light of new evidence. Reality—or at least our apprehension of it—could be said to be socially constructed. This meant that our understanding of the physical world is contingent on the very things—the methods of measurement, the interests of the observer—required to apprehend it.
None of this ferment discredited the role of science as a practical means of promoting human well-being: midcentury laboratories produced vaccines and sulfa drugs as well as nuclear weapons. Nor did it prove the existence (or even the possibility) of God, as apologists for religion sometimes claimed. But it did undermine the positivist faith in science as a source of absolute certainty and moral good. As ethical guides, scientists had proved to be no more reliable than anyone else. Apart from a few Strangelovian thinkers (the physicist Edward Teller comes to mind), scientists retreated from making ethical or political pronouncements in the name of science.
* * *
During the past several decades, there has been a revival of positivism alongside the resurgence of laissez-faire economics and other remnants of late-nineteenth-century social thought. E.O. Wilson’s Sociobiology (1975) launched pop-evolutionary biologism on the way to producing “evolutionary psychology”—a parascience that reduces complex human social interactions to adaptive behaviors inherited from our Pleistocene ancestors. Absence of evidence from the Pleistocene did not deter evolutionary psychologists from telling Darwinian stories about the origins of contemporary social life. Advances in neuroscience and genetics bred a resurgent faith in the existence of something called human nature and the sense that science is on the verge of explaining its workings, usually with reference to brains that are “hard-wired” for particular kinds of adaptive, self-interested behavior. In the problematic science of intelligence testing, scientific racism made a comeback with the publication of Richard Herrnstein and Charles Murray’s The Bell Curve in 1994.
This resurgent positivism provoked ferocious criticism, most of it serious and justified. Stephen Jay Gould took dead aim at what he called “Darwinian Fundamentalism,” arguing that strict adaptationist accounts of evolutionary thought presented “a miserly and blinkered picture of evolution,” impoverished not only by the lack of evidence but also by the reductionist tendency to insist on the simplest possible explanation for the complexities of human and animal behavior. Other critics—Noam Chomsky, Richard Lewontin—joined Gould in noting the tendency of Darwinian fundamentalists to “prove” adaptationist arguments by telling “just-so stories.” These are narratives about evolution based on hypotheses that are plausible, and internally consistent with the strict adaptationist program, but lacking the essential component of the scientific method: falsifiability. This was a powerful argument.
Within the wider culture, however, reductionism reigned. Hardly a day went by without journalists producing another just-so story about primitive life on the savanna thousands of years ago, purporting to show why things as they are have to be the way they are. In these stories, the parched fruits of a mirthless and minor imagination, all sorts of behavior, from generals’ exaggerations of their armies’ strength to the promiscuity of powerful men, could be viewed as an adaptive strategy, embedded in a gene that would be passed on to subsequent generations. In the late twentieth century, as in the late nineteenth, positivism’s account of human behavior centered on the idea that the relentless assertion of advantage by the strong serves the evolutionary interests of the species. Positivism remained a mighty weapon of the status quo, ratifying existing arrangements of wealth, power and prestige.
The terrorist attacks of September 11, 2001, injected positivism with a missionary zeal. “Once I had experienced all the usual mammalian gamut of emotions, from rage to nausea, I also discovered that another sensation was contending for mastery,” Christopher Hitchens wrote several months after 9/11. “On examination, and to my own surprise and pleasure, it turned out to be exhilaration. Here was the most frightful enemy—theocratic barbarism—in plain view…. I realized that if the battle went on until the last day of my life, I would never get bored in prosecuting it to the utmost” [see “Images in a Rearview Mirror,” December 3, 2001]. Putting aside the question of how Hitchens intended to “prosecute” this battle other than pontificating about it, and the irrelevance of his boredom to dead and maimed soldiers and civilians, one cannot deny that he embraced, from a safe distance, the “war on terror” as an Enlightenment crusade. He was not alone. Other intellectuals fell into line, many holding aloft the banner of science and reason against the forces of “theocratic barbarism.” Most prominent were the intellectuals the media chose to anoint, with characteristic originality, as the New Atheists, a group that included Hitchens, Daniel Dennett, Richard Dawkins and Sam Harris. In the shadow of 9/11, they were ready to press the case against religion with renewed determination and fire.
Atheism has always been a tough sell in the United States. In Europe, where for centuries religious authority was intertwined with government power, atheists were heroic dissenters against the unholy alliance of church and state. In the United States, where the two realms are constitutionally separate, Protestant Christianity suffused public discourse so completely in the late nineteenth and early twentieth centuries that some positivists felt the need to paper over their differences with religion. US politics has frequently been flooded by waves of Christian fervor. Sometimes religion has bolstered the forces of political sanctimony and persecution, as with Prohibition in the 1920s and anticommunism during the cold war; but it has also encouraged dissenters to speak truth to power—to abolish slavery, to regulate capitalism, to end the Vietnam War.
The Christian right, which had risen to prominence in the late twentieth century, provided an unprecedented target for New Atheists’ barbs. Here was a particularly noxious form of religion in American politics—more dangerous than the bland piety of politicians or the ad nauseam repetition of “God Bless America.” From the Reagan administration to that of George W. Bush, the Christian right succeeded in shifting political debates from issues of justice and equality to moral and cultural questions, often persuading working-class voters to cast ballots for candidates whose policies undercut their economic interests. Rage about abortion and same-sex marriage drowned out discussion of job security and tax equity. Fundamentalist Christians denied global warming and helped to derail federal funding for stem-cell research. Most catastrophically, they supplied the language of Providence that sanctified Bush’s “war on terror” as a moral crusade.
Still, it remains an open question how much this ideological offensive depended on religious dogma, and how much it was the work of seasoned political players, such as plutocrats bent on deregulating business and dismantling progressive taxation, corporate-sponsored media eager to curry favor with the powerful and military contractors hoping to sup at the public trough. Even the rhetoric of Providential mission owed more to romantic nationalism than to orthodox Christianity, which has long challenged the cult of the nation-state as a form of idolatry.
* * *
The New Atheists did not bother with such nuance. Hitchens and Harris, in particular, wasted no time enlisting in Bush’s crusade, which made their critique of religion selective. It may have targeted Christianity and occasionally Judaism, but hatred and fear of Islam was its animating force. Despite their disdain for public piety, the New Atheists provided little in their critique to disturb the architects and proselytizers of American empire: indeed, Hitchens and Harris asserted a fervent rationale for it. Since 9/11, both men have made careers of posing as heroic outsiders while serving the interests of the powerful.
Of the two, Harris has the more impressive credentials. In addition to being a prolific pundit on websites, a marquee name on the lecture circuit and the author of three popular books, The End of Faith (2004), Letter to a Christian Nation (2006) and The Moral Landscape (2010), he is a practicing neuroscientist who emerges from the lab to reveal the fundamental truths he claims to have learned there. Chief among them are the destructive power of religion, which Harris always defines in the most literal and extreme terms, and the immediate global threat of radical Islam. Everything can be explained by the menace of mobilized religious dogma, which is exacerbated by liberal tolerance. Stupefied by cultural relativism, we refuse to recognize that some ways of being in the world—our own especially—are superior to others. As a consequence, we are at the mercy of fanatics who will stop at nothing until they “refashion the societies of Europe into a new Caliphate.” They are natural-born killers, and we are decadent couch potatoes. Our only defense, Harris insists, is the rejection of both religion and cultural relativism, and the embrace of science as the true source of moral value.
Harris claims he is committed to the reasonable weighing of evidence against the demands of blind faith. This is an admirable stance, but it conceals an absolutist cast of mind. He tells us that because “the well-being of conscious [and implicitly human] creatures” is the only reliable indicator of moral good, and science the only reliable means for enhancing well-being, only science can be a source of moral value. Experiments in neuroimaging, Harris argues, reveal that the brain makes no distinction between judgments of value and judgments of fact; from this finding he extracts the non sequitur that fact and value are the same. We may not know all the moral truths that research will unearth, but we will soon know many more of them. Neuroscience, he insists, is on the verge of revealing the keys to human well-being: in brains we trust.
To define science as the source of absolute truth, Harris must first ignore the messy realities of power in the world of Big Science. In his books there is no discussion of the involvement of scientists in the military-industrial complex or in the pharmacological pursuit of profit. Nor is any attention paid to the ways that chance, careerism and intellectual fashion can shape research: how they can skew data, promote the publication of some results and consign others to obscurity, channel financial support or choke it off. Rather than provide a thorough evaluation of evidence, Harris is given to sweeping, unsupported generalizations. His idea of an argument about religious fanaticism is to string together random citations from the Koran or the Bible. His books display a stunning ignorance of history, including the history of science. For a man supposedly committed to the rational defense of science, Harris is remarkably casual about putting a thumb on the scale in his arguments.
If we evaluate those arguments according to their resonance with public policy debates, the results are sobering. Harris’s convictions reveal his comfortable cohabitation with imperial power. From him we learn, among other things, that torture is just another form of collateral damage in the “war on terror”—regrettable, maybe, but a necessary price to pay in the crucial effort to save Western civilization from the threat of radical Islam. We also learn that pacifism, despite its (allegedly) high moral standing, is “immoral” because it leaves us vulnerable to “the world’s thugs.” As in the golden age of positivism, a notion of sovereign science is enlisted in the service of empire. Harris dispenses with the Christian rhetoric of his imperialist predecessors but not with their rationalizations for state-sponsored violence. Posing as a renegade on the cutting edge of scientific research and moral enlightenment, Harris turns out to be one of the bright young men who want to go back to 1910.
* * *
The End of Faith, written in the wake of 9/11, bears all the marks of that awful time: hysteria, intolerance, paranoia; cankered demands for unity and the demonization of dissent. The argument is simple: the attacks on the World Trade Center awakened us to the mortal danger posed by dogmatic religion. Enlightened atheists must take up Voltaire’s challenge and crush the infamous thing at last—with the weight of scientific arguments if possible, with the force of military might if necessary. Though The End of Faith includes a chapter of complaint about the Christian right and Bush’s God-intoxicated White House, Harris singles out Islam as his enemy: “Anyone who says that the doctrines of Islam have ‘nothing to do with terrorism’…is just playing a game with words.”
The politics of Harris’s argument are rooted in the Manichaean moralism of Samuel Huntington’s 1993 article in Foreign Affairs about the “clash of civilizations” between the West and an emerging “Islamic-Confucian” civilization. Huntington may have been wrong about the Confucian element, but his apocalyptic dualism fed the revenge fantasies of the post-9/11 United States. Harris endorses Huntington’s argument uncritically, with characteristic indifference to historical evidence: “One need only read the Koran to know,” he tells us, that Huntington was right. I am reminded of my fellow naval officers’ insistence, during the Vietnam War, that one need only read The Communist Manifesto to ascertain the Kremlin’s blueprint for world domination.
Harris’s tunnel vision leads him to overlook the roots of radical Islam, including the delusion of a revived caliphate, in the twentieth-century politics of imperial rivalries and anti-imperial resistance. (Indeed, under scrutiny, Islamic jihad is looking less like a revolutionary religious movement and more like the guerrilla fantasy of some angry young Arab men—educated, unemployed and humiliated by actual or imagined imperial arrogance. Radical Islam often provides an idiom for their anger, but its centrality has been exaggerated.) Terrorism is not linked to poverty, oppression or humiliation, Harris insists: the world is full of poor people who are not terrorists. Terrorism is the rough beast of Islam, which is “undeniably a religion of conquest.” Our choices are clear: “The West must either win the argument [with Muslim orthodoxy] or win the war. All else will be bondage.” Ironically, “the only thing that currently stands between us and the roiling ocean of Muslim unreason is a wall of tyranny and human rights abuses [in Arab countries] that we have helped to erect.” It is time to remake the Middle East in the name of science and democracy, to convert the Muslim believers to unbelief and save them from themselves. The recent, extraordinary revolution in Egypt, a nationwide, nonsectarian call for democratic reform and a more equitable distribution of resources, underscores the provincial arrogance of this perspective.
But the intellectual problems run deeper. The conceptual muddle at the core of Harris’s argument is directly traceable to Huntington’s essay. Groping for a global conflict to replace the recently ended cold war, Huntington fell into the fatal error of confusing civilizations with nations. As William Pfaff reminds us, “Islamic civilization is huge”:
Nearly all of the Muslim nations except Iran…conduct normal political and economic relations with most if not all of the Western countries. The notion that the members of this global religious civilization are at “war” with Western civilization, or are vulnerable to political radicalization by a few thousand Arab mujahideen because of Middle Eastern and South Asian political issues—of which most of the global Muslim population knows little—is a Western fantasy.
Fantastic as it is, the vision of Armageddon appeals to the longing for clarity and certainty at the heart of the positivist sensibility. “All pretensions to theological knowledge should now be seen from the perspective of a man who was just beginning his day on the one hundredth floor of the World Trade Center on the morning of September 11, 2001,” Harris writes. That is a pretty limited perspective.
Harris is as narrow in his views as the believers he condemns. Consider his assault on “the demon of relativism,” which, he declares, leaves us unprepared to face our ignorant tribal adversaries and robs us of the moral resources needed to prevail in the Armageddon against unreason. This conviction stems from a profound ignorance of philosophy. Harris finds it “interesting” that Sayyid Qutb, Osama bin Laden’s favorite thinker, felt that philosophical pragmatism “would spell the death of American civilization.” Pragmatism causes its devotees “to lose the conviction that you can actually be right—about anything,” Harris announces. One can only imagine the astonishment of pragmatists such as William James, who opposed America’s imperial adventures in Cuba and the Philippines, or John Dewey, a staunch defender of progressive education, if told that their inclination to evaluate ideas with respect to their consequences somehow prevented them from holding convictions. For Harris, pragmatism and relativism undermine the capacity “to admit that not all cultures are at the same stage of moral development,” and to acknowledge our moral superiority to most of the rest of the world. By preventing us from passing judgment on others’ beliefs, no matter how irrational, “religious tolerance” has become “one of the principal forces driving us toward the abyss.” Harris treats the recognition of legitimate moral differences as a sign of moral incompetence, and it is this sort of posturing that has cemented the New Atheists’ reputation for bold iconoclasm.
* * *
Harris’s argument against relativism is muddled and inconsistent on its own terms, but it is perfectly consistent with the aims of the national security state. It depends on the assumption that Americans (and “the West”) exist on a higher moral plane than just about anyone else. “As a culture, we have clearly outgrown our tolerance for the deliberate torture and murder of innocents,” Harris writes in The End of Faith. “We would do well to realize that much of the world has not.” He dismisses equations of state-sponsored violence (which creates collateral damage) and terrorist violence (which deliberately targets civilians): “Any honest witness to current events will realize that there is no moral equivalence between the kind of force civilized democracies project in the world, warts and all, and the internecine violence that is perpetrated by Muslim militants, or indeed by Muslim governments.” He asks critics of civilian casualties in the Iraq War to imagine if the situation were reversed, and the Iraqi Republican Guard had invaded Washington. Do they think Iraqis would have taken as great care to spare civilians as the Americans did? “We cannot ignore human intentions. Where ethics are concerned, intentions are everything.”
One would think that Harris’s intentionalism would have him distinguish between the regrettable accidents of collateral damage and the deliberate cruelty of torture. But after invoking a series of fantastic scenarios ranging from the familiar ticking time bomb to demonic killers preparing to asphyxiate 7-year-old American girls, Harris concludes that the larger intentions animating torture can be as noble as those that cause collateral damage: there is “no ethical difference” between them, he says. Torture, from this bizarrely intentionalist view, is somehow now a form of collateral damage. Both are necessary tactics in a fight to the death against Islamic unreason. “When your enemy has no scruples, your own scruples become another weapon in his hand,” Harris writes. “We cannot let our qualms over collateral damage paralyze us because our enemies know no such qualms.” Most treacherous are the qualms of pacifists, whose refusal to fight is really “nothing more than a willingness to die, and to let others die, at the pleasure of the world’s thugs.” (Reading this passage, one can’t help wondering why in 2005 PEN bestowed its Martha Albrand Award for First Nonfiction upon The End of Faith.) Given the implacable opposition between Islam and Western modernity, “it seems certain that collateral damage, of various sorts, will be a part of our future for many years to come.” It is the endless war against evil, the wet dream of every armchair combatant from Dick Cheney to Norman Podhoretz.
The only difference is that, unlike those pious gents, Harris dismisses not only Islam but also all the Western monotheisms as “dangerously retrograde” obstacles to the “global civilization” we must create if we are to survive. His critique of religion is a stew of sophomoric simplifications: he reduces all belief to a fundamentalist interpretation of sacred texts, projecting his literalism and simple-mindedness onto believers whose faith may foster an epistemology far more subtle than his positivist convictions. Belief in scriptural inerrancy is Harris’s only criterion for true religious faith. This eliminates a wide range of religious experience, from pain and guilt to the exaltation of communal worship, the ecstasy of mystical union with the cosmos and the ambivalent coexistence of faith and doubt.
But Harris is not interested in religious experience. He displays an astonishing lack of knowledge or even curiosity about the actual content of religious belief or practice, announcing that “most religions have merely canonized a few products of ancient ignorance and derangement and passed them down to us as though they were primordial truths.” Unlike medicine, engineering or even politics, religion, being “the mere maintenance of dogma,” is “one area of discourse that does not admit of progress.” Religion keeps us anchored in “a dark and barbarous past,” and what is generally called sacred “is not sacred for any reason other than that it was thought sacred yesterday.” Harris espouses the Enlightenment master narrative of progress, celebrating humans’ steady ascent from superstition to science; no other sort of knowledge, still less wisdom, will do.
There is one religious practice Harris does admit to tolerating: Buddhist meditation, which allows one to transcend mind-body dualism and view the self as process. Only the wisdom of the East offers any access to this experience of self, Harris insists, as he tosses off phrases plucked at random from a Zen handbook. Given the persistent popularity of the wisdom of the East among the existential homeless of the West, the exemption Harris grants Buddhism is perfectly predictable, as is his thoroughgoing ignorance of Western intellectual tradition. “Thousands of years have passed since any Western philosopher imagined that a person should be made happy, peaceful, or even wise, in the ordinary sense, by his search for truth,” Harris proclaims, ignoring Montaigne, Erasmus, Ignatius of Loyola, Thomas Merton, Martin Buber, Meister Eckhart and a host of other Protestants, Catholics, Jews and humanists. Harris’s lack of curiosity complements his subservience to cultural fashion.
* * *
Similar weaknesses abound in Letter to a Christian Nation, in which Harris taunts the many Christians infuriated by his first book. Harris admits up front that “the ‘Christian’ I address throughout is a Christian in a narrow sense of the term.” Aiming comfortably at this caricature, he repeats his insistence that there is a fatal clash of civilizations afoot, between Islam and the West but also between science and religion. Armageddon still looms.
This screed is striking only because it affirms Harris’s positivistic fundamentalism with exceptional clarity. “When considering the truth of a proposition,” he writes, “one is either engaged in an honest appraisal of the evidence and logical arguments, or one isn’t.” But consider the ambiguities of statistical research in Harris’s field of brain science, in particular the difficulty psychologists and neuroscientists have had in replicating results over time—a development recently surveyed by Jonah Lehrer in The New Yorker. Lehrer discusses the research of the psychologist Jonathan Schooler, who revealed that describing a face makes recognizing it more difficult rather than easier. This phenomenon is called the “verbal overshadowing” effect, which became big news in the scientific study of memory, and made Schooler’s career. But over time, and despite scrupulous attention to detail and careful design of his experiments, Schooler found it increasingly difficult to replicate his earlier findings. His research had run aground on the baffling “decline effect” that scientists have struggled with for decades, a result (or nonresult) that suggests that there may be disturbing limitations to the scientific method, at least in the statistically based behavioral sciences.
Some decline effects can arise from less mysterious sources, beginning with the vagaries of chance and the statistical drift toward the mean. But in other cases, Lehrer explains, statistical samples can change over time. Drugs such as the “second-generation antipsychotics” Abilify, Seroquel and Zyprexa are initially tested on schizophrenics, then, after passing clinical trials, prescribed to people with milder symptoms, for whom they are less effective. Conceptual foundations for research can also be shaky, such as the notion that female swallows choose male mates on the basis of their “symmetry.” The questions arise: How does one precisely measure a symmetrical arrangement of feathers? At what point does symmetry end and asymmetry begin? These sorts of problems make replicating results more difficult, and the difficulties are compounded by the standard practices of professional science. Initial research success is written up for scientific journals, rewarded with grants and promotions, and reported to credulous nonscientists; subsequent failures to replicate results remain largely invisible—except to the researchers, who, if they are honest in their appraisal of the evidence, find it hard to accept simple-minded notions of statistically based certainty. The search for scientific truth is not as straightforward as Harris would like to believe.
Methodological and professional difficulties of this sort do not clutter The Moral Landscape, Harris’s recent effort to fashion a science of ethics. Incredibly, nearly a decade after 9/11, Harris continues to dwell on the fear of Muslim extremists establishing a new caliphate across Europe, making unreason the law of the land and forcing Parisian shopgirls to wear burqas. In looking for examples of religious barbarism, Harris always turns first to what he calls “the especially low-hanging fruit of conservative Islam.” But his main target in this book is the “multiculturalism, moral relativism, political correctness, tolerance even of intolerance” that hobbles “the West” in its war against radical Islam.
He is especially offended by anthropology. Too often, he says, “the fire-lit scribblings of one or another dazzled ethnographer” have sanctioned some destructive practice (human sacrifice, female genital mutilation) by explaining its adaptive or social function. At their worst, ethnographers have created a cult of the noble savage that celebrates primitive cultures we should rightfully scorn. His scornfulness aside, Harris is not wrong about ethnographic sentimentality, but he thoroughly misunderstands cultural relativism. He seems to think it means cultivating a bland indifference to ethical questions rather than making a serious effort to understand ethical perspectives radically different from our own without abandoning our moral commitments. He is ignorant of the relevant anthropological literature on the subjects that vex him the most, such as Hanna Papanek’s study of Pakistani women, which described the burqa as “portable seclusion,” a garment that allowed women to go out into the world while protecting them from associating with unrelated men. As the anthropologist Lila Abu-Lughod writes, the burqa is a “mobile home” in patriarchal societies where women are otherwise confined to domestic space. Harris cannot imagine that Muslim women might actually choose to wear one; but some do. Nor is he aware of the pioneering work of Christine Walley on female genital mutilation in Africa. Walley illuminates the complex significance of the practice without ever expressing tolerance for it, and she uses cross-cultural understanding as a means of connecting with local African women seeking to put an end to it.
Harris’s version of scientific ethics does not allow for complexity. In The Moral Landscape, he describes his philosophical position as a blend of moral realism (“moral claims can really be true or false”) and consequentialism (“the rightness of an act depends on how it impacts the well-being of conscious creatures”). He does not explain why he has abandoned the intentionalism he espoused in The End of Faith. Nor does he spell out how his newfound consequentialism can allow him to maintain his justification of collateral damage (which surely “impacts the well-being of conscious creatures”), or how his new view differs from the pragmatism he had previously condemned. Pragmatism, the argument that ideas become true or false as their impact on the world unfolds, is nothing if not consequentialist.
* * *
Competing philosophical claims merge, for Harris, in “the moral brain.” Moral truth is not divine in origin, nor is it merely a product of “evolutionary pressure and cultural invention.” It is a scientific fact. Or it soon will be: “The world of measurement and the world of meaning must eventually be reconciled.” This is not an argument for Western ethnocentrism, Harris insists, but rather for the idea “that the most basic facts about human flourishing must transcend culture, just as most other facts do.” No one can dispute the desirability of human flourishing or the possibility that neuroscience may lead us closer to it. But the big questions always lead outward from the brain to the wider world. If altruism has an innate biological basis, as some research suggests, how can societies be made to enhance it rather than undermine it?
Harris’s reductionism leads him in the opposite direction. His confidence in scientific ethics stems from his discovery that “beliefs about facts and beliefs about values seem to arise from similar processes at the level of the brain.” Much of The Moral Landscape is devoted to teasing inferences from this finding. Sometimes Harris merely belabors the obvious. For instance, he points out that the medial prefrontal cortex (MPFC), which records feelings of reward and “self-relevance,” also registers the difference between belief and disbelief. When research subjects are presented with a moral dilemma—to save five people by killing one—the prospect of direct personal involvement more strongly activates brain regions associated with emotion. As Harris observes, “pushing a person to his death is guaranteed to traumatize us in a way that throwing a switch will not.” We do not need neuroscience to confirm the comparative ease of killing at a distance: Zygmunt Bauman’s work on the Holocaust, along with many other studies, demonstrated this decades ago.
More commonly, though, Harris depends on the MPFC to make more provocative claims. He says nothing about the pool of test subjects or the methods used to evaluate evidence in these experiments. Instead he argues by assertion. As he writes, “involvement of the MPFC in belief processing…suggests that the physiology of belief may be the same regardless of a proposition’s content. It also suggests that the division between facts and values does not make much sense in terms of underlying brain function.” This is uncontroversial but beside the point. The nub of the matter is not the evaluation of the fact-value divide “in terms of underlying brain function” but the conscious fashioning of morality. Harris is undaunted. He asks, “If, from the point of view of the brain, believing ‘the sun is a star’ is importantly similar to believing ‘cruelty is wrong,’ how can we say that scientific and ethical judgments have nothing in common?” But can the brain be said to have a “point of view”? If so, is it relevant to morality?
There is a fundamental reductionist confusion here: a shared biological origin does not entail a shared cultural or moral significance. If anything, Harris shows that the brain cannot distinguish between facts and values, and that the elusive process of moral reasoning is not reducible to the results of neuroimaging. All we are seeing, here and elsewhere, is that “brain activity” increases or decreases in certain regions of the brain during certain kinds of experiences—a finding so vague as to be meaningless. Yet Harris presses forward to a grandiose and unwarranted conclusion: if the fact-value distinction “does not exist as a matter of human cognition”—that is, measurable brain activity—then science can one day answer the “most pressing questions of human existence”: Why do we suffer? How can we be happy? And is it possible to love our neighbor as ourselves?
These high-minded questions conceal a frightening Olympian agenda. Harris is really a social engineer, with a thirst for power that sits uneasily alongside his allegedly disinterested pursuit of moral truth. We must use science, he says, to figure out why people do silly and harmful things in the name of morality, what kinds of things they should do instead and how to make them abandon their silly and harmful practices in order “to live better lives.” Harris’s engineering mission envelops human life as a whole. “Given recent developments in biology, we are now poised to consciously engineer our further evolution,” he writes. “Should we do this, and if so, in which ways? Only a scientific understanding of the possibilities of human well-being could guide us.” Harris counsels that those wary of the arrogance, and the potential dangers, of the desire to perfect the biological evolution of the species should observe the behavior of scientists at their professional meetings: “arrogance is about as common at a scientific conference as nudity.” Scientists, in Harris’s telling, are the saints of circumspection.
* * *
If that’s true, then Harris breaks the mold. Nowhere is this clearer, or more chilling, than in his one extended example of a specific social change that could be effected by scientific ethics. Convinced that brain science has located the biological sources of “bias”—the areas of the brain that cause us to deviate from the norms of factual and moral reasoning—Harris predicts that this research will lead to the creation of foolproof lie detectors. He does not say how these devices will be deployed. Will they be worn on the body, implanted in the brain, concealed in public locations? What he does say is that they will be a great leap forward to a world without deception—which, we must understand, is one of the chief sources of evil. “Whether or not we ever crack the neural code, enabling us to download a person’s private thoughts, memories, and perceptions without distortion,” he declares, the detectors will “surely be able to determine, to a moral certainty, whether a person is representing his thoughts, memories, and perceptions honestly in conversation.” (As always, the question arises, who are “we”?) Technology will create a brave new world of perfect transparency, and legal scholars who might worry about the Fifth Amendment implications are being old-fashioned. The “prohibition against compelled testimony itself appears to be a relic of a more superstitious age,” Harris writes, when people were afraid “that lying under oath would damn a person’s soul for eternity.” He does admit that because “no technology is ever perfect,” it’s likely that a few innocent people will be condemned; but the courts do that already, he notes, and besides, deception will have become obsolete. Rarely in all his oeuvre has Harris’s indifference to power and its potential abuse been more apparent or more abominable.
Maybe this explains why Harris remains an optimist despite all the “dangerously retrograde” orthodoxies on the loose. Moral progress is unmistakable, he believes, at least in “the developed world.” His chief example is how far “we” have moved beyond racism. Even if one accepts this flimsy assertion, the inconvenient historical fact is that, intellectually at least, racism was undone not by positivistic science, which underwrote it, but by the cultural relativism Harris despises. Ultimately his claims for moral progress range more widely, as he reports that “we” in “the developed world” are increasingly “disturbed by our capacity to do one another harm.” What planet does this man live on? Besides our wars in Afghanistan and Iraq, “we” in the United States are engaged in a massive retreat from the welfare state and from any notion that we have a responsibility to one another or to a larger public good that transcends private gain. This retreat has little to do with Islamic radicalism or the militant piety of the Christian right, though the latter does remain a major obstacle to informed debate. The problem in this case is not religion. Despite the fundamental (or perhaps even innate) decency of most people, our political and popular culture does little to encourage altruism. The dominant religion of our time is the worship of money, and the dominant ethic is “To hell with you and hooray for me.”
Harris is oblivious to this moral crisis. His self-confidence is surpassed only by his ignorance, and his writings are the best argument against a scientific morality—or at least one based on his positivist version of science and ex cathedra pronouncements on politics, ethics and the future of humanity. In The Moral Landscape he observes that people (presumably including scientists) often acquire beliefs about the world for emotional and social rather than cognitive reasons: “It is also true that the less competent a person is in a given domain, the more he will tend to overestimate his abilities. This often produces an ugly marriage of confidence and ignorance that is very difficult to correct for.” The description fits Harris all too aptly, as he wanders from neuroscience into ethics and politics. He may well be a fine neuroscientist. He might consider spending more time in his lab.