Brave Neuro World

As neurotechnology expands our ability to rejuvenate aging brains, rebound from trauma and enhance mood or sexual prowess, we need a coherent neuroethics to govern how that technology should be used.

Research support for this article was provided by the Investigative Fund of The Nation Institute.

The accident happened during the construction of a railroad in Vermont, in 1848, and it happened fast: A three-foot-long tamping iron sparked an explosion, shot skyward and sailed through the frontal cortex of the project’s foreman, Phineas Gage. Gage, famously, got a whole new personality, and students of the brain got perhaps their most iconic case study. In transforming Gage from the amiable and responsible person he had been before the accident to the temperamental and bawdy one he became after, the iron bar also drilled a hole in Cartesian dualism, the intuitive distinction we all make between our minds and our brains. As the foreman had the misfortune to demonstrate, altering the physical brain can alter personality, behavior, mood–virtually everything we think of as constituting our essential (and incorporeal) self.

Scientifically fruitful construction accidents happen only so often, thankfully, and brain research has traditionally been hamstrung by ethical constraints on experimenting with human subjects. In recent years, however, scientists have developed minimally invasive and comparatively benign techniques for exploring–and altering–the brain. Like advances in genetics (another field that investigates the biological substrata of selfhood), these developments raise significant philosophical, legal and ethical issues. Yet while genetics has spawned a robust watchdog industry, complete with academic departments, annual conferences and dedicated funding, neuroscience currently receives far less scrutiny.

Ultimately, though, neuroscience may raise even more troubling ethical issues, for the simple reason that it is easier to predict and control behavior by manipulating neurons than by manipulating genes. Even if all ethical and practical constraints on altering our DNA vanished tomorrow, we’d have to wait for years (or decades) to see the outcome of genetic experiments–and all the while environmental factors would confound our tinkering. Intervening on the brain, by contrast, can produce startlingly rapid results, as anyone knows who has ever downed too many margaritas or, for that matter, too many chocolate-covered coffee beans.

Caffeine and tequila are helpful reminders that, one way or another, we have been meddling with our brains since time immemorial. But the latest developments in neuroscience are sufficiently novel–different from coffee, and also different from cloning–to require a rethinking of both personal and social ethics. Broadly speaking, these developments can be divided into those technologies that seek to map the brain and those that seek to alter it.

I. Mapping Brains

Tools for peering inside the human body are not new–nor are trepidations about them. When X-rays were discovered, doomsayers invoked Faust and Frankenstein, pundits fretted about privacy and entrepreneurial types began hawking X-ray-proof underwear. In retrospect those reactions seem unwarranted, but sometimes technologies that change how we see ourselves do have profound repercussions. Sonograms, for instance, fundamentally altered the abortion debate by changing our understanding of fetal development. Likewise, new technologies for mapping the brain will invite new interpretations of human intellect, agency and behavior.

These technologies, which include positron emission tomography (PET scans) and functional magnetic resonance imaging (fMRI), work by identifying the brain areas involved in performing a given task–recognizing faces, making decisions, recalling memories. Scientists are currently using these tools to search for the neurological underpinnings of virtually the entire sweep of human experience: the propensity for violence; the capacity for cooperation; conscious or unconscious racial attitudes and sexual preferences; religious feeling; truth-telling versus lying; real memories versus false ones; and personality traits such as extroversion, pessimism, risk aversion and empathy.

“It sounds like science fiction. You know, high-tech phrenology–we’re going to scan your head and see what kind of person you are,” acknowledges Martha Farah, director of the University of Pennsylvania’s Center for Cognitive Neuroscience. In the 1970s imaging was basically science fiction; the best technology merely confirmed known facts about the brain. By the 1980s, though, scientists were isolating parts of the brain involved in complex cognition and, a decade later, moods and emotions. Recently, imaging made another crucial leap–from providing information about the average brain to providing information about a specific brain. That is, it began to reveal differences in how individual people think and feel. “We still can’t stick someone in a scanner and say, ‘This person has an intelligence score of such-and-such,’” Farah says. “But increase the predictive power by a factor of two, and I think you’ll get applications beyond the laboratory.”

Actually, at least one such application already exists, although it relies on an older technology, electroencephalography (EEG). “Brain fingerprinting,” a technique patented by neuroscientist-cum-entrepreneur Lawrence Farwell, determines whether a subject recognizes information by tracking electric waves called P300s, which the brain emits in response to familiar stimuli. In 2000 Farwell conducted a brain fingerprinting test on Terry Harrington, who was serving a life sentence for murder in Iowa, and found that Harrington did not recognize details the killer would have known. The district court admitted the results as evidence at a post-conviction hearing. (Harrington’s conviction was ultimately overturned on other grounds.) Farwell is now marketing brain fingerprinting for medical applications, corporate security, advertising, criminal justice and counterterrorism.

Technologies that glean information directly from our brains give some people the willies, but in essence brain fingerprinting is simply a fancy lie detector–or, in the case of advertising applications, a fancy focus group. The equipment may be new, but the ethical issues it raises are old hat: concerns about accuracy, privacy and the right of suspects to demand testing. These issues are urgent, but our society has both precedents and means for handling them.

However, neuro-imaging does raise a novel ethical issue for our justice system–one that is subtler but potentially farther-reaching than the specter of mind reading. Criminal law is inherently interested in mental states, and specifically in the mens rea, the guilty mind. If I am charged with a crime, the court doesn’t merely consider whether I committed the act; it also attempts to establish whether I did so of my own free will. If I acted in self-defense or under extreme coercion, I am deemed innocent. Likewise, if I am judged not wholly competent (e.g., because of mental impairment, childhood abuse or alcohol addiction), I will generally be treated with greater leniency. These mitigating factors are understood as a kind of internal coercion–not the proverbial gun to the head but rather a gun in the head: a brain state that I cannot control and that, on the contrary, controls me. In determining my guilt or innocence, then, the court must decide when I am in control of my actions–that is, when my brain stops running the show and my nonbrain essence, my “me,” takes over. But neuroscience brooks no distinction between me and the physical processes of my brain. It therefore rejects the notion of a freely willed act, because I have no “will” above and beyond the neurochemical reactions that make me tick.

This is hardly a new contention–philosophers have been playing free-will tug-of-war for ages–but courts that are indifferent to Spinoza tend to listen to science. “Arguments are nice, but physical demonstrations are far more compelling,” write Princeton psychologists Joshua Greene and Jonathan Cohen in the journal Philosophical Transactions of the Royal Society of London. It’s one thing to tell a jury that the accused had a troubled childhood, or even to note that early abuse imperils development. It’s another thing to explain the exact mechanism that renders a specific person incapable of judgment, empathy or impulse control.

In locating criminality in our brain chemistry–rather than in a corrupted soul or a malignant heart–neuroscience could help forge a less punitive justice system. But that is hardly a foregone conclusion. “It could make people think [criminals] deserve help–the trend we’ve seen with alcoholism,” says Eric Parens, senior research scholar at the Hastings Center, a bioethics institute in New York. But, he cautions, “it could just as easily be interpreted to suggest that they are bad to the bone and should be locked away forever.”

The former outcome is presumably preferable to progressives, but it raises its own troubling question: If bad brains cause bad acts, does the law have the right to try to make bad brains better? Imagine, for a moment, the Phineas Gage experiment in reverse: altering the brain to transform a cantankerous lech into a responsible citizen. If we had the power to effect that transformation, would we also have the right?

II. Changing Brains

Or what if we had other powers? What if we could sleep less, rejuvenate our aging brains, rebound quickly from emotional trauma, improve our memories, regulate our moods, enhance our sexual response? Increasingly, we do have those powers, thanks to neurological interventions that range from psychopharmaceuticals to surgery to brain-machine interfaces.

Currently, the most famous neuro-interventions are SSRIs (the class of antidepressants that includes Prozac and Zoloft) and the anti-ADD medication Ritalin. But rivals are on the way. Consider, for instance, the drug modafinil. Marketed in the United States as Provigil, modafinil was developed to treat narcolepsy, but doctors and patients quickly realized that it enabled healthy people to stay awake for far longer than normal–anecdotally, for more than three days. “Suddenly, narcoleptics had a lot of friends,” Martha Farah says with a laugh.

Who can deny the allure of modafinil? Not surgeons, one suspects, or long-haul truck drivers, or military personnel on multiday missions–or, for that matter, journalists on deadline. Modafinil’s intended use, like that of Prozac and Ritalin before it, is dwarfed by its nonmedical potential. Because many therapies that help the sick can also benefit the healthy, this slippery slope from treatment to enhancement is a defining feature of the neurotechnology landscape. Thus Alzheimer’s drugs could improve normal memory, brain-machine interfaces for Lou Gehrig’s disease patients could be adapted for Air Force pilots and modafinil could make workaholics of us all.

Here, however, are some other salient details about modafinil: There are no long-term studies of its effects, its mechanism remains mysterious and the role of sleep in regulating human health is largely unknown. These facts point to the most basic ethical concern about neurotechnologies–to wit, their safety.

Arthur Caplan, director of the University of Pennsylvania’s Center for Bioethics, is a champion of neuroenhancement, but he acknowledges that “technologically, we can’t even build a dam that doesn’t break.” Theoretically, safety issues could be handled by careful oversight, but history does not inspire optimism. To illustrate the point in a debate with Caplan, University of Minnesota bioethicist Carl Elliott cited “three of the most commercially successful medical enhancements of recent years”: SSRIs, hormone replacement therapy and the diet drug combination fen-phen. All three were FDA-approved and widely used before the public learned that the first are associated with an elevated risk of suicidal thinking; the second with stroke, pulmonary emboli and breast cancer; and the third with heart-valve damage and pulmonary hypertension.

Nor does history suggest that we will establish careful rules about when, why and by whom neurotechnologies may be used. “The most relevant forerunner may be reproductive technologies, and what’s happened there is an absolute lack of oversight,” Caplan says. “We’ve got no rules about counseling, about describing the risks of side effects. We have no agreement about who can use these services. The whole thing has been treated as a Wild West free market.”

That frontier free-market mentality does not bode well for the poor. Given that we do not guarantee basic healthcare in this country or fully fund such low-tech equalizing efforts as Head Start, it’s tough to imagine that we will ensure access to neuroenhancements for those who can’t afford them. If enhancement becomes widespread, then, the advantages it confers will only exacerbate existing disparities in education and employment.

Caplan is quick to point out that this injustice lies with society, not with science. Plenty of other “technologies” magnify disparities, from private schools to test-prep courses. But neuroenhancement could make it not merely difficult but biologically impossible for the poor to compete with the wealthy. It is worth asking, as Elliott has, whether we want to widen the gap by commodifying basic human traits and inviting the pharmaceutical industry to market them.

Alongside questions about equitable access to neuroenhancers are equally grave concerns about the freedom not to use them. Already, some schools refuse to let “difficult” students attend class unless they take ADD medicine. The Defense Advanced Research Projects Agency (DARPA) funds research on modafinil because “eliminating the need for sleep while maintaining a high level of both cognitive and physical performance…will create a fundamental change in war fighting.” Certain nonmilitary employees could also be required to use neuroenhancements, or coerced into doing so; imagine, for instance, a drug that improves concentration among air-traffic controllers. And then there is the thought experiment I posed earlier: Should the law be allowed to mandate neurological interventions that decrease violence? Current case law suggests that the answer could be yes; courts have ruled that if the state can administer the death penalty, it can also intervene in ways that stop short of death (e.g., chemical castration). As Farah points out, these uses of neurotechnologies threaten to violate a kind of freedom that, to date, is barely adumbrated within the law–the freedom to have our own personalities and control our inner lives.

For most of us, though, the freedom not to enhance our brains won’t be jeopardized by the military or the courts but by a society obsessed with competition and self-improvement. As we have learned from professional sports, it is exceedingly difficult to make thoughtful choices about enhancement technologies once they are widely used. When I asked Caplan how we might protect people from pressure to alter their neurochemistry, he said, “I think you have to build niches for respecting those people, like how we deal with the Amish. They have opted away from common technologies, but we accommodate them, and they get to ride their little buggies around.” Considering that the Amish make up less than 0.1 percent of the population, and that cars are perhaps the most common and coveted technology in the country, I asked whether the analogy was apt. He thought it was: “I predict that many of these interventions will be like caffeine, which is an omnipresent enhancer. I’m not sure anybody will have a giant implant in their head anytime soon, but as for the pill that lets you stay awake longer, or the this-helps-me-focus pill, or the memory booster–I think it’s crazy not to start anticipating that.”

III. Changing Minds

Safety, equity and freedom are crucial issues in neuroethics, but even if we could protect all three, many people would still have qualms about meddling with their brains. Caplan dismisses such squeamishness as knee-jerk loyalty to an older order (“like wanting your kid to use an abacus”); quasi-sadistic faith in the principle of “no pain, no gain”; or a religious aversion to “altering God’s wise design.”

Undoubtedly, some people do resist enhancement technologies out of religious or spiritual conviction. But there are other legitimate reasons to balk–political and moral ideals not grounded in technophobia or theology. One is the belief that, as Eric Parens succinctly puts it, “means matter.” That is, the choices we make do not merely achieve a desired end; they also express an underlying value.

One such value might be commitment to social change. “It’s easier to try to solve societal problems with a technocratic fix, an electric shock or a pill, than by changing social structures and the distribution of power,” says Parens’s colleague Bruce Jennings. Put differently, it’s easier to change brains than minds. The commitment to changing minds, however, expresses the belief that minds are, in fact, changeable. That is a fundamentally political belief–the anti-Hobbesian conviction that people and nations can be reformed, that we can make citizens healthier and happier by making society more just.

That conviction is plainly liberal–and yet, so is support for science, and so is the belief that people should decide for themselves what constitutes a meaningful life. These conflicting ideals help explain why the left has thus far been unable to articulate a consistent position on biotechnology. The same is true for the right, which must struggle to balance faith in the free market, respect for “traditional values” and belief in a natural order. Small wonder, then, that resistance to enhancement has made bedfellows out of, for instance, progressive intellectual Bill McKibben, conservative political scientist Francis Fukuyama and the far-right former chair of the President’s Council on Bioethics, Leon Kass.

As unsavory as that alliance might seem, it is, in some ways, salutary. It suggests that neuroethical issues are too complex for politics-as-usual–so complex, in fact, that they uncover concerns shared by most of us, right and left. “I’m as committed as the next person to changing minds instead of bodies,” says Parens, “but I don’t want to sacrifice anyone on the altar of a noble social ideal. Sometimes, the technological fix is the right one. On this issue, I don’t see any way around ambivalence. I think the best we can do is sit down together and try to specify what we hope for, and what we dread, when we picture a future with these technologies.”

That discussion must include the broadest possible range of people, because even those who don’t have the means or inclination to avail themselves of neurotechnology will be affected by the policies that govern it. And it must pose the broadest possible questions–not “Don’t you want your kid to be smarter?” but “What kind of society do you want your kid to live in?” And “How should we expand human potential, and within what limits, and toward what ends?”

If we fail to have that discussion, we risk winding up with a social policy for neuroscience based on tactical decisions, not ethical ones; benefiting the few, not the many; and obscuring the complex relationship between personal decisions about our minds and public decisions about our culture. That is a social policy we need like a hole in the head.
