We live in the age of the fMRI machine, dazzled and bamboozled by pictures of brains “lighting up” in living Technicolor. Before these neuroscientific glory days, the mysteries of the mind had to be approached by rather less alluring methods: postmortem examination of the brains of psychiatric patients, animal experiments of legendary cruelty, and intelligence testing after pioneering brain surgeries, to name but a few. During the knife-happy decades of the mid-twentieth century, surgical treatments for seizure disorders generated especially startling insights into human brain function. In Montreal, the neurosurgeon Wilder Penfield developed an exquisitely sensitive procedure for protecting the neurological integrity of his epileptic patients, keeping them conscious on the operating table and asking them to describe their sensations as he gently stimulated their exposed brains with electrodes while a stenographer in a little glass booth transcribed every word. A byproduct of this work was “Penfield’s Homunculus,” a cartoon character whose proportions corresponded to the area of cortex devoted to each body part: huge thumbs and outsize lips (parts under voluntary control) and a titchy little penis (not so voluntary). The most famous neurological patient in recent history, the amnesiac “H.M.,” was an epileptic who had areas of his temporal lobes removed in 1953. His seizures became less frequent, but he lost the ability to form long-term memories. Precise analysis of what he could and could not remember after his operation revolutionized psychology. Overall, the wave of experimental neurosurgeries between the 1930s and the ’70s established the principle of the division of the brain into specialized task areas. This scientific phrenology now goes under the name “modularity,” and the broad outlines have been confirmed (though the details are still hotly debated) by the evidence piling up from brain-scanning techniques.
Of all the surgeries to treat epilepsy, one of the most radical is the severing of the corpus callosum, the layer of nerve fibers that connect the two hemispheres of the brain. Pioneered by a surgeon who had observed that epileptic patients with tumors in the corpus callosum tended to suffer fewer seizures, the operation was first performed in the 1940s, leaving twenty-six people walking around with brains that had been split down the middle. Astonishingly, the patients reported no side effects from the surgery beyond blessed relief from their symptoms. Despite its success, the radicalism of the treatment rendered it controversial, and almost two decades passed before a precocious Dartmouth undergraduate named Michael Gazzaniga applied for permission to test the patients. The animals in the experimental psychology lab where he had been working showed effects from the severing of the connection between their hemispheres. Surely the same must be true of humans. Amazingly, permission was granted. Fizzing with excitement, Gazzaniga drove to Rochester, New York, in a car full of nifty new equipment from the Dartmouth psychology department. When he arrived, however, it turned out that someone had gotten cold feet at the prospect of this ambitious young man probing loss of function in the patients, and he was turned away.
The following summer, opportunity knocked again. A war veteran with intractable seizures was judged to be a good candidate for the split-brain procedure, and Gazzaniga—now enrolled in graduate school—was assigned to test him before and after surgery. He landed the job despite his tender years because he had devised a test that exploited the wiring of the human visual system. The nerves that run from our eyes to the backs of our brains divide at some point in their journey: one half stays in the same hemisphere; the other goes to the opposite hemisphere. This means that different sides of the visual field in the same eye are processed by different halves of the brain. Gazzaniga reasoned that if he showed the patient images in an appropriately restricted area of the visual field—if he showed the pictures to just one hemisphere, so to speak—he might be able to figure out if the two sides of the brain were acting independently of each other. Would the patient’s language center in the left hemisphere, for example, be able to speak about objects shown only to the other side of his brain? The answer turned out to be a resounding no.
* * *
In Who’s in Charge?, the latest in a series of books about the brain aimed at a nonspecialist audience, Gazzaniga recalls his excitement at running the first of these tests, in the summer of 1962. Adrenaline pumping and heart pounding, he showed “W.J.” an image in the part of his visual field processed by the right hemisphere. When asked to describe the picture, the patient replied, “I didn’t see anything.” Gazzaniga immediately discerned the scientific importance of this response: “Not only could he no longer verbally describe, using his left hemisphere, an object presented to his freshly disconnected right hemisphere, but he did not know that it was there at all.” Just as the experimental neurosurgery on H.M. had opened up the problem of memory, W.J.’s operation promised to reveal fascinating aspects of the division of labor between the hemispheres and the nature of self-awareness. The famous psychologist Brenda Milner built her reputation on her testing of the amnesiac H.M., which she conducted when she was a graduate student. Likewise, Gazzaniga has built a distinguished career on his discovery of the split-brain phenomenon in humans. Such is the debilitating nature of epilepsy that sufferers willingly submit to radical experimental treatments. When they subsequently undergo hours of testing in the name of scientific curiosity, they become the unsung heroes of the history of the neurosciences.
It has been half a century since Gazzaniga discovered that W.J.’s left brain did not know what his right brain was doing, and this slim, accessible volume sets out some of what Gazzaniga takes to be the philosophical fruits of this revelation. The first question that work with these patients promised to illuminate was nothing less than the nature of consciousness. If W.J. and others like him could function perfectly well with what were, in effect, two separate brains, what happened to their sense of unified purpose and coherent identity? Did they have two consciousnesses? It certainly seemed like it. If a picture of a bicycle was presented to the appropriate part of W.J.’s visual field, his mouth would deny that he had seen anything, but his left hand would draw a bicycle. If the right hemisphere was shown the word “key” and the left side was shown the word “ring,” his mouth would say “ring” but his left hand would choose a key from an array of objects in front of him. The implications were profound, and Gazzaniga seems to have lost none of his youthful excitement at having a crack at one of the oldest conundrums in the philosophical book: “WHY WHY WHY was there this apparent feeling of unity?”
The question, of course, doesn’t apply only to split-brain patients. Gazzaniga’s testing of the two hemispheres separately revealed that humans have the most asymmetrical brains in the animal kingdom. Neural task specialization is a feature of all primate brains, but we appear to take it to an extreme. One of the many rewards of Who’s in Charge? is a compelling account of the evolution of our hyper-modularity. Gazzaniga explains that nervous systems run on connections between cells. As brains became larger over the course of mammalian evolution, connectivity had to become more specialized, in order not to become self-defeatingly complex. (Think of a social-networking dystopia in which everybody is compelled to “friend” everybody else, with no blocking mechanisms allowed.) To compensate for their increasing scale, primate brains—like social networks—are built according to “small world” architectural principles: dense local connections, linked by a few long-distance fibers, allowing for fast processing in specialized areas, alongside economical communication to the global network. “The end result,” Gazzaniga says, “is thousands of modules, each doing their own thing.” So how do any of us, let alone split-brain patients, extract a coherent sense of self from all this atomized complexity?
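The “small world” principle Gazzaniga invokes is easy to see in a toy model. The sketch below (an illustrative Watts-Strogatz-style construction; the parameters and function names are invented for this example and do not come from the book) builds a ring lattice of densely, locally connected nodes, then randomly rewires a small fraction of edges into long-range shortcuts, and compares the average shortest-path length before and after. A handful of shortcuts collapses the path length while leaving almost all connections local:

```python
import random

def ring_lattice(n, k):
    """Ring of n nodes, each linked to its k nearest neighbors (dense local wiring)."""
    edges = set()
    for i in range(n):
        for j in range(1, k // 2 + 1):
            edges.add((i, (i + j) % n))
    return edges

def rewire(edges, n, p, seed=0):
    """Redirect each edge with probability p to a random node: a few long-range shortcuts."""
    rng = random.Random(seed)
    new_edges = set()
    for a, b in edges:
        if rng.random() < p:
            b = rng.randrange(n)
            while b == a or (a, b) in new_edges or (b, a) in new_edges:
                b = rng.randrange(n)
        new_edges.add((a, b))
    return new_edges

def avg_path_length(edges, n):
    """Mean shortest-path length over all reachable pairs, via BFS from every node."""
    adj = {i: set() for i in range(n)}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    total, count = 0, 0
    for s in range(n):
        dist, frontier = {s: 0}, [s]
        while frontier:
            nxt = []
            for u in frontier:
                for v in adj[u]:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        nxt.append(v)
            frontier = nxt
        total += sum(dist.values())
        count += len(dist) - 1
    return total / count

n, k = 200, 8
lattice = ring_lattice(n, k)
small_world = rewire(lattice, n, p=0.05)
# Rewiring ~5% of edges sharply shortens the average path between any two nodes.
print(avg_path_length(lattice, n), avg_path_length(small_world, n))
```

The point of the toy model is Gazzaniga’s point about brains: nearly all wiring stays cheap and local, yet any module can reach any other in a few hops.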
More than a decade later, Gazzaniga began to probe this question deliberately with a new cohort of split-brain patients on the East Coast. In one experiment, the subject was asked to perform an association test. He was shown a picture on a screen and had to choose an accompanying image from a set of cards in front of him. The left hemisphere was shown an image of a chicken claw, and with his right hand he chose the card with a picture of a chicken. The right hemisphere was shown a house in the snow, and his left hand picked out the card with a shovel on it (a natural association for an East Coaster used to having to shovel his driveway). Then Gazzaniga asked him to say why he had picked out those two cards (speaking being a left hemisphere task). The split-brain patient looked down at his choices and answered, “Oh, that’s simple. The chicken claw goes with the chicken…and you need a shovel to clean out the chicken shed.” Without missing a beat, he had come up with a perfectly fluent rationalization for the unlinked associations made by the two halves of his brain. Hence one reason that the original cohort of split-brain patients reported no side effects of the operation: using the dreamer’s capacity to make a story out of random stuff, they were able to weave any right-brain anomalies seamlessly into the conscious left-brain fabric of their lives.
* * *
Drawing on dozens of results of this kind, Gazzaniga suggests that one of the modules in the human brain should go under the name of the “Interpreter.” This system—located in the left hemisphere, along with the speech center—is what concocts a coherent narrative out of all the brain’s activity, and the annals of neuroscience are now full of bizarre neurological conditions and deft experiments that reveal this constant creative act at work. Of great importance to Gazzaniga’s argument are some oft-cited experiments purportedly demonstrating that conscious awareness of making a decision registers only after the brain has primed itself for that course of action, and sometimes even after the action has been performed. Gazzaniga calls this living in “a post-hoc world” and gives an example from his childhood in the California desert. If he jumped at a rustle in the dry grass that turned out to be nothing but a breeze, he would explain to the irritated sibling onto whose foot he had just landed, “I thought I saw a snake.” His point is that there’s actually no thinking involved in the jump: it’s a reflex, executed by his nervous system via a shortcut in the brain that bypasses the whole intricate baggage of conscious decision-making. Conscious choice takes time, and time is exactly what juvenile primates don’t have if they want to survive in a dangerous environment. But we still need to make sense of what our reflexes do, and this is where the Interpreter comes in, providing the story: “I thought I saw a snake.” According to Gazzaniga, the stories the Interpreter tells tend to be bravely forward-looking, all about steering the ship of fate into uncertain waters, equipped with free will and unity of purpose; but these parables of moral courage are no more than specious retrospective rationalizations for things we do automatically.
The issue of unity of consciousness thereby leads directly to the other hoary question that drives the argument of Who’s in Charge?: How can free will exist in a deterministic world? If our brains act according to the causal laws governing all matter, in what sense can we be said to be free? And if freedom of choice is just a story we tell ourselves to make sense of our reflex actions, what happens to notions of moral responsibility? This has been a worry for philosophers for millenniums; now neuroscientists have begun to ponder what their work might mean for our cherished notions of human freedom and ethical accountability. Gazzaniga is impressed by the experiments showing that there is a crucial time lapse between our brains priming us to do something and our conscious awareness of making a decision, but he is anxious about the corrosive effect that this revelation might have on the American legal system. If we accept the implications of living in a post-hoc world, then “My brain made me do it” threatens to become a get-out-of-jail-free card available to everyone, not just to sufferers of fetal alcohol syndrome or schizophrenia.
Gazzaniga’s answer to this knotty little problem lies in his gloss on the term “emergence.” Roughly speaking, emergence tries to capture the sense in which things are more than a sum of their parts. Gazzaniga’s example is traffic. Traffic is composed of cars, and would not exist without them, but it cannot be reduced to cars, and it certainly cannot be characterized or predicted on the basis of the properties of a carburetor. It is a complex system in which weather and time of day and urban planning and individual events and scores of other elements all combine with cars to produce traffic. Traffic is traffic, and is not reducible. Similarly, Gazzaniga argues that free will does not reside in individual brains but is an emergent property of groups of brains: if brains are cars, free will is traffic. For Gazzaniga, freedom is a slightly vacuous concept. Free from what, exactly? He opines that responsibility, not freedom, is the essential concept for the continued operation of the legal system, and since responsibility has no meaning in the absence of others to whom to be accountable, it is better understood as an emergent property of social groups than as something possessed by individuals.
* * *
Gazzaniga opens up a number of intriguing avenues of inquiry with the concept of emergence, but his ideas about responsibility are something of a philosophical dead end. He is so focused on the danger of letting people off the hook that he loses sight of the opposite hazard. If moral responsibility does not reside in the individual but in the existence of the group, then presumably all members of the human family are equally responsible for their actions, irrespective of their individual mental health or state of mind. So keen is he to evict individualism from our notions of responsibility that he cites statistics showing that people with frontal lobe lesions commit violent crimes only 10 percent more frequently than the rest of us. From this he concludes that even permanent damage to the brain’s executive function should not count as a mitigating condition. But the concept of diminished responsibility is almost as much a pillar of the Anglo-American legal system as responsibility itself, and its actual erosion—as in the tabloid-stoked trend in Britain of trying minors as adults—is at least as troubling as its still-theoretical extension to all of us. Agreed, brain scans are probably not the right tools for establishing diminished responsibility, but the concept has been refined by witnesses, judges and juries ever since naturalistic accounts of mental illness began to gain traction, and it seems fairly robust as an intuition about justice.
The problem may be that Gazzaniga is using the wrong tools for the job. His work on split-brain patients obeys the basic logic of the “experimental lesion”—destroy an area of the brain, see what the animal is now unable to do, and infer the function of that area based on this new deficit. This may be a sound basis for making preliminary inferences about brain function, but it’s a shaky foundation on which to build a whole philosophical edifice. The fact that the workings of the Interpreter are most clearly revealed through an extreme pathology (severing of the corpus callosum) combined with a strenuously artificial experimental setup (show things to one hemisphere at a time) may be what gives the whole proposal its cynical cast. (Complicating matters is the infinite regress involved in interpreting the Interpreter’s interpretations, an issue that Gazzaniga passes over.) Under these conditions, the Interpreter module comes across as a self-congratulatory fabulist devoted to propping up our species’ self-esteem with the fiction of the soul’s freedom.
But if we shift the emphasis away from pathology, the pieces fall into a different pattern. Take the example from Gazzaniga’s neurologically intact childhood. He suggests that “I thought I saw a snake” is the Interpreter at work, misrepresenting reflex action as rational choice. It seems to me that it’s actually rather good shorthand for “At the perceptual stimulus of a rustle in the grass, my nervous system, primed by millenniums of natural selection to avoid serpentine hazards, sent a message via my amygdala to my legs instructing them to jump out of the way.” What is remarkable about the Interpreter is how well it works most of the time. We compulsively make patterns out of the available data, some of which are specious, but we also have self-correcting abilities, and the capacity to develop new layers of self-awareness. Philosophers tend to overestimate the part that rationality plays in human affairs, but neuroscientists seem to suffer from the opposite tendency. How is it that Gazzaniga, whose entire career has been based on the application of the scientific method, has so little regard for the workings of reason? With this denial he seems to claim rationality for himself and his fellow neuroscientists while consigning the rest of us to automaton status.
* * *
Now take the famous experiment purporting to show that voluntary choice is no such thing. The task that the participants were asked to perform was to move the hand and wrist. The subjects were encouraged to relax in a lounge chair and let their minds wander, moving only when they felt like it. With each subject wired to an EEG to detect activity in the area of the motor cortex that instructs the hand to move, the moment when the brain was ready to go with the action was recorded. The participants were also asked to note the time at which they were conscious of making the decision to move the hand. From the finding that conscious awareness happened crucial milliseconds after the brain was primed, the post-hoc nature of all our choices was supposedly confirmed. The precise nature of the task—its meaninglessness, capriciousness and whimsicality—was designed to represent “an incontrovertible and ideal example of a fully endogenous and ‘freely voluntary’ act.” If this existential languor is experimental psychology’s highest ideal of freedom, it’s clear why Gazzaniga has contempt for the concept. Anyone who has idly wondered when and how they will generate the momentum to get out of bed on a Sunday morning will recognize this as freedom of a pleasant but peculiar sort, in which the promptings of the body—a wiggling of the toes, a yawn, a stretch—often discernibly precede the directives of the intellect.
Contrast this feline ideal of liberty with the humanist notion of self-determination implicit in the consent forms that the experimental subjects doubtless had to sign. Informed consent rests on the same assumptions about the relationship between freedom and reason that ground our notions of diminished responsibility. The exercise of such freedom is not anarchic, whimsical or capricious; it is a highly structured intellectual activity, and there is a threshold of ability below which people, such as 12-year-olds, should not be held fully responsible for their actions or their signatures. Gazzaniga does make the connection between reason, freedom and responsibility, but all he does with it is wag an admonishing finger: “Criminals can follow the rules,” he lectures sternly in his one-paragraph conclusion. “They don’t commit their crimes in front of policemen. They are able to inhibit their actions when the cop walks by. They have made a choice based on their experience. This is what makes us responsible agents, or not.” For him, either you’ve got reason or you haven’t (and if you’ve got it, by God, you’re going down). The legal notion of diminished responsibility, by contrast, recognizes that moral thinking—the ability to see things from the perspective of others and then be constrained or not by this insight—is a more mature skill than the toddler’s ability to figure out how to get what he wants. The line the law draws between competency and incompetency can never be definitive, but isn’t it better to have an approximation of a standard than no standard at all?
One thing Gazzaniga stresses repeatedly is how ineradicable the notion of free will is. However much philosophers and scientists may pontificate about its nonexistence, he argues, even the most die-hard determinist cannot go about her business in the world without it. But this breezy confidence is somewhat undermined by his account of a fascinating recent experiment. Two groups of students were asked to play a game that had built into it the potential to cheat. As they waited to play, subjects were given some reading material consisting of excerpts from a text denying free will. For those of us who labor under the long neo-eugenic shadow of Francis Crick, it is delicious to learn that the passages came from The Astonishing Hypothesis, Crick’s dogmatically scientistic account of the human mind/brain relationship. The subjects were divided into two groups. One was given a passage expressing Crick’s strong views on the illusory nature of freedom; the other read a paragraph that was neutral with regard to the question. The group primed by the determinist sentiments cheated on the tests at significantly higher rates than the control group. Who’s in Charge? comes tantalizingly close to transcending the lowering effects of neuroscientific determinism, but it is thwarted at the final fence by the author’s preoccupation with damaged brains at the expense of such effective instruments as his own finely tuned cerebrum.
